TL;DR: A 28-person software consultancy in Manchester was calculating revenue recognition manually across 14 concurrent projects with four contract types. The finance director cross-referenced three disconnected systems (project management, contracts, accounting) into a spreadsheet he'd built four years ago when the firm had 6 projects and two contract types. Month-end close consumed 1.5 days. Three restatements in 24 months, all timing errors from stale data. We built a four-stage agent that consolidates data daily, encodes contract recognition rules per project type, calculates monthly recognition with discrepancy flags, and reconciles against invoiced and received amounts. Month-end: 1.5 days down to 2 hours. Restatements: zero in the first two quarters. Agent cost: £220/month.

Month-End

Every month-end, James closes his office door.

Fourteen active projects. Four contract types: fixed-price (milestone-based recognition), time-and-materials (hourly rate, recognised on approval), retainer (straight-line with quarterly true-ups), and blended (fixed deliverables plus ad-hoc time, split into two components and recognised under different rules).

He opens the spreadsheet. He opens Teamwork. He opens the contracts folder on SharePoint. He opens Xero.

For each project, he determines: which milestones completed this month (checking Teamwork, sometimes calling the project manager because the status hasn't been updated, sometimes getting a confident answer and sometimes getting a pause that means "I'll update it now"). What percentage of fixed-price work is done (hours burned against budget, cross-referenced with the PM's estimate of actual completion, which is sometimes optimistic and sometimes genuinely forgotten). Which T&M hours were logged, approved, and invoiced. Which retainer periods elapsed.

He enters numbers into his spreadsheet. He reconciles against Xero. He produces the recognition schedule.

One and a half days. Every month. Three data sources that hold no awareness of each other's existence, bridged by one man and a spreadsheet held together by institutional knowledge and four years of accumulated tab structures (no two tabs identical, understood only by James, formatted in a style that future archaeologists would find challenging).

The spreadsheet was built when the firm had six projects and two contract types. The firm now has 14 projects and four contract types. The spreadsheet was never redesigned. It just grew. This is, when you think about it, a revenue recognition system whose accuracy depends on one person's ability to hold 14 sets of contract terms in his head while simultaneously cross-referencing three systems that don't share a single data field. That is a description of the process, not a compliment.

Last year, this process produced three restatements. James can do maths. The maths wasn't the problem. The milestone data was three weeks old by the time he used it.

The Firm

Software consulting firm in Manchester, UK. Twenty-eight employees. Annual revenue: £2.1M across 12 to 16 concurrent projects at any given time. Professional services. Fixed-price builds, T&M staff augmentation, retainer support contracts, and blended engagements that combine elements of all three.

James is the finance director. His loaded cost is roughly £45 per hour. He spends approximately 12 hours (1.5 working days) on revenue recognition per month-end close. That's £540 per month, £6,480 per year, in James's time alone.

The partners spend additional time reviewing and attesting: roughly £3,200 per year. Total annual cost of the manual recognition process: approximately £9,680 in direct labour.

The three restatements in 24 months add a harder-to-quantify cost: board confidence, audit risk, and the opportunity cost of James spending 18 days per year on month-end close administration instead of financial analysis, forecasting, and the strategic work the firm hired a finance director to do. Estimated total annual cost: £12,000 to £18,000.

The tool stack: Teamwork for project management (project status, milestones, timesheet data). SharePoint for document storage (contracts with billing structures, payment terms, statements of work). Xero for accounting (invoiced amounts, payment receipts, journal entries). And James's spreadsheet, which serves as the bridge between the three. Built four years ago. One tab per project. No integration. No automation. No backup plan for the day James is ill during month-end (which happened once; the close was delayed by four days and the partners described the experience as "educational").

What Existed Before

James's recognition process follows a pattern that would be familiar to any finance professional in project-based services, and troubling to anyone who examined it closely.

He opens each project tab in his spreadsheet. He checks Teamwork for milestone completions. For fixed-price projects, he needs percentage-of-completion data, which requires hours burned against budget, cross-referenced with the PM's assessment of actual progress. These two numbers should agree. They often don't. When they disagree, James calls the PM. The PM sometimes knows. Sometimes estimates. Sometimes says "let me check and get back to you" (which, during month-end, means James notes the discrepancy and moves on, returning to it if he remembers, which he sometimes doesn't).

For T&M projects, he checks timesheet approvals in Teamwork and reconciles against Xero. Timesheets are weekly by policy, fortnightly in practice. James recognises revenue based on approved hours, which means the recognition is only as current as the approvals.

For retainers, straight-line recognition with quarterly true-ups. For blended contracts, he splits the engagement into fixed and variable components, each recognised under different rules, a process that requires re-reading the contract PDF because the split ratios aren't stored anywhere except in the contract itself and James's memory.

The whole process takes 1.5 days. The data ranges from "updated yesterday" to "updated three weeks ago." The restatements were all timing errors. Revenue recognised based on milestones assumed complete that weren't yet confirmed. The calculations were correct. The inputs were stale. The Monday morning bottleneck here isn't a single morning; it's a day and a half of every month-end, building a picture from fragments.

What We Designed

Four stages. All designed on top of the existing stack.

Stage 1: Data consolidation

The agent connects to Teamwork (projects, milestones, timesheets), SharePoint (contracts, billing terms), and Xero (invoiced amounts, payments received). It pulls data daily. Not monthly. Daily.

This sounds simple. It is simple. It's also the single change that eliminates most of the problem. James's three restatements were timing errors. With daily data, timing errors don't accumulate to the point of requiring restatement. The milestone that was assumed complete but wasn't? The agent knows its current status as of last night, not as of three weeks ago.

Stage 2: Contract logic engine

Each project's recognition rules, encoded from the contract. Fixed-price: percentage-of-completion based on hours and milestone triggers. T&M: approved hours multiplied by rate. Retainer: straight-line with true-up triggers. Blended: split into components, each recognised under its own rules.

James walked through each contract type once during setup. Two hours. The agent applies the rules continuously. The split ratios that James used to look up in contract PDFs are now encoded per project.
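The four rule sets can be sketched as a single dispatch function. This is a minimal illustration, not the agent's actual implementation; the field names (`value`, `fixed_component`, `recognised_to_date`, and so on) are assumptions for the sketch.

```python
def recognise(contract, period):
    """Return revenue to recognise for one contract in one period (sketch)."""
    if contract["type"] == "fixed_price":
        # Percentage-of-completion: count confirmed milestones only,
        # less what has already been recognised.
        pct = sum(m["weight"] for m in contract["milestones"] if m["confirmed"])
        return contract["value"] * pct - contract["recognised_to_date"]
    if contract["type"] == "tm":
        # Time-and-materials: approved hours multiplied by the rate.
        return sum(t["hours"] * contract["rate"]
                   for t in period["timesheets"] if t["approved"])
    if contract["type"] == "retainer":
        # Straight-line over the term; quarterly true-ups handled separately.
        return contract["value"] / contract["term_months"]
    if contract["type"] == "blended":
        # Split into components, each recognised under its own rules.
        return (recognise(contract["fixed_component"], period)
                + recognise(contract["tm_component"], period))
    raise ValueError(f"unknown contract type: {contract['type']}")
```

The point of encoding the rules this way is that the split ratios and milestone weights live in data per project, not in a PDF and one person's memory.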

Stage 3: Monthly recognition calculation

At month-end, the agent produces the recognition schedule: revenue to recognise per project, cumulative recognised to date, remaining contract value, variances from prior month. Unconfirmed milestones get flagged (not assumed complete, which was the source of two of the three restatements). Hours variance against budget gets flagged. Billing pace diverging from recognition pace gets flagged.

James reviews the flagged items. Typically four to six items per month. The clean items calculate automatically. James handles the judgment calls. The agent handles the arithmetic and the data retrieval.
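The flagging pass looks roughly like this. The thresholds (10% hours variance, 15% billing divergence) are illustrative assumptions, not the deployed values, and the field names are hypothetical.

```python
def flag_project(p):
    """Return the review flags for one project's month-end snapshot (sketch)."""
    flags = []
    # Milestones due this month but not confirmed in the PM tool are
    # flagged for review, never assumed complete.
    for m in p["milestones"]:
        if m["due_this_month"] and not m["confirmed"]:
            flags.append(f"unconfirmed milestone: {m['name']}")
    # Hours burned running ahead of budget.
    if p["hours_burned"] > p["hours_budget"] * 1.10:
        flags.append("hours variance: more than 10% over budget")
    # Billing pace diverging from recognition pace.
    gap = abs(p["invoiced_to_date"] - p["recognised_to_date"])
    if gap > 0.15 * p["contract_value"]:
        flags.append("billing pace diverges from recognition pace")
    return flags
```

Projects that return an empty list calculate automatically; the rest land in James's review queue.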

Stage 4: Reconciliation and reporting

Compares recognised revenue against invoiced and received amounts. Flags deferred revenue, unbilled revenue, and contract-level variances. Produces Xero journal entry data and a board-ready summary with variance commentary.
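The reconciliation itself reduces to three figures per contract. A minimal sketch, assuming the agent already holds recognised, invoiced, and received totals:

```python
def reconcile(recognised, invoiced, received):
    """Compare recognised revenue against invoiced and received amounts (sketch)."""
    deferred = max(invoiced - recognised, 0.0)   # billed ahead of delivery
    unbilled = max(recognised - invoiced, 0.0)   # delivered ahead of billing
    outstanding = invoiced - received            # invoiced but not yet paid
    return {"deferred": deferred,
            "unbilled": unbilled,
            "outstanding": outstanding}
```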

The board pack that used to take James a full day to assemble from the recognition spreadsheet now generates from the agent's output in under an hour.

What We Learned

Daily sync eliminated the root cause, not the symptoms. The three restatements were all timing errors. Stale milestone data. Late timesheet approvals. The agent didn't calculate differently from James. It calculated from current data. The same formula applied to fresh inputs produces a different (correct) answer. The fix was the freshness, not the logic.

The contract logic encoding took one session. Two hours with James. Four contract types. He walked through each recognition method, the edge cases, and the rules he applied that weren't written down anywhere (the blended contracts had an informal rule about when to split recognition that James had developed over time and never documented). The encoding captured rules that would have been lost if James left. The firm now has documented recognition policies for each contract type. They didn't before.

Six milestone discrepancies were caught in the first two months that would have been assumed complete under the old process. In each case, the PM hadn't updated the milestone status in Teamwork. Under the old process, James would have called, sometimes caught it, sometimes assumed completion based on timeline. The agent flagged instead of assuming. James confirmed with the PM. Four were genuinely complete (just not updated). Two weren't. Those two would have been recognised early. The agent prevented the restatement before it happened.

The partners noticed. The board pack for the first quarter with the agent was described by the managing partner as "the first time I've trusted the revenue numbers without mentally adding a margin of error." Three restatements in two years had eroded confidence in the recognition figures. The board had been applying an informal discount to James's numbers. They stopped.

The Numbers

| Metric | Before | After |
| --- | --- | --- |
| Month-end recognition time | 1.5 days (12 hrs) | 2 hours |
| Data freshness | 1–3 weeks old at close | Daily sync |
| Restatements (prior 24 months) | 3 | 0 (first two quarters) |
| Milestone discrepancies caught | Variable (call the PM) | 6 flagged, 2 prevented early recognition |
| Board confidence in revenue figures | "Mostly right" | Trusted without qualification |
| Agent cost/month | N/A | £220 |

Annual savings: £12,000 to £18,000. James's time recovered, partner review time reduced, restatements eliminated, and month-end close compressed from a 1.5-day event to a 2-hour review.

£220 per month. £2,640 per year. Against £12,000 to £18,000 in recovered time and eliminated errors.

James described his first month-end with the agent: "I opened the dashboard instead of the spreadsheet. The numbers were already there. I spent two hours reviewing flags and posting the journal. Then I didn't know what to do with the rest of the day." He spent it on cash flow forecasting. Which is, by any reasonable assessment, a better use of a finance director's time than cross-referencing three systems that should have been connected four years ago.

The Pattern

If you're running a project-based business and your revenue recognition involves one person, a spreadsheet, and three disconnected data sources reconciled monthly, your Monday morning bottleneck isn't a single morning. It's 1.5 days of every month-end, repeated twelve times per year, producing numbers whose accuracy depends on how recently someone updated a milestone status.

Your finance director can do the maths. The maths was never the problem. The data freshness is. And the gap between "data last updated" and "data used for recognition" is where restatements live.

Want to see if your month-end close has a data freshness problem? The AI Workflow Diagnostic takes 10-15 minutes and shows you where stale data is costing you accuracy.

Want to see 25 agent architectures across different industries? Download Unstuck. It includes blueprints for audit prep, expense enforcement, cash flow, documentation, and more.

by TG
for the AdAI Ed. Team
