There's a stat from MIT Sloan that has stuck with me since I first read it: roughly 95% of AI pilot projects fail to deliver the financial returns they projected. Ninety-five percent. If any other category of investment had that failure rate, we'd have abandoned it years ago.
But we haven't abandoned AI. And we shouldn't. The technology works. I've watched it compress hours into minutes, turn messy data into usable insights, and free up smart people to do the work they were actually hired for. So what's going on? Why does the ROI math keep falling apart?
Because we're measuring the wrong things.
The Software Lens Doesn't Fit
Most businesses evaluate AI the same way they evaluate a new software platform. How much did we spend? How many hours did we save? Can we reduce headcount? These are reasonable questions for a CRM migration. They are the wrong questions for AI.
Software replaces a process. AI changes how people work within a process. That's a fundamentally different thing, and it needs a different measurement framework.
When a business deploys AI and measures success by headcount reduction, two things happen. First, the headcount doesn't drop, because the work shifted, not disappeared. Second, leadership declares the pilot a failure. The AI gets shelved. Everyone loses.
What You Should Measure Instead
Time-to-first-value. How quickly does someone get a usable result? Not at the end of a six-month implementation, but this week, the first time someone sits down and uses the thing. If a new hire can draft a client email in their first week instead of spending three weeks learning templates, that's time-to-first-value.
Decision quality. Are better decisions being made? When your sales team has an AI-generated summary before a call, they ask better questions. When your operations lead has a clean data summary instead of a raw spreadsheet, they catch problems sooner. Sam Ransbotham at Boston College found that organisations measuring "decision improvement" were significantly more likely to scale AI past the pilot phase.
Capacity creation. What can your team do now that they couldn't before? Before AI, your marketing person spent 60% of their week formatting reports. After AI, they spend 20% on that and 40% on actual strategy work. You didn't save money. You unlocked capacity that didn't exist before.
A Simpler Scorecard
When someone asks me "how do I know if this AI thing is working?" I give them the simplest version I can.
Pick a task your team does regularly. Time it. Deploy the AI tool. Time it again.
If something that took four hours now takes thirty minutes, you have your answer. You don't need a dashboard. You need a before number and an after number.
One of our early wins looked exactly like this. A weekly reporting task that consumed most of someone's Friday afternoon got compressed into something they could finish before lunch. From about four hours to under forty minutes. Nobody got laid off. The person now spends Friday afternoons on work that actually moves the business forward.
That's the ROI. It just doesn't show up on a traditional cost-savings spreadsheet.
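If you want to put a rough number on a win like that, the arithmetic fits in a few lines. Here's a back-of-envelope sketch in Python using the approximate figures from the example above; the 48 working weeks per year is my own assumption, not something from the original measurement.

```python
# Back-of-envelope: hours recovered from one weekly reporting task.
before_hours = 4.0        # roughly four hours, per the example above
after_hours = 40 / 60     # under forty minutes, expressed in hours
weeks_per_year = 48       # assumption: a working year minus holidays and leave

saved_per_week = before_hours - after_hours
saved_per_year = saved_per_week * weeks_per_year

print(f"Recovered per week: {saved_per_week:.1f} hours")
print(f"Recovered per year: {saved_per_year:.0f} hours "
      f"(about {saved_per_year / 40:.0f} full working weeks)")
```

For this one task, that works out to roughly 160 hours a year, or about four working weeks of capacity handed back to one person.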
Stop asking "did this save us money?" and start asking "did this save us time, improve our decisions, or let us do something we couldn't do before?" You'll get a much more honest picture. And that picture is usually pretty good.