Why AI Projects Fail (It's Not the AI)
New research from MIND has identified data trust as the make-or-break factor in AI initiative success. Poor data quality, it turns out, undermines even well-resourced programmes. And across the industry, people are nodding along to the obvious conclusion: we need better AI tools, bigger budgets, more capable models.
I think that's the wrong lesson entirely.
The consensus misses the point
The mainstream reading of the AI failure stats is that the technology isn't ready yet, or needs more investment to mature. According to TechRadar, 78% of UK firms have now adopted AI - but only 31% report a positive return. More than half have no clear definition of what success looks like.
Read that last part again. More than half have no definition of success. That is not a technology problem.
The problem with blaming the AI is that it assumes the technology is the variable. It isn't. The tools have never been more capable. What's missing is the foundation underneath them - the data quality, the process clarity, the success criteria - the things that need to be in place before you switch anything on.
What's actually going wrong
Here's what I think is actually going on. Businesses are adopting AI in an order that would be fine if they were buying new office furniture - and exactly wrong for building anything structural.
Most organisations pick a use case, find a tool, run a pilot, and then worry about data, governance, and measuring impact when the results disappoint. That sequence is backwards. You cannot bolt good data onto a live initiative. You cannot define success retrospectively. And you certainly cannot trust outputs that are built on inputs you haven't verified.
MIND's research makes exactly this point: it is data trust - not model performance - that separates successful AI initiatives from abandoned ones. Gartner predicts 60% of AI projects will be abandoned by 2026 due to lack of AI-ready data, and the programmes that do succeed typically allocate 50-70% of their budget to data readiness before they build anything. The abandonment rate for AI initiatives has more than doubled in a single year - from 17% in 2024 to 42% in 2025. That is not a market cooling on the technology. That is a market learning an expensive lesson about sequencing.
Why do most AI projects fail to deliver ROI?
Because businesses treat data, strategy, and operational readiness as things to sort out after the AI is running - and that is backwards. The AI is not the variable; the foundation is. Until you have clean, trusted data, clear success criteria, and well-defined processes, no tool will perform consistently. The failure is upstream of the technology.
Getting the order right
The bit I want to focus on is what "getting the foundations right" actually means in practice - because it is not as complicated as it sounds.
There are really three things that need to be in place before any AI initiative can deliver reliably. First, you need data you can trust - not perfect data, but data whose quality you understand and can account for. Second, you need a clear answer to the question "how will we know if this is working?" - defined before you start, not inferred from results after the fact. Third, you need the process you're augmenting to be understood and documented, so the AI is improving something that works rather than accelerating something that doesn't.
Think of it as an AI Operating System: strategy, data, and operations come before tooling. Every time. When those three foundations are in place, the technology has something to work with. When they're not, you're spending money to surface the mess faster.
The financial cost of getting this wrong is real. I talk through what that actually looks like for a business's P&L in this video - worth watching before you commit budget to your next initiative.
I could be wrong about where the bottleneck sits. But if the technology keeps improving and the failure rate keeps rising, the most honest interpretation is that the gap is not in the tools. It is in the work that should happen before the tools are switched on. That work - building the AI Operating System underneath your initiatives - is exactly what the AI Leaders Fellowship is designed to help senior leaders do.