Agent Sprawl: The AI Governance Gap Boards Cannot Ignore
AI agents are getting a lot of attention right now. Not as a concept any more, but as a practical reality. McKinsey's latest State of AI Trust report paints a picture of organisations moving quickly from pilots to production-grade agent deployments. OutSystems research published this week found that 96% of organisations are already using AI agents in some capacity. That is not early adoption. That is mainstream.
But here is the bit I want to focus on. Almost nobody is governing these agents properly. And that gap is about to become a serious problem.
The context you need
Agent sprawl is the uncontrolled proliferation of AI agents across an enterprise. It happens when different teams deploy agents to solve their own problems without a shared strategy, shared data infrastructure, or centralised oversight. If that sounds familiar, it should. This is shadow IT all over again, except the agents are not just accessing data. They are making decisions, triggering actions, and interacting with customers and systems autonomously.
In the 2010s, shadow IT meant employees using unapproved cloud tools and personal devices. It created security headaches, but the tools were fundamentally passive. Agent sprawl is different in kind, not just degree. An ungoverned spreadsheet sits there. An ungoverned AI agent acts. It sends emails, processes claims, adjusts pricing, approves requests. The risk profile is categorically higher because the speed and autonomy are categorically higher.
The AI Optimist analysis
There are a few things happening here that are worth unpacking.
First, the governance gap is widening, not closing. McKinsey reports that only about one-third of organisations have reached mature governance levels for AI. That figure has improved since 2025, but agent deployment has accelerated far faster than the governance infrastructure behind it. OutSystems found that 94% of organisations are concerned about agent sprawl increasing complexity, technical debt, and security risk. The concern is near-universal. The response is not.
Second, there is a genuine accountability vacuum. When a human employee makes a bad decision, the accountability chain is clear: their manager, their department head, ultimately the board. When an AI agent makes a bad decision, who owns that? The team that deployed it? The vendor that built it? The IT department that approved the platform? In most organisations today, the honest answer is: nobody has decided yet. And that is not a theoretical problem. It is a governance failure that is happening now, in production, at scale.
Third, the organisations getting this right are following a clear sequence. The pattern I keep seeing is: visibility first, then accountability, then autonomy. Visibility means knowing what agents you have, where they operate, what data they access, and what decisions they make. Accountability means assigning clear ownership for each agent's behaviour, with defined escalation paths when something goes wrong. Autonomy comes last, not first, and it is earned through demonstrated reliability within governed boundaries. McKinsey's data supports this. High-performing organisations are far more likely to have defined human-in-the-loop validation processes: 65% compared with 23% for everyone else. That is not a coincidence. The organisations that trust their agents most are the ones that built the governance first.
What connects these three dynamics is a single insight. The speed of agent deployment has outrun the organisational capacity to govern it. And the fix is not to slow down deployment. It is to build governance that can keep pace.
How do you build a governance framework for AI agents?
Start with visibility, not policy. Most governance frameworks fail because they begin with rules before anyone knows what they are governing. Map every agent in your organisation: what it does, what data it touches, what decisions it makes, who deployed it. Then assign accountability, giving each agent a named human owner responsible for its behaviour. Only then grant autonomy, and do so incrementally, expanding what agents are permitted to do as your confidence in the governance model grows. The sequence matters. Visibility, accountability, autonomy. In that order.
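To make that sequence concrete, here is a minimal sketch of what an agent registry could look like in code. Everything in it is illustrative: the names AgentRecord, AutonomyLevel, and AgentRegistry are assumptions invented for this example, not part of any vendor platform or standard. What matters is the ordering it enforces: an agent must be inventoried before it runs, must have a named owner before its autonomy can rise, and earns autonomy one level at a time.

```python
# Minimal sketch of an agent registry enforcing the
# visibility -> accountability -> autonomy sequence.
# All names here are illustrative, not from any vendor framework.
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    OBSERVED = 0    # agent runs, every action reviewed by a human
    SUPERVISED = 1  # agent acts, consequential actions need approval
    AUTONOMOUS = 2  # agent acts freely within governed boundaries


@dataclass
class AgentRecord:
    name: str
    purpose: str                # what it does
    data_scopes: list[str]      # what data it touches
    decision_types: list[str]   # what decisions it makes
    owner: str | None = None    # named human accountable for its behaviour
    escalation_path: str | None = None
    autonomy: AutonomyLevel = AutonomyLevel.OBSERVED


class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # Visibility first: every agent is inventoried before it operates.
        self._agents[record.name] = record

    def assign_owner(self, name: str, owner: str, escalation_path: str) -> None:
        # Accountability second: a named owner with a defined escalation path.
        rec = self._agents[name]
        rec.owner, rec.escalation_path = owner, escalation_path

    def raise_autonomy(self, name: str, level: AutonomyLevel) -> None:
        # Autonomy last, and only for agents with an accountable owner.
        rec = self._agents[name]
        if rec.owner is None or rec.escalation_path is None:
            raise PermissionError(f"{name} has no accountable owner yet")
        if level > rec.autonomy + 1:
            raise PermissionError("autonomy is earned one level at a time")
        rec.autonomy = level

    def unowned_agents(self) -> list[str]:
        # The board question: which agents would nobody answer for?
        return [n for n, r in self._agents.items() if r.owner is None]
```

Even a skeleton this small gives a board a concrete answer to the inventory question below: unowned_agents() returns the list of agents that nobody would currently answer for.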
What this means for boards
If you sit on a board or lead a senior team, there are two questions worth asking right now. The first is simple: how many AI agents are operating across your organisation today, and does anyone have a complete inventory? If the answer is uncertain, you have a visibility problem. The second question is harder: if one of those agents made a consequential error tomorrow, who would be accountable, and what would the response process look like?
These are not technology questions. They are governance questions. And they belong at board level because the risks sit at board level. Agent sprawl does not announce itself. It accumulates quietly until an incident makes it visible, and by then the exposure is significant.
Where this is heading
This is a complex, evolving situation. Regulators are beginning to move. Singapore unveiled the world's first dedicated governance framework for agentic AI at the World Economic Forum earlier this year. The EU AI Act is being reinterpreted through an agentic lens. But regulation will follow practice, not lead it. The organisations that build governance now will shape the standards, not just comply with them.
In a recent video, I talk more about the headspace leaders need in order to make these kinds of strategic decisions. The point is not that this is easy. The point is that slowing down to get governance right is what makes it possible to move faster later. That is the pattern. And it is one worth understanding if you are a board director navigating AI strategy.
The AI Leaders Fellowship is built around exactly this kind of challenge: helping senior leaders build the frameworks that make AI safe enough to scale.