What we’re seeing more and more is that this isn’t an adoption problem. It’s a visibility problem. When a CIO says, “we haven’t formally deployed agents yet,” that’s usually true. Nothing has been rolled out in a structured, organisation-wide way, but that doesn’t mean it’s not already happening.
AI agents aren't being deployed. They're emerging.
In most businesses right now, someone has a task that takes too long or feels unnecessarily repetitive. They’ve found a tool, built something small to help with it, and it’s saving them time. It might be something they already had access to, or something they’ve signed up to themselves.
In isolation, that’s exactly the behaviour you want. It’s people finding better ways to work.
The challenge is what happens next.
If it works, it spreads. One person shares it, then a couple more start using it, and before long it becomes part of how a team is getting things done. At that point, it’s no longer just personal productivity. The organisation is starting to depend on it, whether it realises it or not. And that’s where the risk begins.
Because those agents haven’t been built with any real consideration for how they should operate at an organisational level.
They might be pulling from data that isn’t approved or isn’t current. They might be running through tools or platforms that sit completely outside IT’s visibility. There’s usually no clear ownership, no version control, and no real sense of how the agent should be managed over time. And because they spread quite naturally, those gaps spread with them.
The hidden behaviour behind AI adoption
There’s also a more human side to this that doesn’t get talked about.
If you’ve found a way to save a couple of hours a day using an AI agent, do you talk about it openly? Or do you just quietly use it until you’re sure how it’s going to land?
If you’ve automated part of your role, or even part of someone else’s, do you put your hand up and say that? Or do you keep it to yourself? Most people sit somewhere in the middle.
They’re using these tools with good intent. They’re trying to be more effective, to keep up, to get through what’s on their plate. But there’s often a bit of uncertainty around what it means for their role, their team, and the people around them.

So the behaviour that starts to emerge is what you might call quiet, or “closet,” innovation. Useful things are being built. Genuine improvements are being made. But they’re not always visible to the organisation. This isn’t surprising when you look at the broader context.
Teams are being asked to do more with less. Roles are being stretched, and there’s an expectation, whether it’s spoken or not, that people will find ways to be more efficient.
When someone finds a way to take friction out of their day using AI, they’ll take it. Even if that means stepping outside the boundaries of what the organisation thinks is happening.
In a lot of the workshops we run, we ask a room how many people are already using AI in their day-to-day work, and we get a sea of hands. Then you look at the leadership team at the front of the room, and they’re often surprised, because nothing has been formally rolled out.
Individual value. Organisational risk.
There’s also a mismatch in how these things are being judged.
At an individual level, an agent is judged on the output. If it saves time and produces something useful, then it’s doing its job. From an organisational perspective, that’s only part of the picture.
What data is it using and where is that data going? What does it cost when it’s used at scale, what decisions is it influencing, and are we comfortable with that?
An agent can look effective on the surface while introducing risk underneath. And that risk doesn’t usually show up straight away. It builds over time.
We’ve seen versions of this before. Access databases that quietly ended up running whole parts of a business. Spreadsheets that became critical systems that no one fully understood. More recently, low-code apps that spread faster than anyone expected.
Each time, the issue wasn’t that people were doing the wrong thing. It was that something was growing in ways the organisation couldn’t fully see or manage. This feels very similar, just with a different type of technology.
The difference now is that these systems don’t just store or process information. They act on it. They generate outputs, influence decisions, and in some cases interact directly with customers or core business processes, which means the stakes are higher.
A more useful question than “how do we roll out AI?” is this:
Are you confident that no one in your organisation has already found their own way to use these tools to get their work done? And if they have, do you have any real visibility of where that’s happening, what those agents are connected to, or how they’re being used?
Most organisations can’t answer that with confidence. And until they can, they’re not really in control of how agents are operating inside the business. They’re just hoping they are.
If any of this feels familiar, it’s probably because you’re already closer to it than you think.
The challenge isn’t deciding whether to adopt agents. It’s understanding how to operate them once they’re already part of how work gets done. That’s where most organisations are starting to feel the gap.
We’ve explored this in more detail in our Agent Operations Centre white paper, which looks at what it actually takes to bring visibility, control, and structure to agent use at scale.
Drew Alexander | Head of Microsoft AI
Drew leads customers through AI‑driven transformation across data, platforms and business systems. He works closely with organisations to turn responsible AI strategy into practical, enterprise‑ready solutions.