When we rolled out Google Gemini across the organisation, the initial reaction wasn't excitement. It was fear, scepticism, and confusion. People thought AI would replace them. They didn't trust the outputs. They didn't know where to start. The technology was powerful and readily available, but that didn't matter. What mattered was whether people would actually use it.
Over six months, I coached every business function on AI adoption. The result was a 4x increase in monthly process invocations. We went from a handful of curious early adopters to organisation-wide engagement. Later, I drove the business case that secured C-suite commitment to AI tooling and rolled out Cursor AI across engineering, boosting developer productivity by 33%. We're now progressing to multi-agent orchestration with Claude Code.
The technology was never the hard part. The real challenge was changing how people work.
Why Top-Down AI Mandates Fail
Most organisations approach AI adoption the same way: buy a tool, announce it in an all-hands, maybe run a generic training session, then wonder why adoption stalls at 15%. The problem is treating AI like infrastructure rather than a behaviour change initiative.
When leadership mandates AI usage without addressing the underlying concerns, you get compliance theatre. People tick boxes, submit the required screenshots, but revert to their old workflows the moment the spotlight moves elsewhere. They're not being difficult—they're being rational. If you haven't convinced them that this new way of working is better, faster, or more valuable than their current approach, why would they change?
The fear is real and legitimate. People worry about job security. They worry about looking incompetent if they can't figure out the new tools. They worry that AI will expose their lack of technical depth. These aren't concerns you can dismiss with a reassuring email from the CEO. You have to address them directly, repeatedly, and with specificity.
Function-Specific Training, Not Generic Demos
The single biggest mistake I see is running one-size-fits-all AI training. You gather everyone in a room, show them ChatGPT writing a limerick or generating a marketing email, and expect them to extrapolate how this applies to their actual job. It doesn't work.
What worked was designing training function-by-function. Not "here's how AI works in theory", but "here's how you, in sales, can use this to qualify leads faster" or "here's how you, in finance, can automate reconciliation checks". The use cases were specific, practical, and directly tied to the work people were already doing.
For the engineering team, I didn't show them generic code generation examples. I walked them through refactoring an actual module from our codebase using Cursor AI. I demonstrated how to use AI for writing tests, debugging production issues, and exploring unfamiliar libraries. I showed them how to integrate AI into their existing workflows—VS Code extensions, terminal commands, Git hooks—not as a replacement for their skills but as an amplifier.
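To make that concrete, here is a minimal sketch of the kind of workflow integration we demonstrated: a Git pre-commit hook that pipes the staged diff to an AI reviewer. The ai-review command is a hypothetical stand-in for whatever CLI your tooling actually provides; the pattern, not the specific tool, is the point.

```python
#!/usr/bin/env python3
# Minimal sketch of a Git pre-commit hook (save as .git/hooks/pre-commit
# and mark it executable). 'ai-review' is a hypothetical CLI stand-in.
import subprocess
import sys

# Collect the staged changes exactly as Git would commit them.
diff = subprocess.run(
    ["git", "diff", "--cached"],
    capture_output=True, text=True, check=True,
).stdout

if not diff.strip():
    sys.exit(0)  # Nothing staged; let the commit through.

# Hand the diff to the (hypothetical) AI reviewer. The feedback is
# advisory: we surface it but never block the commit, because the
# developer remains the final reviewer.
try:
    result = subprocess.run(["ai-review", "--stdin"], input=diff, text=True)
    if result.returncode != 0:
        print("AI reviewer flagged issues above; review before pushing.",
              file=sys.stderr)
except FileNotFoundError:
    pass  # Tool not installed; never block a commit over tooling.

sys.exit(0)
```

Keeping the hook advisory rather than blocking was a deliberate choice: it positions the AI as an assistant, not a gatekeeper, which heads off exactly the compliance theatre described earlier.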
For non-technical teams, the focus was different. I taught them prompt engineering fundamentals: how to be specific, how to provide context, how to iterate on outputs. More importantly, I taught them how to evaluate AI outputs critically. Just because the AI sounds confident doesn't mean it's correct. Just because it's fast doesn't mean it's good enough to ship. Critical thinking became the core skill, not blind trust.
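To illustrate, here is a minimal sketch of the prompt structure we taught: name the role, state the task, supply the context, define the output, and leave the model room to admit ignorance. The team names and values are invented examples, and the commented-out complete() call stands in for whichever model API you use.

```python
# Sketch of the prompt structure taught to non-technical teams:
# be specific, provide context, define the output, then evaluate.
PROMPT_TEMPLATE = """\
Role: You are assisting a {team} analyst.
Task: {task}
Context: {context}
Output format: {output_format}
Constraint: if information is missing, say so rather than guessing.
"""

def build_prompt(team: str, task: str, context: str, output_format: str) -> str:
    return PROMPT_TEMPLATE.format(
        team=team, task=task, context=context, output_format=output_format
    )

prompt = build_prompt(
    team="finance",
    task="Summarise the month-on-month variance in the figures below.",
    context="Q3 management accounts, values in GBP thousands: ...",
    output_format="Three bullet points, each citing the relevant line item.",
)
# draft = complete(prompt)  # placeholder for your model API of choice
# The evaluation step stays human: check every cited figure against
# the source before the summary goes anywhere.
```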
Removing Misconceptions
One of the biggest barriers to adoption is the mythology around AI. People either overestimate it—expecting magic—or underestimate it, dismissing it as a party trick. Both extremes prevent productive use.
I spent significant time in every training session debunking misconceptions. No, AI won't replace you, but someone using AI might. No, you don't need to be technical to use this effectively. No, it's not cheating to use AI for grunt work—it's strategic. Yes, you still need to check its work. Yes, it will make mistakes, and that's fine if you catch them.
The most effective framing I found was positioning AI as an intern—talented, fast, eager, but requiring direction and oversight. You wouldn't send an intern into a client meeting unsupervised. You wouldn't ship their work without reviewing it. But you also wouldn't refuse to delegate to them out of pride. The same principles apply to AI.
Measuring Adoption Meaningfully
You can't improve what you don't measure, but most organisations measure AI adoption poorly. Tracking "number of logins" or "queries run" tells you almost nothing about whether people are deriving value.
We tracked process invocations—specific, repeatable workflows where AI delivered measurable outcomes. We didn't care if someone asked Gemini a random question. We cared if they used it to draft a customer proposal, generate a financial summary, or automate a data validation check. The 4x increase in monthly process invocations wasn't about curiosity; it was about integration into daily work.
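In principle, the measurement is simple once each AI interaction is tagged with the workflow it belongs to. The event shape below is illustrative rather than our actual schema:

```python
# Sketch: count monthly process invocations, ignoring ad-hoc queries.
# The event records are invented; ours tagged each AI interaction
# with the named workflow it served, if any.
from collections import Counter

events = [
    {"month": "2024-03", "workflow": "customer_proposal"},
    {"month": "2024-03", "workflow": None},  # ad-hoc curiosity query
    {"month": "2024-03", "workflow": "data_validation"},
    {"month": "2024-04", "workflow": "customer_proposal"},
]

# Only events tied to a named, repeatable workflow count.
invocations = Counter(e["month"] for e in events if e["workflow"])
print(invocations)  # Counter({'2024-03': 2, '2024-04': 1})
```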
For engineering, we measured productivity differently. We tracked pull request velocity, code review turnaround, and self-reported time savings. The 33% productivity boost from Cursor AI wasn't a vanity metric—it was developers shipping features faster and spending less time on boilerplate.
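Review turnaround, for instance, falls out of pull request timestamps with almost no machinery. The records below are invented; in practice the data would come from your Git hosting provider's API:

```python
# Sketch: median hours from a pull request opening to first review.
from datetime import datetime
from statistics import median

prs = [  # invented records; source these from your Git host's API
    {"opened": "2024-04-01T09:00", "first_review": "2024-04-01T15:30"},
    {"opened": "2024-04-02T11:00", "first_review": "2024-04-03T10:00"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

turnarounds = [hours_between(p["opened"], p["first_review"]) for p in prs]
print(f"Median review turnaround: {median(turnarounds):.1f}h")
```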
But metrics alone don't tell the full story. Qualitative feedback mattered just as much. Were people voluntarily sharing AI use cases in standups? Were they asking for more training? Were they experimenting beyond the prescribed workflows? Those signals indicated genuine adoption, not compliance.
Driving the Business Case
None of this happens without C-suite buy-in, and C-suite buy-in requires a business case. Not a fluffy "AI is the future" pitch, but a concrete ROI projection with measurable outcomes.
When I built the case for AI tooling, I focused on three areas: time savings, quality improvements, and competitive differentiation. I quantified the hours saved per week across teams. I projected the revenue impact of faster sales cycles and more accurate forecasting. I highlighted the retention risk of not offering engineers modern tooling.
The key was making it specific. Not "AI will make us more efficient", but "we'll save 15 hours per week in the finance team alone, equivalent to £40k annually". Not "developers will be happier", but "we'll reduce onboarding time for new engineers by 20% and improve retention in a competitive hiring market".
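The arithmetic behind a figure like that is deliberately transparent. Here is the back-of-envelope form; the loaded hourly rate is an illustrative assumption, not our actual number:

```python
# Back-of-envelope behind a claim like the finance example.
hours_saved_per_week = 15
working_weeks_per_year = 46   # allows for leave and public holidays
loaded_hourly_rate_gbp = 58   # salary plus overheads; assumed figure

annual_saving = hours_saved_per_week * working_weeks_per_year * loaded_hourly_rate_gbp
print(f"~£{annual_saving:,} per year")  # ~£40,020 per year
```

Showing the working matters: a CFO can challenge any of the three inputs, and the case survives the challenge because the assumptions are explicit.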
Once leadership saw the numbers and understood the implementation plan, approval was straightforward. The hard part wasn't convincing them AI was valuable—it was showing them we had a realistic plan to capture that value.
The Road Ahead: Multi-Agent Orchestration
We're not done. Cursor AI was a step change for engineering productivity, but the next frontier is multi-agent orchestration. Tools like Claude Code enable workflows where multiple AI agents collaborate on complex tasks—one handling research, another drafting code, a third reviewing for security vulnerabilities.
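In outline, the pattern looks something like the sketch below. The call_agent function is a placeholder rather than Claude Code's actual API; what matters is the shape of the pipeline, with each agent given a narrow brief.

```python
# Sketch of sequential multi-agent orchestration. `call_agent` is a
# placeholder, not Claude Code's real API; wire in your own model call.

def call_agent(role: str, instructions: str, material: str) -> str:
    """Stand-in for a model call with a role-specific system prompt."""
    raise NotImplementedError("connect your model API here")

def build_feature(task: str) -> dict:
    # 1. A research agent gathers context before any code is written.
    research = call_agent(
        role="researcher",
        instructions="Summarise relevant APIs, constraints and prior art.",
        material=task,
    )
    # 2. A coding agent drafts an implementation from that research.
    draft = call_agent(
        role="coder",
        instructions="Implement the task, citing the research where used.",
        material=f"Task: {task}\n\nResearch:\n{research}",
    )
    # 3. A security agent reviews the draft; a human reviews everything.
    review = call_agent(
        role="security_reviewer",
        instructions="Flag injection, authorisation and secrets risks.",
        material=draft,
    )
    return {"research": research, "draft": draft, "security_review": review}
```

The intern framing from earlier still applies, only now you are managing a small team of interns: each gets a narrow brief, and a human signs off before anything ships.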
This isn't science fiction. It's happening now, and the productivity gains are staggering. But—and this is critical—it only works if people understand how to orchestrate these agents effectively. The same principles apply: specific prompts, critical evaluation, iterative refinement. The technology is more sophisticated, but the people problem remains.
The Technology Is the Easy Part
Every AI adoption initiative I've seen fail has failed for the same reason: they treated it as a technology rollout rather than a change management programme. They bought the tools, set up the accounts, and assumed usage would follow. It didn't.
What works is treating AI adoption like any other organisational change: understand the resistance, address it directly, provide targeted support, measure outcomes, and iterate. The technology is commoditised. The differentiation is in execution.
If you're rolling out AI in your organisation, ask yourself: are you investing as much in changing behaviour as you are in buying tools? Are your training sessions generic or function-specific? Are you measuring logins or outcomes? Are you addressing fear and scepticism head-on, or hoping they'll fade on their own?
The organisations that get this right won't just see incremental gains. They'll fundamentally change how their teams work, ship faster, and compete more effectively. But only if they solve the people problem first.
Because the technology is the easy part.