Hi {{firstname|everyone}},   

Over the last year, I have had dozens of conversations with partners who tell me the same thing: 

“We’ve invested in AI. We’ve rolled it out. But nothing has really changed.”

Impressive demos, promising pilots, and numbers suggesting time savings and margin upside on paper. Yet six months later, the workflow inside the firm feels almost identical. 

In my experience, AI does not fail at the partner level. It breaks in the layer just below. 

Partners make the strategic decisions, allocate budgets, and drive innovation. But managers are the ones who allocate work, review outputs, sign off files, and ultimately decide whether AI becomes embedded into delivery or quietly bypassed. If their behaviour does not change, the tool simply gets bolted on to the side of the old system. 

What you then get is the worst of both worlds. You pay for AI, but you still operate like a pre-AI firm. 

Let me unpack where this really goes wrong. 

1. Managers Keep the Old Allocation Logic 

Most AI tools are designed to take first drafts, reconciliations, coding, summaries, and even variance analysis off the team’s plate. But managers often continue allocating work as if none of that exists. 

They still assign juniors to prepare full sets of working papers from scratch. They still expect manual reconciliations before any automation is touched. AI becomes something that is “nice to have” rather than the starting point. Making it the starting point requires a deliberate redesign of how jobs are scoped and distributed. 

According to McKinsey & Company, while 55 to 60% of organisations report experimenting with AI, fewer than 20% say they have fundamentally redesigned workflows to capture value from it. 

Managers have to decide that the default starting point for every engagement is the AI output. Juniors need to be trained to interrogate, refine, and validate that output rather than recreate it.  

Without that structural shift, the promise of AI remains theoretical. 

Here is what actually needs to happen: 

  • Redesign job allocation around AI-first workflows. Make the first step in every engagement the AI output, not the junior’s draft. 

  • Change role definitions. Shift juniors from preparers to reviewers of AI-generated work, and train them accordingly. 

  • Set utilisation expectations that reflect automation. If AI reduces preparation time by 40 percent, do not quietly fill that time with more of the same low-value tasks. Reallocate it to analysis and client communication. 

If managers keep distributing work as they did five years ago, the tool will sit on the sidelines. 

 

2. Review and Sign-Off Standards Stay Frozen 

This is the more subtle problem. 

Managers have been trained for years to assume that responsibility equals re-performance. If something goes wrong, it is their name on the sign-off. So they compensate by checking everything themselves, instead of trusting systems with structured guardrails. 

The unintended consequence is cultural: juniors quickly pick up the signal. 

Deloitte has reported that over 70% of AI-related risk concerns in professional services stem from governance, oversight, and accountability frameworks rather than from technical capability. 

If the manager is redoing the AI’s work, the implicit message is that the AI is unreliable. So juniors stop investing effort in learning how to use it and revert to manual preparation, because they know that is what will ultimately be trusted. 

Breaking that cycle requires a mental shift. Managers have to move from asking, “Can I rebuild this from scratch?” to asking, “Where is the risk most likely to sit?” 

Here is what that evolution looks like: 

  • Define acceptable risk thresholds. Agree what level of variance or anomaly triggers deeper review, and what can be signed off with targeted checks. 

  • Move from full re-performance to exception-based review. Focus manager time on outliers, unusual movements, and judgement areas rather than rechecking every automated step. 

  • Document AI review protocols. Create clear guidance on how AI outputs should be validated so that sign-off becomes consistent rather than emotional. 

Managers are trained to minimise risk. That instinct is valuable. But if it translates into duplicating the machine’s work, the economics collapse. 

 

3. Incentives Reward Stability, Not Change 

Many managers are measured on recovery rates, file turnaround, write-offs, and error reduction. Their bonus depends on clean files, predictable margins, and jobs delivered on time. So when AI enters the picture, their rational response is to protect what already works. 

BCG has found that 70% of digital transformation failures are driven by people and behavioural factors, particularly resistance within middle management layers. 

Adopting AI does require short-term disruption. In the first few months, review times may actually increase. Teams will make mistakes in how they prompt, interpret, or validate outputs. Processes will feel clunky before they feel seamless. 

If a manager’s scorecard is built entirely around short-term efficiency, they will optimise for stability every single time. This is why so many firms see surface-level adoption. AI becomes something used on low-risk or non-urgent engagements rather than embedded into the heart of delivery. 

If you want adoption, you have to change what you reward. 

Practically, that means: 

  • Tie performance metrics to automation uptake. Track percentage of engagements using AI-first workflows and make it visible. 

  • Reward time redeployed to higher-value work. Measure client conversations, advisory insights, and proactive recommendations, not just file completion. 

  • Hold managers accountable for process redesign. Make it part of their role to continuously improve how work flows through the team. 

AI success is less about capability and more about courage. And courage in an organisation usually follows incentives. 

 

Getting it Right with Samera AI 

When partners tell me AI is underwhelming, I rarely look at the software first. I look at how work is allocated, reviewed, and incentivised. 

Technology is easy to buy. Behaviour is harder to change. 

At Samera, we have learnt this the hard way through building and deploying AI internally before taking it to market. With Samera AI, we do not just hand over a tool. We work with firms to redesign workflows, redefine manager roles, and create review protocols that make automation commercially meaningful. 

Because unless the manager layer changes how it thinks about delivery, AI will remain an expensive experiment. 

If this sounds familiar and you are serious about making AI move your margins, have a look at what we are building at: 

Cheers, 

Arun 
