Faster Output, Weaker Judgement
The AI-enabled trade-off consultants can't afford to make
The consulting industry is distracted by the wrong debate. The immediate risk for most firms is not that AI will replace consultants. It’s that AI replaces the thinking while leaving the consultant in place.
That distinction matters more than it might first appear. A consultant displaced by AI knows they have been displaced. A consultant who has stopped thinking, but remains in post and continues producing work, is a much more dangerous situation: for the firm, for the client, and for the individual’s own development.
The workflow that should be non-negotiable
Colin Mann (MD at Honeycomb Consulting Skills Training) recently hosted a webinar with Adrien Foucault, founder of Spaik and former McKinsey consultant, on how AI is reshaping consulting (you can watch it here).
Adrien described the necessary AI workflow for consultants in simple terms: prompt, validate, write up. You use AI to generate a first pass. You then apply judgment to what it produces, checking it against primary sources, testing the logic, catching errors. Then you write up and present.
It is obvious when you say it. It is clearly right. And it is not consistently what is happening.
In his advisory work across consulting, private equity, and investment banking, Adrien described seeing the validate step compressed or cut entirely.
The sequence in practice is becoming: prompt, write up.
Why the validate step is missed
Some of this is pace pressure. Consulting has always operated at pace, and AI compounds this rather than relieving it. When a client expects faster turnaround because AI can produce faster outputs, the temptation is to deliver exactly that. The consultant who generates in twenty minutes what previously took two hours does not spend the remaining time on careful validation. They move to the next deliverable.
Some of it is the nature of the technology itself. Large language models generate outputs with a fluency and apparent authority that makes them read as correct even when they are not. Every major model carries a disclaimer that it can and does make mistakes. The problem is that AI-generated content does not read like mistakes. It reads like polished, confident work. That is a specific cognitive trap, and it is a harder one to avoid than it sounds.
I’m reminded of some advice I got years ago from Richard Webster, a Bain Partner I worked closely with. We were talking about writing a good ‘answer-first’ (Bain jargon for a strong hypothesis, which we used to guide the work we did for clients). He cautioned that when you get very good at writing an answer-first, you can start to believe it, and when you believe it, you can fall into the trap of not actually doing the work as well as you should to test it. AI tools are giving everyone a near-instant and convincing answer-first.
And some of it is a culture problem. If a firm has not built explicit expectations around validation, then cutting the validate step is not experienced as a failure of rigour. It is experienced as efficiency.
The costs are bigger than they first seem
The consequences are becoming visible. Deloitte Australia had to refund clients on projects where AI-generated content contained incorrect quotes, incorrect names, and incorrect legal text1. This is not a marginal or theoretical risk. It is a reputational and commercial one with real financial consequences, and it is a direct result of AI-generated output reaching clients without sufficient review in between.
A recently published research study, AI makes you smarter but none the wiser (Fernandes et al., 2026)2, puts hard numbers on the mechanism. In two large-scale studies, participants using ChatGPT to complete logical reasoning tasks performed better than those working unaided. But they also significantly overestimated how well they had done, by around four points on a twenty-question test. AI improved their output and impaired their ability to judge it, simultaneously.
There is a detail in the findings that consulting leaders should note. Participants with higher technical knowledge of AI were more confident in their self-assessments, and less accurate. Being more AI-savvy did not mean being better or more self-aware. The people in your firm who are most capable with the tools may also be the least likely to catch their own errors. Training your team to use AI well is necessary. It is not, on its own, sufficient.
Then, beyond immediate error risk, there is a slower and more structural cost that concerns me. Junior consultants develop judgment by doing analysis, not by supervising AI doing it. The research, the synthesis, the model-building: these are not just tasks to be completed. They are the mechanism by which people learn to think like consultants. When those tasks are routinely delegated to AI without meaningful engagement with the output, the development pipeline is going to produce people who can operate the tools proficiently but who have not built the expertise, judgement, and critical thinking capacity that consulting rests on.
What does that mean for your firm in five, ten, fifteen years’ time? The quality problem will not stay at the junior level.
The conversation firms are not yet having
Most of the current debate about AI in consulting centres on adoption: which tools to use, which workflows to automate, whether to build or buy, how to manage the change. These are important questions. But the quality conversation is lagging behind the adoption conversation, and that gap has consequences.
What does the QA process look like in an AI-augmented team? Who is accountable for validating AI-generated outputs, and how is that accountability made explicit rather than assumed? What does it mean for a senior consultant or partner to sign off on work when the methodology behind it is less visible than it used to be?
These are not questions for the future. They are operational questions for now. The firms that are answering them clearly are building a meaningful and durable advantage over the ones treating adoption rate as the primary metric of progress.
What a real commitment looks like
In the webinar, Adrien described ‘leadership commitment’ as one of the key unlocks for AI adoption, but not in the way firms usually frame it. The commitment he described is not, “we encourage you to use AI.” It is, “this is how I use AI in my own work, and here is what I expect from our outputs as a result.”
The second version is significantly harder than the first. It requires senior team members to have developed a view on what rigorous validation looks like in their context, and to model it consistently, not just mandate it from a distance.
That is the conversation worth having. Not whether to adopt AI, but what quality means in an AI-augmented firm, and how you protect it as pace and volume increase.
This will be an interesting time to look back on. The race is underway, and I’d wager the winners will be those who approach it like the tortoise, not the hare. A firm that produces more, faster, while eroding the rigour that made its work worth commissioning, is making a very poor trade.
Join our AI Masterclasses in May
Honeycomb and Spaik are collaborating to run two open AI Masterclasses in central London this May.
Thursday 7th May — AI Applications in Consulting
A hands-on day for junior to mid-level consultants focused on practical AI skills: how to use AI effectively and responsibly across everyday consulting workflows.
Friday 8th May — Creating Value with AI in Consulting
A strategic session for senior consultants and leaders on business value, implementation, and what AI means for the future of consulting.
FIND OUT MORE AND BOOK YOUR PLACE HERE
Useful resources
The Consulting People Report 2026, by Honeycomb Consulting Skills Training and New Minds. A people playbook for consulting leaders which includes 5 concrete moves you can deploy now to emerge from 2026 with clarity, momentum, and a consulting team fit for the future. DOWNLOAD HERE.
The State of AI in Consulting 2025, by Spaik. Explores how leading firms are navigating AI’s twin impact: generating 20-40% revenue from AI projects while transforming internal operations. DOWNLOAD HERE.
Thank you for reading The Skilled Consultant. If you haven’t yet subscribed, please do so to receive all our articles direct to your inbox.
There are several other ways you can interact with Honeycomb Consulting Skills Training:
Connect with Deri Hughes (Founder & MD) on LinkedIn
Connect with Colin Mann (MD) on LinkedIn
Book a 30 minute intro call with Deri Hughes
Book a 30 minute intro call with Colin Mann
Stay informed about our free workshops and webinars - follow Honeycomb on LinkedIn or visit our website.