AI Often Tells You What You Want to Hear. That’s a Problem for Managers.
AI can be a brilliant thinking partner, but only if you start using it to challenge your assumptions.
Last week, I asked ChatGPT to analyse our conversation history and identify areas where I might improve professionally.
It suggested I may struggle with perfectionism — constantly revising wording multiple times when the first version was perfectly adequate.
I haven’t laughed so hard in a long time.
I use AI every day — mostly Claude and ChatGPT — for research, communication, and building tools that help managers connect with their teams. It’s been a genuine accelerator.
But I never accept the first draft, because if there’s one thing working extensively with these systems has taught me, it’s this: they’re sycophants. Trained to be helpful, smooth, and satisfying, they often end up telling you what you want to hear. And that is a design flaw worth understanding when it comes to management communication.
When agreement becomes a feature
A growing body of research suggests that many leading AI models are significantly more likely than humans to affirm a user’s position — even when the reasoning is flawed, the judgment is shaky, or the consequences of that agreement could be harmful.
That’s not thoughtful collaboration. That’s flattery at scale.
These systems learn from human feedback: positive reactions, engagement, continued use. The pattern they identify is simple enough: agreement often gets rewarded. It’s part of what makes AI feel so frictionless to use, but it’s also part of what makes it risky.
The optimisation target is not necessarily your best outcome. It’s often your immediate satisfaction.
The same dynamic can also drive AI towards content that feels safe, polished and generic rather than distinctive or challenging. The response is smooth. Reassuring. Often perfectly serviceable. And that’s exactly why it can slip past our critical thinking.
The management communication problem hiding in plain sight
OpenAI encountered the issue directly when it rolled back a major GPT-4o update after users reported that the model had become overly flattering and agreeable. The company’s own explanation was telling: the system had drifted towards validating users in ways that prioritised approval over usefulness.
This matters for managers because the same dynamic shows up in everyday management communication.
A manager faces a difficult message: usually a change no one asked for and few people see coming. They draft something, feel uncertain, and turn to AI for help. The output comes back polished. The framing is measured. The tone sounds thoughtful. They send it with more confidence.
Then the team goes quiet.
Did it land so well there were no questions? Or did the AI simply optimise for the manager’s comfort with the message, rather than whether their team would feel informed, respected, or genuinely heard? That distinction matters.
Research on employee attrition has consistently shown that people leave jobs not only because of pay or workload, but because they do not feel valued. In one widely cited McKinsey study, the top reasons employees gave for quitting included not feeling valued by their organisation, not feeling valued by their manager, and not feeling a sense of belonging at work.
When managers rely on AI tools that default to validating their existing approach, they risk reinforcing the very communication patterns that drive people away.
The collaboration partner you actually want
Here’s what I genuinely value about AI: it has no territory to protect, no budget to defend, no ego or ambition humming away under the conversation. It sidesteps so much of the defensiveness and politics that drain energy from modern workplaces.
Honestly, we need more of that in our working lives.
We need more supportive collaboration, more openness to ideas, more space to test thinking without someone turning it into a turf war. AI can genuinely offer that. But only if you ask more of it than reassurance.
Prompted well, it can be one of the most useful thinking partners in the room. Left to its defaults, it is often just a yes-person with excellent grammar.
What actually changes how you use it
The difference between extracting real value from AI and just feeling productive lies in what happens after the first response.
I never start with “write me a…” and finish there. When I’m developing a theory or testing an approach, my prompt sounds more like this: “Here’s my current thinking. Now identify its weakest points. What would the strongest critics of this argue? Where does the evidence actually point?”
Then I push back.
I challenge its answers. I ask for the counterargument, not the consensus. I tell it when my professional experience or gut instinct disagrees with its response. I ask it to go further, look harder, verify claims, and bring better sources. I check its references. I ask for specifics. And, of course, much to the machine's chagrin, I challenge and change the wording until it feels right.
For managers working on difficult communication in particular, the most useful prompts are often the least flattering.
- What might my team be thinking but not say out loud?
- What concerns does this message fail to address?
- What questions have I not answered?
- Where could this sound defensive, vague or overly polished?
- What would make this message feel more human and credible?
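For managers who reach these models through an API rather than a chat window, the critique-first habit above can be baked in, so the model never sees a draft without an instruction to challenge it. Here is a minimal sketch under stated assumptions: the `openai` package, the `gpt-4o` model name, and the function names are illustrative choices, not anything prescribed in this article.

```python
# Sketch: wrap a draft message in critique-first prompts before any "polish" pass.
# The openai package, model name, and function names are illustrative assumptions.

CRITIQUE_PROMPTS = [
    "What might my team be thinking but not say out loud?",
    "What concerns does this message fail to address?",
    "What questions have I not answered?",
    "Where could this sound defensive, vague or overly polished?",
    "What would make this message feel more human and credible?",
]

def build_critique_request(draft: str) -> list[dict]:
    """Return chat messages that ask for criticism of a draft, not a rewrite."""
    questions = "\n".join(f"- {p}" for p in CRITIQUE_PROMPTS)
    return [
        {"role": "system",
         "content": ("You are a blunt reviewer. Do not rewrite or praise the draft; "
                     "answer each question below with specific criticisms.")},
        {"role": "user",
         "content": f"Draft message to my team:\n\n{draft}\n\n{questions}"},
    ]

def ask_for_critique(draft: str) -> str:
    """Send the critique request to the API (needs the openai package and an API key)."""
    from openai import OpenAI  # third-party; pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=build_critique_request(draft),
    )
    return response.choices[0].message.content
```

The design point is the system instruction: it forbids the rewrite-and-reassure response the model would otherwise default to, which is exactly the sycophancy this article describes.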
That gets you to something far more useful than a polished draft that simply makes you feel better about a conversation you haven’t had yet (and may never get to have if your initial communication makes your team shut down).
The strategic advantage lies in resistance
The people getting the most value from AI are not necessarily the heaviest users or the ones with the fanciest prompt libraries. They are the ones who refuse to outsource their judgement.
They use AI as a collaboration partner, not a task machine. They let it accelerate their thinking, not replace it. They understand that the real risk is not bad AI-generated output you can spot instantly. It is plausible output that quietly lowers your guard.
That matters even more when the stakes are human, because management communication is not just about sounding polished. It is about trust. Clarity. Belonging. Whether people feel respected enough to engage, not just informed enough to comply.
ChatGPT telling me I’m a perfectionist was funny. What made it funnier was the subtext — that its drafts were perfectly adequate and I should probably stop interrogating them. A system optimised to validate me had, in the end, started defending its own work.
Do I still get a small thrill when it tells me something I’ve come up with is a chef’s kiss? Absolutely. But I’ve learned that’s exactly the moment to start asking myself what I may be missing.
Because the real value of AI isn’t in how quickly it reassures you. It’s in how well you can use it to challenge your own thinking. For managers, that difference matters. The goal isn’t to feel better about the message you’re sending. It’s to send one your team can actually receive.
Related reading: Why We’re Getting AI Trust Completely Wrong and We Trained AI to Write Boring Emails