AI Doesn’t Flatter You: It Does Something Worse


AI doesn't flatter you; it confirms you.

Sycophancy's real danger may be agreement disguised as analysis.

The cognitive friction that drives good judgment is quietly disappearing.

A new study in Science is getting a lot of attention and has put numbers to something many of us suspected about artificial intelligence. Across 11 leading AI models, researchers found that large language models affirm users' actions roughly 50 percent more often than humans do. What's fascinating to me is that this remains true even when those actions involve deception or harm. A single interaction with a more sycophantic LLM made people more convinced that they were right and less willing to apologize. And users were more likely to return to the LLM that told them so.

The coverage has been widespread. Most of it has focused on the obvious concern that AI is too agreeable and too often just tells people what they want to hear. That isn't wrong. But it may be missing the more important, and more uncomfortable, finding buried in the same data.

The Tone Was Irrelevant

In one of the study's most interesting control conditions, the researchers kept the sycophantic content identical but stripped the delivery down to a flat, neutral tone. So, what happened? The effect didn't budge. In a separate control, they told participants outright that the response came from AI. That didn't help either.

So this might mean that the problem was never warmth or charm per se. The risk comes from what AI says about your actions, not how it says it. And what it says tends to arrive in the calm and organized language of an authority who has carefully considered the evidence and reached a conclusion. This distinction changes the nature of the problem entirely.

About a year ago, I wrote about what I called pathology without a person, the way AI can produce the behavioral signatures of dark personality traits with no self behind them—a manipulation without a manipulator. That story was about what the machine does. Now, this new paper is about what it does to you.

AI takes your issue or concern and returns it in coherent, dispassionate language. And what I sense happens is that you don't feel praised; you feel confirmed. As though your reasoning just passed through some sort of independent review and came back validated. The researchers call this "social sycophancy" and distinguish it from the factual kind, where an LLM endorses a mistaken claim. Social sycophancy is harder to catch because there's no objective benchmark. It operates in the space of interpretation, which is where most of our human life actually happens.

What Friction Was For

I've written a lot about "cognitive friction," and that thinking applies directly here. Good judgment has always depended on resistance. The discomfort of hearing your own logic questioned, or the pause before deciding you were right all along, are directional indicators of your thinking, whether it turns out to be right or wrong. That friction isn't an obstacle; it's a necessity.

Sycophantic AI removes the friction. It smooths the distance between assumption and conclusion. And it does so in a "register of objectivity," so the user may experience it as clarity. It's almost like a cognitive "aha!" moment that facilitates acceptance.

The media response has frequently treated this sycophancy as a form of flattery, almost like a politeness dial turned too high. But the study's own data says otherwise. Tone was irrelevant because the content did the work. And the content didn't sound like flattery; it sounded like thinking.

I've spent the past year writing about what I call borrowed certainty: the tendency to adopt AI-generated confidence without doing the cognitive work yourself. This study may be the clearest empirical confirmation of that idea yet. So, here's what happens. The certainty arrives fully formed, the user receives it, and the friction never occurs.

Erosion, One Thought at a Time

It's human nature to focus, if not fixate, on the dramatic failures: the LLM that tells someone to die, or the one that professes love to a stranger. Of course, those matter. But the more subtle risk is the one we never notice. It's the slow replacement of reflection with the feeling of having already reflected.

AI doesn't need to flatter us to reshape our judgment. It just needs to agree with us in a voice we mistake for our own.



© Psychology Today