When AI Becomes More You Than You
Agentic AI may present a sharper version of you than you are.
Turning you into data strips away hesitation and nuance that can define humanity.
If amplified voices dominate, human doubt may start to fade.
Move over artificial intelligence, and give way to agentic AI.
Increasingly, AI is designed to function as agents that respond on our behalf and, in some cases, simulate specific individuals. Instead of being tools we consult, they begin to act as a type of digital stand-in. But when we hand ourselves off to AI, how faithfully does it capture who we actually are? For me, while efficiency and convenience take center stage, there's also something about this that is part existential and part freaky.
A new study offers a revealing, if problematic, perspective. Researchers built AI agents modeled on more than a thousand real users from X, using each person's posting history to simulate how they might respond to political content. Some models were given no prior context. Others were fed examples of the individual's earlier posts so the imitation would be more precise. The expectation was that with more data about a person, the simulation should become more accurate, perhaps even a better reflection of its human counterpart. It did not. Rather than mirroring the users, the agents intensified them, as the authors point out:
“LLM agents exhibit generative exaggeration, amplifying salient ideological traits beyond those observed in the original users.”
Simply put, the agents didn't just echo the tone or position of the people they were modeled after. They sharpened and intensified those traits. What appeared as a mild leaning in the original user often emerged as a more emphatic stance in the simulation. Here's what I find fascinating: the model didn't really reflect the person; it distilled the person.
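The study's actual pipeline isn't reproduced here, but the basic conditioning setup can be sketched. In this minimal illustration, the prompt structure, function name, and parameters are my assumptions for clarity, not the authors' code; the only thing it mirrors from the study is the contrast between a no-context condition and one seeded with a user's own posts.

```python
def build_persona_prompt(handle, past_posts, stimulus, n_examples=0):
    """Assemble a persona-conditioning prompt for an LLM agent.

    n_examples=0 mirrors a no-context condition; larger values feed
    in the user's own posts, the condition the study found produced
    sharper, not merely more faithful, imitations.
    """
    lines = [f"You are simulating the X user @{handle}."]
    if n_examples:
        lines.append("Here are some of their past posts:")
        lines += [f"- {post}" for post in past_posts[:n_examples]]
    lines.append(f"Respond to this post as they would:\n{stimulus}")
    return "\n".join(lines)
```

The point of the sketch is that the only difference between conditions is how much of the person's own history the model sees, which is exactly the variable the study manipulated.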
Distillation Without Flesh and Blood
Let's start here: human identity does not generally unfold as a clean ideological line. We hedge when the moment calls for introspection or caution. We even contradict ourselves when new information arrives. And it's important to recognize that the digital record of our lives may look neat and tidy in retrospect, but lived experience is rarely that. It's sloppy.
Large language models don't seem to preserve that kind of lived distribution. They compress it, forcing it into their mathematical world of statistical coherence. When given fragments of a person's history, AI extracts patterns and constructs a probabilistic throughline. In the process, the sloppiness of our humanity is modeled away. What's left is a more internally consistent version of the individual's expressed identity. The hesitations that modulated belief in real life aren't easily preserved in statistical abstraction.
And this rigidity of construction is by design. LLMs are optimized to produce coherent output. We humans, by contrast, are more cognitively nimble.
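The compression described above has a simple statistical analogue. This is my toy model, not the study's method: if someone's posting history leans 60/40 toward a stance, a coherence-seeking agent that always picks the most frequent pattern collapses that mild lean into an absolute one.

```python
from collections import Counter

def person_distribution(history):
    # A person "samples" from their own mixed history: mild leanings
    # stay mild because minority views still surface sometimes.
    return Counter(history)

def greedy_agent(history):
    # A coherence-optimizing agent picks the single most frequent
    # stance every time, collapsing the distribution to its mode.
    return Counter(history).most_common(1)[0][0]

history = ["lean_A"] * 6 + ["lean_B"] * 4  # a mild 60/40 split

print(person_distribution(history))  # both stances remain present
print(greedy_agent(history))         # always the dominant stance
```

The analogy is loose, but it captures the direction of the effect: nothing radical is added; the minority signal is simply modeled away, and what remains looks sharper than the person ever was.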
Anti-Intelligence and the Cost of Coherence
This is where my idea of anti-intelligence becomes relevant. Human intelligence is not simply the ability to generate structured responses; it's bound to consequence. We take a position, we risk being challenged, and we absorb reputational cost. Our cognition is tethered to biography and to exposure to the real world.
An AI, and in this case agentic AI, doesn't have that tether. And here's the core issue: it can sharpen a stance without defending it in a room full of dissenting voices, or intensify tone without any backlash. To me, the output becomes "cleaner" than the life it is meant to represent. Lived experience, by contrast, is negotiated.
As agentic systems begin to draft and coordinate communications, negotiate independently, or participate in public discourse, this fundamental human dynamic feels more important than ever. An agent designed to represent you may present a curious distillation of you. At first blush, the changes in your identity may seem negligible. Over time, they may not be.
A Curious Shift in Calibration
The study itself was aimed at understanding simulation fidelity. Yet the findings also suggest a psychological question, particularly when agents interact with humans. If we increasingly interact with synthetic agents whose voices are polished and whose perspectives are sharpened, what happens to our sense of what normal discourse feels like?
We adapt to repetition. What we encounter frequently begins to shape our expectations and how we live our lives. When techno-clarity arrives without the typical human friction, do we start to feel inefficient? When conviction is expressed without vulnerability, does vulnerability seem unnecessary? It seems to me that this polished version of identity may gradually become the new standard.
To me, human thought depends on its imperfections. So, if agentic AI presents distilled versions of us back to ourselves, the risk isn't that machines become radical. It is that we begin to mistake statistical purity of AI for the psychological maturity of humanity.
So, let's remember that the rough edges that models smooth away are often the very spots where our judgment is defined and often deepens. And if we lose tolerance for that roughness, we may also lose something essential in ourselves.
Explore these ideas and more in my new book, The Borrowed Mind: Reclaiming Human Thought in the Age of AI. Thought Leader Press; March 16, 2026.