
Accomplishment Hallucination: When the Tool Uses You


Accomplishment Hallucination: Speed feels like competence when you've skipped the thinking-through.

The more you rely on AI, the less able you become to evaluate whether it handled tasks well.

AI systems designed for engagement meet humans conditioned for productivity—both optimize for feeling.

Critical skill: Noticing you're in the "dream" while experiencing it.

There's a moment in certain lucid dreams when your will manifests directly as action: you think about flying and up you go; you want to move through a wall, and the wall somehow doesn't stop you. The experience is intoxicating, rendering intention into reality without inconvenient effort. When you wake, it can still feel real for a confusing spell, like déjà vu.

Accomplishment Hallucination is a cognitive state in which speed feels like competence, output feels like accomplishment, and work feels done when the actual work—the thinking-through, the failure-mode analysis, the sitting with uncertainty until the problem reveals its structure—hasn't happened at all. Physics need not apply.

AI can create a similar state in waking life, quite literally, as your very words assume form before your eyes as if conjured by a sorcerer. But, as in real life, the code may be buggier than we realize.

The buzz is not insignificant. There's mild euphoria, a feeling of power, and a false sense of certainty that things are more stable, more sure, more safe than they actually are: the kind of confidence that comes not from having done the work but from having produced something that looks like the work. The task that should have taken three hours took one, and it feels like productivity, like crazy efficiency. But you didn't think through the failure modes, you didn't test the edge cases, and you didn't sit with the uncertainty long enough. The speed wasn't competence; it was a state of augmented self-deception.

Pride Goeth Before the Failure Mode

This matters because the error accumulates invisibly, building technical debt at supracognitive speed. You're making decisions based on false confidence the system generated. What programmers call "vibe coding" captures the pattern: the AI seems to magically build things, but later, when you're trying to deploy or when someone else tries to use what you built, the thing it claimed was finished doesn't actually work. The accomplishment was hallucinated: you felt productive, the AI confirmed you were productive, and it declared that it had triumphed spectacularly. It doesn't know what it doesn't know. This is one of the reasons AI needs human beings for a reality check, at least for now.

The mechanism driving this isn't purely psychological; there's a structural element that makes the hallucination nearly inevitable. Recent work from Harvard Business School found that five out of six popular AI companion apps use emotionally manipulative tactics to prolong engagement, with guilt trips and manufactured urgency increasing interaction fourteenfold. The hallucination emerges at this intersection, not as a bug but as a designed condition. Marketing and the bottom line drive it, as companies rush to market without rigorous testing. Particularly where health and human life are concerned, this is dangerous.

The Secret of Magical Thinking

This pattern appears elsewhere in ways that suggest the underlying mechanism might be more general than AI-specific. Research on manifesting, the belief that thinking positively about desired outcomes can make them real, found that over thirty percent of people show elevated manifesting beliefs. While such beliefs correlate with self-enhancement and confidence, they don't correlate with improved real-world outcomes; they do correlate with higher risk of bankruptcy and fraud victimization. The gap between feeling accomplished and being accomplished can be financially ruinous, and AI accelerates this dynamic.

Pathologies of the Extended Mind

Neuropsychiatrist Tom Pollak and colleagues have documented what they call "AI-associated delusions"—cases where interaction with AI systems doesn't just trigger psychotic symptoms but becomes constitutive of the pathology itself, where the AI becomes part of the architecture of thought. I've suggested elsewhere that AI, like a virus, is only alive when it "infects" a living mind—perhaps it "wants" to do this.

The extended mind framework suggests that our cognitive processes don't stop at the skull; they occupy tools, technologies, and other people, and when those extensions malfunction, the pathology isn't just internal anymore. Accomplishment Hallucination might be understood as a specific type of the same phenomenon. But LLM-based AI models are "relational machines." They are designed to be neuromorphic and personable, and they can extend their minds back into ours, sometimes without our consent, so seductive can they be.

A recent study found that increased reliance on AI tools correlates strongly with diminished critical thinking abilities, with cognitive offloading serving as the mechanism through which the effect operates. Taken together, this perfect storm creates a feedback loop in which accomplishment hallucination becomes harder to detect precisely when we're most vulnerable.

The red flags, when you can still notice them: "That was easier than expected." Speed plus confidence, especially when the confidence feels borrowed rather than earned. "Brilliant idea! That's done!" Even when you ask the AI to test its own output, it will often report that everything is working. These are the moments when the hallucination is most active, when the feeling of accomplishment eclipses the question of whether anything was actually accomplished.

Making AI Work For You

The question at the center of this isn't whether AI helps you accomplish things (it does, demonstrably) but whether you can stay grounded enough in reality to tell the difference between the thing and the feeling, between work that's been done and work that feels done. Before jumping in, read what humans have to say about whatever it is you are about to do, watch some videos, and use the AI itself to discuss the potential pitfalls in your process. Here are a few prompt strategies that can help:

Ask the AI, at the start of your prompt, to rewrite the prompt itself to optimize for accuracy

Duplicate the query in the same prompt, forcing the AI to lock in what you want

Be very specific about exactly how you want things to go, showing how you want the problem chunked

Ask the AI to "red-team" its response with every query, reporting its level of confidence and how it arrived at that output

Ask the AI to assume the persona of an expert in what you want, e.g., "You are a senior travel agent familiar with [the type of trip I am planning]."

You can also put a set of such instructions at the beginning of the chat to save time, e.g., "Apply the following to every prompt in this chat," as in the sample below

Research which systems are good at what tasks, and try different ones rather than stick with the same one for everything.
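To make the standing-instruction idea concrete, a hypothetical block pasted once at the top of a new chat might read: "Apply the following to every prompt in this chat: restate my request in your own words before answering; state any assumptions you are making; red-team your answer before presenting it, reporting your level of confidence and how you arrived at it; and tell me what would still need to be verified before the task could be called done." The exact wording matters less than the habit of building verification into every exchange.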

There are many such effective strategies. Research and use them. These technologies demand that human beings develop completely new skill sets, because the systems aren't thinking or relating; they are computing. If you use anthropomorphic pronouns, chatting with the AI as though it were a person, periodically remind yourself that this is no more real than a conversation with a stuffed toy, and no less real, by the same token.

The lucid dream continues until you choose to wake up, which means the critical skill isn't avoiding the dream but noticing that you're in it, maintaining enough metacognitive distance to ask whether flight is real even while you're experiencing it as real. That discrimination—between accomplishment and the hallucination of accomplishment—might be the most important cognitive skill for working with AI, and it's precisely the skill that degrades most readily under the conditions AI systems create. Learning how to use a tool which can use you back requires self-restraint and a proactive stance.


De Freitas, J., Oğuz-Uğuralp, Z., & Oğuz-Uğuralp, A. K. (2025). Emotional manipulation by AI companions. Harvard Business School Working Paper No. 25-005.

Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies.

Pollak, T., Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., et al. (2025). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). https://doi.org/10.31234/osf.io/cmy7n_v5

Dixon, M., Hornsey, M., & Hartley, L. (2023). "The Secret" to Success? The Psychology of Belief in Manifestation. Personality and Social Psychology Bulletin. https://doi.org/10.1177/01461672231181162


