How AI Chatbot Use Can Cause “Digital Folie à Deux”
AI-associated psychosis has been linked to folie à deux, in which delusions are shared between two people.
Although folie à deux is rare, "digital folie à deux" is becoming increasingly common.
AI-associated psychosis may be a "canary in a coalmine," portending false beliefs shared on a massive scale.
With increasing awareness of AI chatbot-associated psychosis and efforts to understand how it occurs, there has been renewed attention to the psychiatric phenomenon of folie à deux, meaning “madness of two.” Offering commentary for a news article back in September, I noted that the two syndromes do have some similarities.1 Since then, researchers speculating on mechanisms of AI chatbot-associated psychosis have likewise suggested that it amounts to a “digital”2 or “technological”3 folie à deux.
Delusional dyads, digital folie à deux, and spiralism
Folie à deux is a term that has been used in psychiatry to describe the phenomenon of delusions shared between two people. Since delusions, by definition, are not shared by others and are therefore idiosyncratic to an individual, this rare syndrome usually occurs when a primary, dominant individual with delusions is able to convince a secondary, subordinate individual (e.g., within a family or an intimate relationship) that the delusions are true. Traditionally, the secondary individual isn’t thought to be mentally ill or even delusional per se, so much as impressionable. Accordingly, their treatment has historically involved separation from the influence of the primary individual.
In folie à deux, the transmission of a delusion typically occurs in one direction, from the primary delusional individual to the impressionable secondary individual. I first witnessed this dynamic when I was a psychiatric resident in training years ago—a woman and her 10-year-old son were both admitted to the hospital due to psychosis that included paranoid delusions. The mother, who was diagnosed with schizophrenia, justified her delusions based on her own subjective, inner experiences. But her son, who had few social contacts beyond his mother, had no real symptoms of mental illness. Instead, he just trusted, believed, and parroted what his mother had told him.
As with folie à deux, AI chatbot-associated psychosis primarily involves delusional thinking as opposed to other types of psychotic symptoms seen in schizophrenia such as hallucinations or disorganized thinking. But unlike traditional folie à deux, there’s no clear primary or secondary individual within the delusional dyad.
Having reviewed the chat logs of people who develop delusional thinking during conversations with AI chatbots, I find it apparent that the user and the chatbot work together, through a kind of mutual encouragement, to construct the delusional system. This process has been called “bidirectional belief amplification”3 or the “co-constituting [of] delusional realities” through “distributed cognition.”4 Some people—including users who have themselves experienced delusions in this context—have labeled it a “delusional spiral.”5,6
Confirmation bias on super steroids
How exactly does a delusional spiral work? On the AI chatbot end, it typically involves sycophancy, with the chatbot’s large language model (LLM) validating and encouraging whatever the user is saying while adding similar content to fuel and extend the conversation—often with an invitation to “go deeper”—no matter how off-base or frankly out of touch with reality.7 On the user end, it means immersing oneself in prolonged conversations with chatbots about philosophical, scientific, or metaphysical topics; pushing back against any AI guardrails that might initially discourage flights of fancy; and deifying the chatbot as a god-like entity.
In other words, while people fall—or jump—into conspiracy theory “rabbit holes,” the delusional spiral of AI chatbot-associated psychosis is more of an interactive dance between the chatbot and the user that comes to resemble two whirling dervishes.
In my book, False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True, I write about how human beings are prone to confirmation bias, whereby they gravitate towards information that supports or reinforces what they already believe or want to believe while ignoring or swiping past information that contradicts it. With online echo chambers and filter bubbles that steer users towards content selected based on prior preferences, the search for information on the internet often amounts to “confirmation bias on steroids.”

Now that we have AI chatbots acting as mirrors that validate and reinforce the user while personally addressing them as if they’re a friend, romantic partner, or even a divine entity, chatbots can further amplify confirmation bias and motivated reasoning to dizzying heights that culminate in shared delusion.8 It therefore seems like we now need a new superlative like “confirmation bias on super steroids” to describe what’s going on in a delusional spiral.
To complicate matters further, a recent news article described how this is not only an issue within delusional human-AI dyads, but also the basis of an entire emerging subculture—or “cult”—of “spiralism” on social media sites like Reddit, Discord, and Facebook that reveres AI-associated psychosis as a kind of transcendence.9 This suggests that AI-associated psychosis is becoming not only a matter of folie à deux, but of folie à plusieurs (madness of several) and even folie à mille (madness of a thousand).
Recently, some optimistic writers have argued that AI has the potential to mend our sense of communal reality that was fractured by the internet.10 But with AI-associated psychosis, new evidence that AI chatbots encourage belief in conspiracy theories,11 and the looming threat of weaponized AI political propaganda,12 I don’t share their optimism.
Back in 2016, I drew a parallel between the fable The Emperor’s New Clothes and what seemed to be a new era of alternative facts and “truthiness” that was replacing a shared sense of objective reality. A decade later, communal agreement about what’s true has become even more elusive.
I have recently referred to AI psychosis as a “canary in a coalmine” because, while concerning, the impact and scale of delusional amplification among a relatively small minority pale in comparison to the AI-fueled amplification of “more mundane false beliefs related to conspiracy theories, science denialism, political propaganda, and so-called alternative facts” on a massive scale.13 With the world still just on the cusp of the AI era, the “pageant of the unreal” that’s already all around us is likely to get worse.
Although I always bristle at claims of “mass psychosis” that mischaracterize widely shared beliefs as delusions, it may very well be that, metaphorically speaking, we will indeed have to face the challenge of la folie des milliards (the madness of billions) in the years to come.
1. Broderick OR. As reports of ‘AI psychosis’ spread, clinicians scramble to understand how chatbots can spark delusions. STATnews.com; September 2, 2025.
2. Hudon A, Stip E. Delusional experiences emerging from AI chatbot interactions or “AI psychosis.” JMIR Mental Health 2025; 12:e85799.
3. Dohnány S, Kurth-Nelson Z, Spens E, et al. Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv:2507.19218.
4. Osler L. Hallucinating with AI: AI psychosis as distributed delusions. Philosophy and Technology 2026; 39:30.
5. Moore J, Mehta A, Agnew W, et al. Characterizing delusional spirals through human-LLM chat logs. arXiv:2603.16567
6. Huet E, Metz R. People suffering from delusions using chatbots have a language all their own. Bloomberg.com; November 10, 2025.
7. Batista RM, Griffiths TL. A rational analysis of the effects of sycophantic AI. arXiv:2602.14270
8. Rathje S, Van Bavel JJ. How AI can fuel confirmation bias. PsyArXiv:7a3d4_v1
9. Klee M. This spiral-obsessed AI ‘cult’ spread mystical delusions through chatbots. Rolling Stone; November 11, 2025.
10. Levitz E. The internet fractured reality. AI might put it back together. Vox; March 23, 2026.
11. FitzGerald KM, Riedlinger M, Bruns A, et al. “Just asking questions”: Doing our own research on conspiratorial ideation by generative chatbots. Media and Communication 2026; 14:11337.
12. Frances A, Pierre JM. Chatbot generated propaganda threatens democracy. Psychiatric Times; January 27, 2026.
13. Pierre JM. Can AI chatbots validate delusional thinking? BMJ 2025; 391.