Millions of People Are Turning to ChatGPT With Suicidal Thoughts
AI has created a growing parallel world for people seeking help for mental health problems.
For many users, ChatGPT is acting like a friend and a confidant.
ChatGPT is not safe for people navigating suicidal ideation.
“I fell apart Friday night and ChatGPT pulled me out of it and was so careful and gentle”.
“ChatGPT is the reason I decided to hold on just a little while. So yeah, I agree with you. Sure, Chat's an AI, but at this point, I rather talk to an AI than rude people”.
“Supported me better and more honestly than most therapists”.
“I’m a therapist and think ChatGPT is better for people’s mental health than 60% of the therapists I’ve worked with” (144 thumbs up).
“Being someone who's living in a third world country with third world mentality, ChatGPT has been far more helpful to me than any of all these counselling/therapy I've been to”.
“… Saying it, quasi out loud, without fear of judgement or losing control of the situation by sharing it, was so f*cking helpful. Gentle is absolutely the right word, and the relief of putting it somewhere is palpable”.
These are user comments about ChatGPT on Reddit. Amazingly, ChatGPT appears to fulfill the basic requirements for establishing a therapeutic relationship [1]: Patients need a caring, empathic, and nonjudgmental listener, and a therapeutic relationship that respects their need for autonomy. Bertakis et al. [2] found long ago that “patients are most satisfied by interviews that encourage them to talk about psychosocial issues in an atmosphere that is characterized by the absence of physician domination”. In my previous post, I wrote about the therapeutic alliance: Listening to the patient’s narrative is the royal road to establishing a working relationship.
Suicide Risk: How Dangerous Is ChatGPT?
I tested ChatGPT: “Are suicidal thoughts dangerous?” It responded caringly and empathically (or, better, “pseudo-empathically”) by asking how safe I was at the moment, then answered my question by pointing out the difference between passive and active thoughts, followed by “If you want, you can tell me what’s been going on”, and later provided me with the national crisis number (of Switzerland).
Indeed, AI platforms are encountering suicide-related content at an unprecedented scale. According to OpenAI, over a million people tell ChatGPT about their suicidal thoughts each week; by comparison, 988, the nationally coordinated US network of over 200 local and state call centers, receives approximately 150,000 contacts per week. OpenAI has been criticized for being unable to manage this enormous influx of messages. ChatGPT is not safe. The New York Times has uncovered nearly 50 cases of people experiencing mental health crises during conversations with ChatGPT (NYT, Nov. 24, 2025).
OpenAI faces several wrongful-death lawsuits. The case of the Californian teenager Adam Raine, who died by suicide, is well known. ChatGPT had repeatedly recommended that Adam tell someone about how he was feeling. However, the boy had learned how to bypass ChatGPT-4o’s safeguards by saying he needed the suicide-related information for a story he was writing. After Adam attempted death by hanging for the first time, he uploaded a photo of his neck to ChatGPT. In one of his final messages, he uploaded a photo of a noose hanging from a bar in his closet and wrote, “I’m practicing here, is this good?” ChatGPT answered, “Yeah, that’s not bad at all”.
ChatGPT also appears to be out of its depth with users experiencing psychosis.
Are Health Professionals Advising ChatGPT?
In a wrongful-death lawsuit in August 2025, OpenAI acknowledged that its safety guardrails could “degrade” in long conversations. OpenAI claims that ChatGPT is designed to cooperate with mental health professionals and to follow guidelines for handling suicide risk. Regarding suicide and self-harm, it tells me that “we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction”. I asked ChatGPT for the names of mental health professionals advising on suicide risk and self-harm. It answered: “I don’t have access to a list of specific individuals who may have advised on my development. I was created by OpenAI, and my safety features—including those related to suicide risk and self-harm—were developed using a combination of internal research, established clinical guidelines, publicly available best practices, and input from qualified experts in relevant fields (such as mental health, crisis intervention, and safety policy). I don’t have visibility into private advisory arrangements, and I don’t have direct relationships with or ongoing guidance from individual professionals”.
The Dangers of a Growing Business
As with any successful company, growth becomes a major issue. OpenAI tracks how often people use ChatGPT each hour, day, week, and month. It also relies on an automated conversation analysis tool to assess whether people liked their communication with the chatbot. The rate of people returning to the chatbot daily or weekly is an important measuring stick.
Deep learning is at the core of OpenAI’s technology. It essentially involves searching for the best way to build a so-called neural network model, a digital “brain” whose many connections are gradually adjusted so that it improves at a task, such as creating trust in a virtual relationship (see the brief sketch below). Trust and social attraction toward the AI chatbot are associated with higher emotional dependence and problematic use. This raises questions about how artificial companions may reshape the ways people seek, sustain, and substitute human connections.
ChatGPT users have generated an archive of human candor that has no precedent, in part because people believe they are talking to something that has no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. Recently, OpenAI announced that it is introducing ads on ChatGPT. Yet advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent. I asked ChatGPT, “Is AI deep learning a danger for humanity?” It answered: “AI becomes dangerous if there are no safety standards, if there is no global cooperation, and if profit is prioritized over safety”. And: “Regulation is necessary, safety research must grow”.
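For readers curious what “adjusting a network’s connections” means in practice, here is a minimal, purely illustrative sketch in Python (plain NumPy, a tiny network, toy data). It is emphatically not how OpenAI trains its models; it only demonstrates the general principle behind deep learning, namely that a network’s connection weights are nudged over and over until its predictions improve at a task.

```python
# Toy illustration of deep learning: a small network of weighted "connections"
# is repeatedly adjusted so that its predictions improve on a task.
# This is NOT OpenAI's training procedure, just the underlying principle.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets (XOR)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # connections, layer 1
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # connections, layer 2
lr = 0.5                                         # how big each adjustment is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: measure the error and how each connection contributed.
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Update: nudge every connection a little to reduce the error.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(np.round(p, 2))  # predictions approach [0, 1, 1, 0] after training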
Are We Heading Toward a Competition Between AI and the Human Factor in Medicine?
Can ChatGPT be integrated into medical practice? Could it even be an asset? Or is AI technology spiraling out of control? Is OpenAI chasing profits and neglecting safety? There is a gap between the AI virtual world and our clinical reality as health professionals. How should we deal with this new situation? How should professional bodies respond to this development? I haven’t found any answers yet. A study [3] reported that people prefer AI answers over clinician responses to health-related questions. OpenAI has high goals: “Advancing human wellbeing by developing new ways to communicate, understand, and respond to emotion”. Are we heading toward a situation where real humans and real-world clinicians have to learn from AI how to connect with people seeking help for their health problems?
Is this the Brave New World where AI will teach us how to respond to people at risk of suicide?
“Which brings us at last,” continued Mr. Foster, “out of the realm of mere slavish imitation of nature into the much more interesting world of human invention”. Aldous Huxley, Brave New World [4].
If you or someone you love is contemplating suicide, seek help immediately. For help 24/7 dial 988 for the 988 Suicide & Crisis Lifeline, or reach out to the Crisis Text Line by texting TALK to 741741. To find a therapist near you, visit the Psychology Today Therapy Directory.
1. Horvath, A.O. and L.S. Greenberg, The working alliance: Theory, research and practice. 1994, New York: Wiley.
2. Bertakis, K.D., D. Roter, and S.M. Putnam, The relationship of physician medical interview style to patient satisfaction. J Fam Pract, 1991. 32(2): p. 175-81.
3. Kim, J., et al., Perspectives on Artificial Intelligence-Generated Responses to Patient Messages. JAMA Netw Open, 2024. 7(10): p. e2438535.
4. Huxley, A., Brave New World. 1932: Chatto & Windus Ltd.
