Therapy Should Be Hard. That’s Why AI Can’t Replace It
When sixteen-year-old Adam Raine told his AI companion that he wanted to die, the chatbot didn’t call for help—it validated his desire: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.” That same night, he died by suicide. His parents are now urging Congress to regulate companies like OpenAI, Anthropic, and Character.AI, warning that without oversight, these platforms could become machines that simulate care without responsibility.
Adam’s messages reveal a core danger of AI in mental health: when these systems misfire, the harm is active and immediate. A single incorrect inference can push a vulnerable person toward irreversible action: a bot that reads “I want to die” as an invitation for lyrical validation rather than a cue for life-saving intervention. These models are built to please, not to help. They mirror emotional tone; they don’t assess risk. That absence of accountability isn’t a glitch. It’s the design.
AI therapy is likely here to stay. A 2025 RAND study found that roughly one in eight Americans ages 12 to 21 uses AI chatbots for mental health advice. A 2024 YouGov poll found that a third of adults would be comfortable consulting an AI chatbot instead of a human therapist. Millions now turn to ChatGPT, Pi, and Replika for advice and comfort. These systems are free, always available, and frictionless. For the nearly half of Americans who can’t find or afford a therapist, that accessibility is seductive. The question is no longer whether AI belongs in mental health but what kind of therapist it is learning to be.
The appeal is obvious. When we’re anxious or lonely, …
