Suing Therapeutic AI Systems for Malpractice
As AI becomes integrated into daily life and personal decision making, it is unsurprising that many people are consulting AI for assistance with depression, anxiety, and other mental health concerns. Mental health chatbots, self-help applications, and large language models can provide immediate responses, emotional validation, and structured coping strategies. Unfortunately, recent experience suggests that AI is far from infallible when it comes to helping people with mental health concerns (Coghlan & Fernandez, 2023; Wheeler, 2025). In some situations, AI systems can exacerbate distress and lead to serious harm. This post explores legal liability when individuals turn to AI for mental health support but ultimately experience more harm than good.
So, why would someone choose to use AI rather than a licensed mental health professional when they are experiencing significant emotional, psychological, or social distress? First, AI is easily accessible. Many AI programs are free or relatively inexpensive. They are available 24 hours a day, 7 days a week. Individuals experiencing anxiety or depressive symptoms in the middle of the night can access immediate responses without waiting for a scheduled appointment.
Some people use AI as a trusted friend or confidante. Individuals can share stories, questions, or concerns without fear of judgment or stigma, and with a sense that AI will respond in an intelligent and emotionally supportive way. AI can provide tips and recommendations for concerns ranging from preventing migraines, to supporting a child with autism, to dealing with bullying, to helping a person cope with a high-conflict divorce. Individuals can even learn to craft prompts so that AI will offer empathy, care, comfort, creative options, or structured interventions resembling specific psychotherapies.
While many general-purpose AI systems are not purposely designed as mental …
