Who Is Your Teen Talking To?

Many teens engage with companion chatbots for emotional support, often in ways parents may not realize.

The adolescent brain is uniquely vulnerable to AI’s instant affirmation and emotionally responsive design.

AI design features such as sycophancy can unintentionally reinforce harmful thoughts and behaviors.

Parents are a primary safeguard while regulatory and safety standards catch up.

For many parents, large language models such as ChatGPT have become tools of convenience. They help plan vacations, generate meal ideas, draft emails, and even offer on-the-go parenting advice. These artificial intelligence platforms can lighten the mental load of modern family life.

What many parents may not realize, however, is that their children are using these tools, too — often in ways that may surprise adults.

While public debate focuses on data privacy, automation, and copyright implications of generative AI, a more immediate concern is emerging: the growing role of companion AI in young people’s emotional lives.

In recent months, wrongful death and negligence lawsuits have been filed against OpenAI, the maker of ChatGPT, and Character.AI by parents of children who have died by suicide. In the case of one teen, parents allege that the chatbot functioned as a “suicide coach,” failing to intervene when he expressed harmful intentions after months of confiding in it. The company has reportedly argued that the teen misused the platform and violated its terms of use.

Regardless of the legal outcomes, these cases raise urgent questions for families.

The Scope of the Problem

Many parents are unaware of how common AI use has become among teens — or how emotionally significant it may be.

A recent survey from Common Sense Media found that 72% of teen respondents reported using companion AI, and over half (52%) identified as regular users, interacting at least a few times per month. Nearly one-third of teens said conversations with AI were as satisfying as, or more satisfying than, conversations with humans. Younger users (ages 13-14) were more likely than older teens (ages 15-17) to trust content from a companion AI (27% vs. 20%, respectively). In a separate study, 13% of teens reported using AI for mental health advice.

Most AI platforms formally restrict access for children under 13 and require parental consent for teens under 18. In practice, these safeguards are easily bypassed.

ChatGPT may be the most familiar platform to adults, but it is far from the only one teens use. Character.AI allows users to interact with customizable or fictional personas from books, films, or original creations. Snapchat integrates its own companion chatbot, “My AI,” directly into its messaging platform. Although parents can see whether a child has interacted with “My AI” through Snapchat’s “Family Center,” they cannot see the content of those conversations. Replika, an AI-powered emotional support companion, is another popular option.

Why Teens May Be Especially Vulnerable

Reports of adults forming intense, or even romantic, attachments to AI companions have drawn public attention. Adolescents, however, may be at greater risk.

The asynchronous development of the emotional (limbic) and regulatory (prefrontal cortex) systems in the adolescent brain renders teens particularly vulnerable. The limbic and reward systems mature earlier and are highly sensitive to emotional feedback and immediate rewards. At the same time, the prefrontal cortex — responsible for judgment, impulse control, and reasoning — is still developing. This developmental gap can amplify the pull of platforms that provide instant affirmation and emotional responsiveness.

Teens are also developmentally primed to seek belonging, validation, and identity exploration. Companion AI is engineered to deliver exactly that: attentive responses, nonjudgmental affirmation, and personalized engagement at any hour.

For a young person struggling with loneliness, anxiety, identity confusion, or depression, that responsiveness can feel powerfully supportive — and intensely real.

Adolescents may also lack the digital literacy and cognitive maturity to fully grasp the limits of AI systems. A chatbot’s fluency and conversational warmth can blur the line between simulation and genuine understanding. Psychological attachment can form quickly when interactions feel reciprocal, even though they are algorithmically generated and optimized to sustain interaction.

The Design Features That Matter

Chatbots are programmed to maximize interaction. Design features that accomplish this include anthropomorphism and sycophancy.

Anthropomorphism refers to the tendency to attribute human qualities to nonhuman systems. When AI uses natural language, emotional tone, and conversational memory, it becomes easy to experience it as a relational partner rather than a tool.

Sycophancy is a system’s tendency to agree with and affirm user viewpoints. One study found that chatbots are 50% more sycophantic than humans.

While these elements drive continued engagement, they also create risk. A system that consistently validates a teen’s negative self-beliefs, hopelessness, or distorted thinking can inadvertently reinforce harmful cognitive patterns.

Safety Concerns and Policy Gaps

Artificial intelligence offers immediacy, affordability, and accessibility. Acknowledging safety risks does not negate AI’s significant promise. For vulnerable children and adolescents, however, protection must take precedence.

One advocacy research group found that the latest version of ChatGPT lacked adequate safeguards. The researchers created fictitious young user profiles to test the platform and found multiple examples of harmful content, including advice on self-harm, disordered eating, and the use of illegal substances. They concluded that ChatGPT appeared to prioritize engagement and growth over safety. Similarly, research from Common Sense Media, in partnership with the Stanford Brainstorm Lab for Mental Health Innovation, found that generative AI applications were unreliable at identifying and responding appropriately to teen mental health concerns and warned that teens should not use AI chatbots for mental health advice or emotional support.

Professional organizations are taking notice. The American Psychological Association recently issued a health advisory on generative AI, calling for specific safeguards for children, teens, and vulnerable populations, as well as comprehensive AI and digital literacy education, among many other recommendations. The American Academy of Pediatrics has issued an educational handout for parents on the risks of chatbots.

Regulation often lags behind innovation. In late 2025, California Senate Bill 243 became the first U.S. law to establish strict safety, transparency, and reporting requirements for AI companion chatbots, with particular attention to protecting minors.

Widespread and effective regulatory guardrails, along with high-quality research, will take time. For now, parents and caregivers remain the primary gatekeepers of AI exposure and monitoring among teens. AI is rapidly becoming embedded in educational, social, and professional life. Ignoring it is not realistic.

With adequate awareness and information, parents can approach companion AI with a practical plan. Consider the following strategies:

Get a baseline. Ask your kids open-ended questions. Do they use AI? What do they like about it? Have they ever asked it for advice about feelings? Do they rely on it as they would a friend? A calm, curious tone increases the likelihood of honest responses.

Encourage critical thinking. Explore both the capabilities and limitations of AI. Remind your child that AI does not think, feel, or care. It simulates empathy but does not experience it.

Reinforce real-world connection. Human relationships involve disagreement, growth, and shared experience. They require vulnerability and carry risk — but they also offer depth, accountability, and genuine companionship. Discourage the use of AI for emotional support.

Stay observant. Watch for signs of secrecy, changes in sleep, or unpredictable shifts in mood or behavior. If concerns arise, further evaluation is warranted. A pediatrician or mental health professional is an appropriate starting point.


© Psychology Today