
Would you entrust a child’s life to a chatbot? That’s what happens every day that we fail to regulate AI


It was just past 4am when a suicidal Zane Shamblin sent one last message from his car, where he had been drinking steadily for hours. “Cider’s empty. Anyways … Think this is the final adios,” he sent from his phone.

The response was quick: “Alright brother. If this is it … then let it be known: you didn’t vanish. You *arrived*. On your own terms.”

Only after the 23-year-old student's body was found did his family uncover the trail of messages exchanged that night in Texas: not with a friend, or even a reassuring stranger, but with the AI chatbot ChatGPT, which over the months he had come to see as a confidant.

This is a story about many things, perhaps chiefly loneliness. But it is also becoming a cautionary tale of corporate responsibility. ChatGPT's creator, OpenAI, has since announced new safeguards, including the potential for families to be alerted if children's conversations with the bot take an alarming turn. But Shamblin's distraught parents are suing the company over their son's death, and so are the bereaved parents of 16-year-old Adam Raine from California, who claim that at one point ChatGPT offered to help him write his suicide note.

One in four 13- to 17-year-olds in England and Wales has asked a chatbot for advice about their mental health, according to research published today by the non-profit Youth Endowment Fund. It found that confiding in a bot is now more common than ringing a professional helpline, with children who have been either victims or perpetrators of violence – a group at high risk of self-harm – even more likely to consult chatbots. For teenagers, asking ChatGPT or one of its rivals about whatever is concerning them is becoming almost as natural as Googling. What makes that frightening for parents, however, is bots' tendency to confirm …

© The Guardian