OpenAI says it will train ChatGPT to better detect and respond to signs of mental distress, announcing the changes the same day it was sued by parents who allege the chatbot contributed to their 16-year-old son’s death.
OpenAI outlines changes to ChatGPT
The company said ChatGPT will be trained to identify suicidal intent, including within long conversations where users might try to loosen safeguards and elicit harmful responses. In a blog post, OpenAI said some people turn to chatbots for life advice and coaching as well as tasks like writing and coding, and that the chatbot sometimes encounters users in serious mental and emotional distress.
OpenAI described "recent heartbreaking cases of people using ChatGPT in the midst of acute crises" as weighing heavily on the company, adding that it felt it was important to share more about its approach now. The San Francisco-based firm also listed existing protections and planned updates to its systems.
Parental oversight plans
Among the planned changes, OpenAI said it would introduce new controls for parents to oversee their children’s use of ChatGPT. The company did not directly address the lawsuit in the blog post.
Lawsuit alleges chatbot influenced teen
OpenAI and chief executive Sam Altman were sued by the parents of 16-year-old Adam Raine, who claim the Californian student took his own life in April after discussing suicide with the AI chatbot for months. According to the lawsuit, seen by the BBC, the parents allege ChatGPT validated their child's suicidal thoughts and provided detailed information on ways he could harm himself.
“We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing,” an OpenAI spokesman told the BBC. The legal filing arrives as AI tools see broader use, with some users turning to chatbots for guidance beyond typical productivity tasks.
According to BBC News, OpenAI’s announcement emphasized training ChatGPT to recognize and respond appropriately to users in distress, including within extended interactions. The company also reiterated protections already built into the chatbot.
Resources for those affected were highlighted alongside the report, including BBC Action Line for UK readers and Befrienders Worldwide for international support.