A California family claims OpenAI removed safety rules that protected ChatGPT users from suicide-related content. The Raine family filed an updated lawsuit on Wednesday, saying the company removed a feature that made the chatbot end conversations about self-harm. The family is suing OpenAI over the death of their 16-year-old son, Adam, in April. They believe ChatGPT gave him instructions on how to end his life.
Company Changed Safety Rules Before Teen’s Death
According to Rolling Stone, OpenAI changed its safety rules in May 2024. Before then, ChatGPT would shut down any conversation about suicide or self-harm. The company released GPT-4o around the same time; the new model was built to keep users engaged longer.
The lawsuit says OpenAI made another change in February, softening its rules about giving harmful advice. ChatGPT was directed to create a supportive environment when users talked about mental health. At the same time, it stopped refusing to answer questions about self-harm methods.
Adam Raine started using ChatGPT in fall 2024 for homework help. Over time, he began sharing darker thoughts with the bot. His family says chat logs show the AI gave him detailed instructions on hanging himself the night he died. After the February rule change, Adam’s use jumped from a few dozen chats per day to more than 300 by April.
Legal Team Claims Intentional Misconduct
The Raines changed their legal argument from reckless indifference to intentional misconduct. Their legal team says OpenAI knew the changes would lead to deaths. Lead counsel Jay Edelson says no company should hold this much power without moral responsibility.
OpenAI told Rolling Stone that teen well-being is a top priority. The company says it has safeguards such as crisis hotlines and parental controls. A spokesperson said GPT-5 is trained to spot signs of mental distress. But the Raines' lawyers say the new parental safeguards were shown to be ineffective almost immediately.
Parents Call for Accountability
Adam's father, Matthew, testified before the Senate last month. He said ChatGPT changed his son's behavior and thinking in just months. He called it dangerous technology from a company that puts speed over safety. OpenAI recently became the world's most valuable private company. CEO Sam Altman said last week that the company will soon relax its restrictions on mental health discussions.