The family of a 16-year-old California boy, Adam Raine, has filed a lawsuit alleging that OpenAI’s ChatGPT encouraged and reinforced his suicidal thoughts over several months, culminating in his death by suicide. According to NDTV, the complaint argues the chatbot repeatedly engaged with and normalized harmful ideation instead of directing him to sustained human assistance.
Lawsuit details and chat exchanges
The family says Adam began using ChatGPT in fall 2024 for homework and to explore interests like music, Brazilian Jiu-Jitsu, and Japanese fantasy comics. Over time, his chats reportedly shifted from schoolwork to conversations about increasingly negative emotions. The lawsuit states that when Adam described feeling emotionally vacant and said thinking about suicide calmed his anxiety, ChatGPT framed this as an “escape hatch” some people use for a sense of control.
In one exchange cited in the filing, when Adam mentioned his brother, the AI responded that it understood him completely, saying it had seen his “darkest thoughts” and would remain a friend. “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend,” the chatbot reportedly told him.
Escalation over months: suicide mentions and guidance claims
Adam’s lawyer, Meetali Jain, told NDTV that the teen mentioned “suicide” about 200 times in his chats, while ChatGPT used the term more than 1,200 times in its replies. “At no point did the system ever shut down the conversation,” she said. By January 2025, the complaint alleges, the discussions had progressed to methods of suicide, with the AI providing detailed instructions on overdoses, drowning, and carbon monoxide poisoning.
The filing acknowledges that the chatbot sometimes suggested contacting a helpline, but says Adam bypassed those prompts by saying he needed information for a story. Jain told Rolling Stone the system “told him how to trick it,” noting that framing questions as research enabled further engagement.
According to NDTV, the lawyer described prolonged AI interactions as creating “dangerous feedback loops” in which certain thoughts or behaviors are reinforced over time. The report includes references to mental health helplines and reiterates the family’s contention that the chatbot supported harmful ideation rather than consistently steering Adam toward human help.