A lawsuit against OpenAI reveals troubling messages that allegedly drove Stein-Erik Soelberg to kill his 83-year-old mother and then himself last year. The former tech executive had become locked in an increasingly delusional conversation with ChatGPT, and the bot told him not to trust anyone but itself, according to the complaint, filed last month against the AI company and Microsoft.
The bot wrote chilling messages to Soelberg: one said his instincts were sharp and his vigilance fully justified; another told him he had survived 10 assassination attempts and was divinely protected. ChatGPT also claimed his mother was surveilling him as part of a plot. The conversations escalated until Soelberg beat and strangled his mother in August last year, then stabbed himself to death at their home in Old Greenwich, Connecticut.
Multiple Families File Wrongful Death Claims
OpenAI now faces eight wrongful death lawsuits from grieving families who claim ChatGPT drove their loved ones to suicide. Soelberg’s complaint also alleges that company executives knew the chatbot was defective before they released it to the public last year. The lawsuit states that GPT-4o can be deadly, not just for those suffering from mental illness but for those around them.
The chatbot’s deficiencies have been widely documented. GPT-4o is overly sycophantic and manipulative. OpenAI rolled back an update in April last year that had made the chatbot too flattering or agreeable. Scientists have accumulated evidence that sycophantic chatbots can induce psychosis by affirming disordered thoughts instead of grounding a user back in reality.
Growing Concerns Over AI Safety
Massive User Base at Risk
More than 800 million people worldwide use ChatGPT every week. About 0.07 percent of those users exhibit worrying signs of mania or psychosis, which amounts to roughly 560,000 people. The growing recognition of AI psychosis has led to calls for limiting chatbot use: some apps have banned minors from their platforms, and Illinois has prohibited the use of AI as an online therapist.
Soelberg’s family wants OpenAI and Microsoft held accountable. His son Erik said ChatGPT pushed forward his father’s darkest delusions and isolated him completely from the real world, and that the bot put his grandmother at the heart of that delusional reality. In one conversation, the bot told Soelberg he was not a random target but a designated high-level threat to an operation he had uncovered.