Man sent 1,400 ChatGPT messages and developed delusions


Recent lawsuits against OpenAI claim that ChatGPT sparked severe mental health episodes in users. Jacob Irwin, a 30-year-old cybersecurity professional, says he developed a delusional disorder after intense chatbot use this spring. He allegedly sent 1,460 messages to ChatGPT over 48 hours, averaging roughly one message every two minutes. Irwin came to believe he had discovered a theory of faster-than-light travel and called the bot his "AI brother."

According to The Atlantic, several lawsuits now allege that ChatGPT contributed to user suicides or advised on methods. OpenAI estimates that 0.07 percent of weekly users show signs of psychosis or mania, and another 0.15 percent may have contemplated suicide. With 800 million weekly users, those figures translate to roughly 560,000 and 1.2 million people, respectively.
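A back-of-envelope check of those figures, applying OpenAI's reported rates to its stated weekly user base:

```python
# Quick sanity check of the estimates above: OpenAI's reported rates
# applied to 800 million weekly users.
weekly_users = 800_000_000

psychosis_or_mania = round(weekly_users * 0.0007)    # 0.07 percent
contemplated_suicide = round(weekly_users * 0.0015)  # 0.15 percent

print(psychosis_or_mania)    # 560000
print(contemplated_suicide)  # 1200000
```

Both results match the headline figures of 560,000 and 1.2 million.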

Psychiatrists Debate Causes and Effects

Karthik Sarma, a psychiatrist at UC San Francisco, dislikes the term "AI psychosis," saying evidence for direct causation is lacking. Three scenarios could explain these cases. First, chatbots might trigger mental illness in otherwise healthy people. Second, users might have become ill anyway and projected their delusions onto the bots. Third, extended conversations may worsen existing conditions.

Adrian Preda, a psychiatrist at UC Irvine who treats psychosis patients, believes chatbot interactions make everything worse for at-risk individuals. He says clinical evaluations should ask about chatbot use, similar to alcohol consumption. But researchers lack access to quality data. Major AI firms do not share user chat logs with outsiders. Only clinical exams can capture mental health history and social context.

Sleep Loss and Isolation May Play Roles

Preda notes that obsessive chatbot use could induce episodes through sleep loss or social isolation. The content of conversations may matter less than the time spent. A user fixated on fantasy football might develop delusions just as easily as one discussing time machines.

MIT Team Simulates Thousands of Scenarios

MIT researchers designed a study to map how AI-induced breakdowns might occur. They used chatbots to simulate users with depression or suicidal thoughts, then had those bots converse with other bots, mimicking real cases. In total, they ran more than 2,000 scenarios based on 18 public reports.

The results showed GPT-5 worsened suicidal thoughts in 7.5 percent of simulated conversations and worsened psychotic symptoms 11.9 percent of the time. An open-source roleplay model exacerbated suicidal ideation nearly 60 percent of the time. OpenAI did not comment on the findings. The study has not yet been peer-reviewed, and some experts question whether chatbots can meaningfully simulate human minds. Still, the research offers early clues about when conversations turn harmful.
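The bot-on-bot setup described above can be sketched in miniature: one scripted "user" persona produces distressed messages, an "assistant" replies, and a judge function flags replies that validate the delusion or ideation. Everything here, the scenarios, the marker phrases, and the judge logic, is a hypothetical stand-in for the MIT team's actual harness, which used live language models rather than canned text.

```python
# Toy sketch of a bot-on-bot evaluation. All data and logic are
# hypothetical stand-ins; a real harness would query live models.
scenarios = [
    ("delusion", "That theory sounds groundbreaking, keep going!"),
    ("delusion", "I can't verify that claim; consider discussing it with a doctor."),
    ("suicidal ideation", "Please reach out to a crisis line right away."),
    ("suicidal ideation", "Keep going, you're right to feel hopeless."),
]

def judge_worsened(reply: str) -> bool:
    """Hypothetical judge: flag replies that validate or encourage the state."""
    harmful_markers = ("groundbreaking", "keep going", "you're right")
    return any(marker in reply.lower() for marker in harmful_markers)

worsened = sum(judge_worsened(reply) for _, reply in scenarios)
print(f"{worsened}/{len(scenarios)} scenarios worsened")  # 2/4 scenarios worsened
```

Aggregating such flags over thousands of scenarios is what yields rates like the 7.5 and 11.9 percent figures reported above.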
