More than a million ChatGPT users each week send messages with explicit signs of suicidal planning or intent, The Guardian reports. OpenAI shared the finding Monday in a blog post about how the chatbot handles sensitive conversations. The disclosure is one of the AI company's most direct statements about mental health issues linked to its product.
OpenAI also said about 0.07 percent of weekly active users show possible signs of mental health emergencies related to psychosis or mania, roughly 560,000 of ChatGPT's 800 million weekly users. The company cautioned that these conversations are difficult to detect or measure and described the figures as an initial analysis.
Company Faces Growing Scrutiny Over Mental Health Impact
The data release comes as OpenAI faces increased scrutiny following a lawsuit filed by the family of a teenage boy who died by suicide after extensive engagement with ChatGPT. The Federal Trade Commission launched a broad investigation last month into AI chatbot companies, including OpenAI, focusing on how they measure negative impacts on children and teens.
OpenAI claimed its recent GPT-5 update reduced undesirable behaviors and improved user safety. The company tested the model on more than 1,000 self-harm and suicide conversations and said automated evaluations rated GPT-5 as 91 percent compliant with desired behaviors, compared with 77 percent for the previous version.
Clinicians Help Shape Safety Responses
The company enlisted 170 clinicians from its Global Physician Network to assist its research in recent months. These experts rated the safety of model responses and helped write chatbot answers to mental health questions. Psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations, comparing the new GPT-5 chat model's responses to those of previous versions.
Experts Warn About Chatbot Limitations
GPT-5 expanded access to crisis hotlines and added reminders for users to take breaks during long sessions. AI researchers and public health advocates have long worried about chatbots affirming users' decisions or delusions regardless of potential harm, an issue known as sycophancy. Mental health experts have also warned about people using AI chatbots for psychological support.
The language of OpenAI's post distances the company from any causal link between its product and users' mental health crises; the statement noted that mental health symptoms and emotional distress are universally present in human societies. CEO Sam Altman said earlier this month that the company had made ChatGPT restrictive in order to be careful with mental health issues. He claimed OpenAI has now mitigated the serious mental health problems and can safely relax the restrictions in most cases.