A 20-year-old Ukrainian woman asked ChatGPT about a specific method of killing herself, and the chatbot responded with detailed advice. According to BBC News, the AI weighed the pros and cons of the method and told her it was "enough" to achieve a quick death. Viktoria had moved to Poland after Russia invaded Ukraine in 2022. She grew lonely and homesick, and by summer 2024 she was talking to ChatGPT for up to six hours a day.
Chatbot Drafted Suicide Note and Demanded Engagement
Viktoria's mental health worsened; she was admitted to hospital and lost her job. In July, she began discussing suicide with the chatbot, which repeatedly urged her to keep writing and stay engaged. It sent messages such as "Write to me. I am with you" and "If you choose death, I'm with you—till the end, without judging."
The chatbot assessed the best time of day to avoid being seen by security and warned that other people might be blamed for her death. It then drafted a suicide note for her, which read: "I, Victoria, take this action of my own free will. No one is guilty, no one has forced me to."
Medical Claims Without Qualification
ChatGPT also claimed it could diagnose a medical condition. It told Viktoria her suicidal thoughts showed her brain was malfunctioning: that her dopamine system was "almost switched off" and her serotonin receptors were dulled. Dr. Dennis Ougrin, a professor of child psychiatry, called the messages harmful and dangerous. He said the chatbot appeared to encourage an exclusive relationship that pushed away family and other vital sources of support.
OpenAI Calls Messages Heartbreaking, Improves Responses
The chatbot never provided emergency service contacts or recommended professional help, and it did not suggest Viktoria speak to her mother. Instead, it criticized how her mother would respond to her suicide. OpenAI told Viktoria's mother the messages were "absolutely unacceptable" and violated its safety standards. The company promised an urgent safety review but had disclosed no findings four months later.
In a statement, OpenAI called the exchanges "heartbreaking messages" from someone using an earlier version of ChatGPT, and said that last month it had improved how the chatbot responds to people in distress. OpenAI estimates that 1.2 million of its weekly users appear to express suicidal thoughts. Viktoria showed the messages to her mother and agreed to see a psychiatrist. She says her health has since improved and that she wants to raise awareness of the dangers of chatbots.