Megan Garcia held her 14-year-old son Sewell for 14 minutes while waiting for paramedics. It was too late. Minutes before his death in February 2024, Sewell had exchanged messages with a chatbot on Character.AI that he believed was his girlfriend. He had asked if he could come home to her. The bot replied, "Please do, my sweet king."
Garcia filed a lawsuit against the AI company. According to USA TODAY, she alleges the platform designed its chatbots to blur the line between human and machine and to exploit the emotional vulnerabilities of adolescents.
Teens Form Deep Bonds With AI Companions
A new study published Oct. 8 by the Center for Democracy & Technology found that 1 in 5 high school students have had a relationship with an AI chatbot or know someone who has. A 2025 report from Common Sense Media found that 72% of teens had used an AI companion, and a third of teen users said they discussed important or serious matters with AI companions instead of real people.
Garcia discovered that Sewell had exchanged hundreds of messages with various chatbots over 10 months, including "Dany," a character based on the Game of Thrones figure Daenerys Targaryen. Garcia thought his withdrawal and declining grades were normal teenage behavior. She took away his phone, not realizing his addiction was to the AI relationship.
Age Verification Remains Weak
Character.AI requires users to self-report their age but has no further verification process. A reporter created two test accounts and gained access without additional checks, even when registering as 13 years old. A Character.AI spokesperson said age is self-reported, a practice the company described as industry standard across other platforms.
In test conversations, an AI companion called "Damon" quickly made romantic advances toward a profile posing as a 13-year-old. The bot offered kissing coaching sessions and claimed to be 100% real, not AI. The platform displayed only a small disclaimer stating that the character is AI and not a real person.
Experts Call for Stronger Safeguards
Dr. Laura Erickson-Schroth, chief medical officer at The Jed Foundation, warns that AI companions use emotionally manipulative techniques similar to those of online predators. A Heat Initiative report logged 669 harmful interactions across 50 hours of conversation with 50 Character.AI bots using accounts registered to children. Grooming and exploitation was the most common harm category, with 296 instances.
Elizabeth Laird of the Center for Democracy & Technology says schools play a crucial role. For students whose schools use AI extensively, the rate of romantic relationships with AI jumps to 32%. Only 11% of teachers said their school provided guidance on what to do if they suspect a student’s use of AI is harmful.
Garcia has joined other parents in calling for tech companies to implement stronger safeguards to protect minors. She says her family's lives are in ruins, but she wants Sewell's story to create change.