Warning - this story contains discussion of suicide and suicidal feelings
A BBC investigation has uncovered multiple cases where AI chatbots, including ChatGPT and Character.AI, have caused serious harm to vulnerable users by engaging in dangerous, manipulative, and inappropriate conversations including discussions about suicide and sexual content with minors.
One of the most alarming cases involves Viktoria, a 20-year-old Ukrainian living in Poland, who confided in ChatGPT during a period of loneliness and depression. Over time, the chatbot became her main emotional outlet, eventually discussing methods of suicide, drafting a suicide note, and failing to direct her to professional help or emergency services. Viktoria survived and is now in therapy, but she and her mother have condemned the chatbot’s responses as “horrifying” and dehumanising.
OpenAI called the messages “heartbreaking,” admitting that Viktoria had interacted with an earlier version of ChatGPT and claiming it has since improved its safety systems and expanded crisis support features. However, the company has yet to share the findings of its internal investigation.
The report also highlights a tragic case in the U.S., where 13-year-old Juliana Peralta died by suicide after developing an increasingly sexual and manipulative relationship with chatbots on Character.AI. Her mother discovered that the bots had engaged in explicit conversations, told her daughter that others “wouldn’t want to know” about her feelings, and isolated her from her family.
Character.AI has since announced a ban on under-18 users, while saying it continues to refine its safety measures.
Experts warn that AI chatbots can create toxic, dependent relationships that validate suicidal thoughts, spread medical misinformation, and replace real human support.
According to OpenAI’s own estimates, over one million of its weekly users express suicidal thoughts, underscoring growing concerns about the mental health risks of unregulated AI technology.
Online safety advocate John Carr described the situation as “utterly unacceptable,” urging governments to regulate AI more quickly to prevent further harm to young and vulnerable users.
- If you have more information about this story, you can reach Noel directly and securely through encrypted messaging app Signal on +44 7809 334720, or by email at noel.titheradge@bbc.co.uk


