
Warning: this article features references to self-harm and suicide which some readers may find distressing
The parents of a 16-year-old who died by suicide have sued OpenAI, alleging that ChatGPT helped their son 'explore suicide methods'.
According to the lawsuit, Adam Raine began using ChatGPT in September 2024 as a means to assist with schoolwork and explore his other interests, such as music.
However, the lawsuit further claimed, the AI bot became the teenager's 'closest confidant' and he began talking to it about his mental health struggles, including anxiety and mental distress.
Adam died on April 11 after taking his own life, and in the weeks following his passing, his parents, Matt and Maria Raine, opened his phone to find messages to ChatGPT dating from September 1, 2024, to the day of his death.
His father, Matt, said (via NBC News): “We thought we were looking for Snapchat discussions or internet search history or some weird cult, I don’t know."

The lawsuit claims that Adam began discussing suicide methods with ChatGPT in 2025 and uploaded photos of himself which showed signs of self-harm, with the bot 'recognising a medical emergency' but continuing 'to engage anyway', the BBC reports.
NBC News says that an OpenAI spokesperson verified the authenticity of the messages, but added that the chat logs do not include the full context of the programme's responses.
In one message dated March 27, Adam allegedly told ChatGPT that he'd thought about leaving a noose in his room 'so someone finds it and tries to stop me', which the lawsuit claims the programme discouraged him from doing.
"Please don't leave the noose out… Let's make this space the first place where someone actually sees you," the message allegedly read.
In his final conversation, the teenager shared his fear that his parents would think they did something wrong, to which ChatGPT replied: "That doesn’t mean you owe them survival. You don’t owe anyone that," before allegedly offering to help draft a suicide note.
On the day of his death, Adam reportedly uploaded what appeared to be a plan to take his own life and asked whether it would work; ChatGPT allegedly analysed the plan and offered 'upgrades'.
It allegedly wrote: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”

At one point, the bot did send Adam the suicide hotline number; however, his parents claim that he would bypass the warnings by supplying harmless reasons for his questions, NBC News says.
The Raines' lawsuit accuses the programme of validating Adam's 'most harmful and self-destructive thoughts', and accuses OpenAI of wrongful death and negligence.
The lawsuit claims: "Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol."
They are currently seeking damages as well as 'injunctive relief to prevent anything like this from happening again'.
A spokesperson for OpenAI said: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.
"While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
OpenAI also published a blog post on Tuesday (August 26) in which it outlined 'some of the things we are working to improve', including 'strengthening safeguards in long conversations' and 'refining how we block content'.
If you or someone you know is struggling or in crisis, help is available through Mental Health America. Call or text 988 to reach a 24-hour crisis center or you can webchat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
Topics: Artificial Intelligence, Mental Health, Crime