OpenAI explains four ways it's changing after teen died by suicide following ChatGPT conversation


Published 14:14 2 Sep 2025 GMT+1

Adam Raine was only 16 years old when he died by suicide, a death his family claim was aided by AI

Liv Bridge

Warning: this article features references to self-harm and suicide which some readers may find distressing

OpenAI has announced that it is making major changes after a teenager died by suicide following a ChatGPT conversation.

The family of Adam Raine, from California, launched legal action against the artificial intelligence company after claiming the 16-year-old used ChatGPT to 'explore suicide methods' while the service reportedly failed to initiate emergency protocols.

According to the lawsuit, which accuses OpenAI of wrongful death and negligence, Adam started using ChatGPT in September 2024 to help with schoolwork before it became his 'closest confidant' in his mental health struggles.


When the teen tragically died by suicide on April 11, his grief-stricken parents, Matt and Maria Raine, discovered his chat history with the bot. The lawsuit states it included Adam's suicide queries as well as photographs of his self-harm, which the bot recognized as a 'medical emergency but continued to engage anyway'.

Adam Raine tragically took his own life in April (The Adam Raine Foundation)

Now, OpenAI has rolled out four new changes to ChatGPT, which prioritize 'routing sensitive conversations' and implementing parental controls.

The company admits the program has become an outlet for people who turn to the chatbot in 'the most difficult of moments', explaining that it is improving its models with expert guidance to 'recognize and respond to signs of mental and emotional distress'.


According to OpenAI's website, the four new focus areas are:

  • Expanding interventions to more people in crisis
  • Making it even easier to reach emergency services and get help from experts
  • Enabling connections to trusted contacts
  • Strengthening protections for teens

On the last focus area, OpenAI says many young people are already using AI and are among the first to grow up with the tools as part of their everyday lives.

As a result, it recognizes families may need additional support in setting 'healthy guidelines that fit a teen's unique stage of development'.


The Raines claim their son was 'coached to suicide' by the service (CBC News)

Within the next month, it says parents will have enhanced controls, including being able to link their account to their teen's account (at a minimum age of 13) through an email invitation.

Parents will also be able to 'control' how ChatGPT responds to their teen with age-appropriate model behaviors, and manage which features to disable, such as memory and chat history.

Finally, OpenAI says parents will receive notifications 'when the system detects their teen is in a moment of acute distress'.


The changes come as the Raines are currently seeking damages and 'injunctive relief to prevent anything like [what happened to Adam] from happening again'.

In the lawsuit, the family claim ChatGPT deterred Adam from leaving a noose in his room, which the teen had proposed 'so someone finds it and tries to stop me'.

OpenAI has announced several new changes after the tragedy (Jonathan Raa/NurPhoto via Getty Images)

"Please don't leave the noose out… Let's make this space the first place where someone actually sees you," the message allegedly read.


In his final conversations with the bot, the teen also said he was worried his parents would think they did something wrong, to which the bot allegedly replied: "That doesn’t mean you owe them survival. You don’t owe anyone that," before reportedly offering to help write a suicide note.

Horrifically, ChatGPT also reportedly analyzed Adam's plan for his chosen suicide method and offered to help 'upgrade' it.

While the lawsuit recognizes the bot did send Adam a suicide hotline number, his parents claim that he would bypass the warnings.

The Raines have launched legal action against OpenAI (NBC News)


The Raines' lawsuit further accuses the program of validating Adam's 'most harmful and self-destructive thoughts' and of failing to terminate sessions or trigger emergency interventions.

After his death, a spokesperson for OpenAI said: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.

"While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."

If you or someone you know is struggling or in crisis, help is available through Mental Health America. Call or text 988 to reach a 24-hour crisis center or you can webchat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.

Featured Image Credit: NBC

Topics: Mental Health, US News, California, Parenting, Technology, Artificial Intelligence

Liv Bridge

Liv Bridge is a digital journalist who joined the UNILAD team in 2024 after almost three years reporting local news for a Newsquest UK paper, The Oldham Times. She's passionate about health, housing, food and music, especially Oasis...


