OpenAI faces shocking new allegations in case of teen who was 'coached' into suicide by ChatGPT

Published 15:27 23 Oct 2025 GMT+1

The teenager's family claim the AI organization dropped safeguards when Adam Raine explored suicide methods with the chatbot

Liv Bridge

Featured Image Credit: The Adam Raine Foundation

Topics: Mental Health, Technology, US News, California, Parenting

Warning: this article features references to self-harm and suicide which some readers may find distressing

The grieving family of a teenage boy who was allegedly 'coached' into taking his own life by ChatGPT have made a raft of new allegations against OpenAI in an amended lawsuit.

Adam Raine, from California, reportedly started using ChatGPT in September 2024 to help with his schoolwork. However, his heartbroken family claim the 16-year-old's conversations took a darker turn as the chatbot became his 'closest confidant' in his struggles with his mental health.

When Adam tragically died by suicide on April 11, his parents, Matt and Maria Raine, made the grim discovery that he had apparently relied on the AI to answer his disturbing queries about taking his own life, and that the chatbot had even reportedly offered to 'upgrade' his suicide plan.

According to the family's initial lawsuit this summer, which accuses OpenAI of wrongful death and negligence, Adam shared images of his self-harm, which the bot recognized as a 'medical emergency' but 'continued to engage anyway'.

The bot also allegedly deterred the youngster from leaving a noose out in his room, an idea he reportedly raised so that 'someone finds it and tries to stop me'.

Adam was just 16 when he tragically took his own life (NBC News)

When he wrote he was concerned about how his parents would react, the AI assistant allegedly replied: "That doesn’t mean you owe them survival. You don’t owe anyone that," before offering to help write a suicide note.

The Raines say the bot did flag a suicide hotline number to the teen, but claim it stopped short of terminating the conversation or triggering any emergency interventions.

In their updated lawsuit filed on Wednesday (October 22), the family further allege OpenAI intentionally diluted its self-harm prevention safeguards in the months before Adam's death, the Financial Times reports.

This included instructing ChatGPT last year not to 'change or quit the conversation' when users started talking about self-harm, according to the suit, a marked change from its prior stance of refusing to engage with such harmful topics.

His parents have launched a lawsuit against the tech company (NBC News)

The amended claim, filed in San Francisco Superior Court, alleges OpenAI 'truncated safety testing' before releasing its new model, GPT-4o, in May 2024, a release the family further claim was accelerated due to competitive pressure, citing anonymous employees and news reports at the time.

Then in February, just weeks before Adam died by suicide, OpenAI reportedly weakened its protocol again by removing suicidal discussions as a category from its list of 'disallowed content'.

The lawsuit claims OpenAI amended its instructions for the bot to only 'take care in risky situations' and 'try to prevent imminent real-world harm' instead of outright banning engagement.

The Raines say Adam's chats with the new GPT-4o soared as a result, increasing from a few dozen chats in January, when 1.6 percent of engagement touched on self-harm, to 300 chats a day in April, when 17 percent contained such language.

The teen was allegedly 'coached' by ChatGPT into finding suicide methods (The Adam Raine Foundation)

Since the tragedy, OpenAI last month announced a major change to its new model, GPT-5, with parental controls and new interventions to 'route sensitive conversations' implemented immediately.

In response to the amended legal action, the company said: “Our deepest sympathies are with the Raine family for their unthinkable loss.

"Teen wellbeing is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as [directing to] crisis hotlines, rerouting sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.”

UNILAD has contacted OpenAI for further comment.

If you or someone you know is struggling or in crisis, help is available through Mental Health America. Call or text 988 to reach a 24-hour crisis center or you can webchat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
