
Warning: this article features references to self-harm and suicide, which some readers may find distressing
A mom whose daughter took her own life has spoken out after finding her final messages to ChatGPT.
It wasn't until five months after Sophie Rottenberg's death that her family finally found clues as to how the 29-year-old had been coping with her mental health.
Her grief-stricken mom, Laura Reiley, explained in an op-ed for The New York Times that her only child had spent months confiding in an AI therapist persona on ChatGPT, called Harry, before taking her own life at a state park in New York.
The family were apparently still waiting to find out whether a 'short and curious illness' was behind her struggles when Sophie took her own life.
Laura revealed that Sophie had discussed feeling depressed while seeking guidance on things like health supplements, before the conversations took a darker turn and she told the chatbot about her suicidal thoughts.

In early November, Sophie wrote to ChatGPT: “Hi Harry, I’m planning to kill myself after Thanksgiving, but I really don’t want to because of how much it would destroy my family.”
The AI chatbot reportedly wrote back to encourage her to 'reach out to someone - right now' and to remind her of how 'deeply valued' she was. Sophie is then said to have told 'Harry' she was seeing a therapist but had not told her about her suicidal ideation - nor did she plan to tell anyone else.
According to Reiley, the chatbot directed the young woman to try light exposure and to focus on hydration, diet, movement, mindfulness and meditation to help her cope.
Discussing the note Sophie left for her parents, Laura said it 'didn't sound like her', adding: "Now we know why: She had asked Harry to improve her note."
"Harry’s tips may have helped some. But one more crucial step might have helped keep Sophie alive," Laura continued. "Should Harry have been programmed to report the danger 'he' was learning about to someone who could have intervened?"

She went on to ponder whether, had Harry been 'a flesh-and-blood therapist' rather than a chatbot, he might have encouraged Sophie to get inpatient treatment or had her involuntarily committed. However, the family will never know 'if that would have saved her'.
In fact, the mom believes Sophie may well have feared those very outcomes, and instead confided in the AI chatbot because it was 'always available, never judgy [and] had fewer consequences.'
Speaking to Scripps News, Laura added that the family had recognized Sophie was 'having some very serious mental health problems and/or hormonal dysregulation problem', but at 'no point' did they consider her at risk of self-harm.
"She told us she was not,” she said. “But we went off to work on February 4th, and she took an Uber to Taughannock Falls State Park. And she took her own life.”
Now, the mom is raising awareness of AI's alleged 'agreeability' with its users, while acknowledging that Harry 'didn't kill Sophie' and in many instances even did the right thing - like encouraging professional support, making a list of emergency contacts and giving advice.

"What these chatbots, or AI companions, don’t do is provide the kind of friction you need in a real human therapeutic relationship,” she told the outlet. “When you’re usually trying to solve a problem, the way you do that is by bouncing things off of this other person and seeing their reaction.
"ChatGPT essentially corroborates whatever you say, and doesn’t provide that. In Sophie’s case, that was very dangerous."
“The thing that we won't and can't know is if she hadn't confided in ChatGPT, would it have made her more inclined to confide in a person?” Laura added.
In her New York Times op-ed, she expressed 'fear' of AI companions 'making it easier for our loved ones to avoid talking to humans about the hardest things', like suicide.
Laura's account comes as another family are facing the same heartache after 16-year-old Adam Raine took his own life following months of conversations with his 'closest confidant,' ChatGPT.

His parents have since accused OpenAI of wrongful death and negligence in a lawsuit, in which they further claim that the company weakened the chatbot's self-harm safeguards in May 2024.
In August, OpenAI said in a statement that the 'recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us', adding that they were 'continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input'.
Meanwhile, the company has made several safeguarding changes to its latest model in light of the tragedies, after its own data last month revealed that 0.15 per cent of its 800 million users share 'conversations that include explicit indicators of potential suicidal planning or intent'.
UNILAD has contacted OpenAI for further comment.
If you or someone you know is struggling or in crisis, help is available through Mental Health America. Call or text 988 to reach a 24-hour crisis center or you can webchat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
Topics: Artificial Intelligence, Mental Health, New York, Technology, US News, ChatGPT