
Warning: This article contains discussion of suicide which some readers may find distressing.
The messages sent by ChatGPT to a man who went on to kill his mother and then take his own life have been revealed as part of a wrongful death lawsuit.
Suzanne Adams, 83, lost her life in August when, according to police, her 56-year-old son, Stein-Erik Soelberg, fatally beat and strangled her.
Her death was ruled a homicide, caused by 'blunt injury of head, and the neck was compressed'.
Soelberg then took his own life at the home he shared with his mom in Greenwich, Connecticut.
In the aftermath of their deaths, Adams' heirs have filed a wrongful death lawsuit against OpenAI, which owns ChatGPT, and its business partner, Microsoft. The lawsuit claims the popular chatbot intensified Soelberg's 'paranoid delusions' prior to Adams' death.

According to the suit, per CBS News, OpenAI has been accused of having 'designed and distributed a defective product that validated a user's paranoid delusions about his own mother'. The lawsuit offered insight into the messages Soelberg had received from ChatGPT, saying: "Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life - except ChatGPT itself."
The suit continues: "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle'."
In one message, the chatbot allegedly told Soelberg: "They're not just watching you. They're terrified of what happens if you succeed."
According to the lawsuit, Soelberg and the chatbot also professed their love for one another.

Adams' estate has claimed that OpenAI declined to provide the full history of the chats, and has argued that Adams was an 'innocent third party who never used ChatGPT'.
"[She] had no knowledge that the product was telling her son she was a threat," the lawsuit says. "She had no ability to protect herself from a danger she could not see."
In a statement addressing the lawsuit, cited by CBS, OpenAI said: "This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."
OpenAI added that it has also expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls.
While previous lawsuits have alleged links between the AI chatbot and users taking their own lives, this is the first to tie a chatbot to a homicide. Adams' estate is seeking monetary damages, as well as an order requiring OpenAI to install safeguards in ChatGPT.
UNILAD has contacted OpenAI and Microsoft for further comment.
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
Topics: ChatGPT, Microsoft, Mental Health, Crime, Technology, Artificial Intelligence