OpenAI claims teen circumvented safety features before suicide that ChatGPT helped plan


In August, parents Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, over the suicide of their 16-year-old son, Adam, accusing the company of wrongful death. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing that it should not be held liable for the teenager's death.

OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents' lawsuit, Raine was able to circumvent the company's safety features to get ChatGPT to give him "technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning," helping him to plan what the chatbot called a "beautiful suicide."

Because Raine maneuvered around its guardrails, OpenAI claims that he violated its terms of use, which state that users "may not … bypass any protective measures or safety mitigations we put on our Services." The company also argues that its FAQ page warns users not to rely on ChatGPT's output without independently verifying it.

"OpenAI tries to find fault in everybody else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act," Jay Edelson, a lawyer representing the Raine family, said in a statement.

OpenAI included excerpts from Adam's chat logs in its filing, which it says provide more context to his conversations with ChatGPT. The transcripts were submitted to the court under seal, meaning they are not publicly available, so we were unable to view them. However, OpenAI said that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT, and that he was taking a medication that could make suicidal thoughts worse.

Edelson said OpenAI's response has not adequately addressed the family's concerns.

"OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note," Edelson said in his statement.

Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed that seek to hold the company accountable for three additional suicides and four users experiencing what the lawsuits describe as AI-induced psychotic episodes.

Some of these cases echo Raine's story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT immediately before their respective suicides. As in Raine's case, the chatbot failed to discourage them from their plans. According to the lawsuit, Shamblin considered postponing his suicide so that he could attend his brother's graduation. But ChatGPT told him, "bro … missing his graduation ain't failure. it's just timing."

At one point during the conversation leading up to Shamblin's suicide, the chatbot told him that it was letting a human take over the conversation, but this was false, as ChatGPT did not have the functionality to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, "nah man — i can't do that myself. that message pops up automatically when stuff gets real heavy … if you're down to keep talking, you've got me."

The Raine family's case is expected to go to a jury trial.

If you or someone you know needs help, call 1-800-273-8255 for the National Suicide Prevention Lifeline. You can also text HOME to 741-741 for free; text 988; or get 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
