After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he'd found a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, alleging the company's technology enabled the acceleration of her harassment, Trendster has exclusively learned. She claims OpenAI ignored three separate warnings that the man posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.
The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the man's account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his full chat logs for discovery.
OpenAI has agreed to suspend the man's account but has refused the rest, according to Doe's attorneys. They say the company is withholding information about specific plans for harming Doe and other potential victims the man may have discussed with ChatGPT.
The lawsuit lands amid growing concern over the real-world dangers of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.
The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google's Gemini fueled his delusions and a potential mass-casualty plot before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.
That legal pressure is now colliding directly with OpenAI's legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.
OpenAI did not respond in time for comment. Trendster will update the article if the company responds.
The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of "high volume, sustained use of GPT-4o." When no one took his work seriously, ChatGPT told him that "powerful forces" were watching him, including using helicopters to surveil his movements, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was "a level 10 in sanity" and helped him double down on his delusions, per the lawsuit.
Doe had broken up with the man in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the man continued to spiral. In August 2025, OpenAI's automated safety system flagged him for "Mass Casualty Weapons" activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the man sent to Doe showed a list of conversation titles including "violence list development" and "fetal suffocation calculation."
The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI's safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida's attorney general this week opened an investigation into OpenAI's possible link to the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker's account, his Pro subscription wasn't reinstated along with it. He emailed the trust and safety team to sort it out, copying Doe on the message.
In his emails, he wrote things like: "I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!" and "this is a matter of life or death." He claimed he was "in the process of writing 215 scientific papers," which he was writing so fast he didn't "even have time to read." Included in these emails was a list of dozens of AI-generated "scientific papers" with titles like: "Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt."
"The user's communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct," the lawsuit states. "The user's stream of urgent, disorganized, and grandiose claims, including a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported 'scientific' materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access."
Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.
"For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise," Doe wrote in her letter to OpenAI requesting the company permanently ban the man's account.
OpenAI responded, acknowledging the report was "extremely serious and troubling" and that it was carefully reviewing the information. Doe never heard back.
Over the next couple of months, the man continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe's attorneys allege this validates warnings both she and OpenAI's own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The man was found incompetent to stand trial and committed to a mental health facility, but a "procedural failure by the State" means he will soon be released to the public, according to Doe's attorneys.
Edelson called on OpenAI to cooperate. "In every case, OpenAI has chosen to hide critical safety information: from the public, from victims, from people its product is actively putting in danger," he said. "We're calling on them, for once, to do the right thing. Human lives must mean more than OpenAI's race to an IPO."