Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he'd found a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company's technology enabled the acceleration of her harassment, Trendster has exclusively learned. She claims OpenAI ignored three separate warnings that the man posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.

The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the man's account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his full chat logs for discovery.

OpenAI has agreed to suspend the man's account but has refused the rest, according to Doe's attorneys. They say the company is withholding information about specific plans for harming Doe and other potential victims the man may have discussed with ChatGPT.

The lawsuit lands amid rising concern over the real-world dangers of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google's Gemini fueled his delusions and a potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.

That legal pressure is now colliding directly with OpenAI's legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.

OpenAI did not respond in time for comment. Trendster will update the article if the company responds.

The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.

Last year, the ChatGPT user in the lawsuit (whose name is not included in the filing to protect his identity) became convinced that he had invented a cure for sleep apnea after months of "high volume, sustained use of GPT-4o." When nobody took his work seriously, ChatGPT told him that "powerful forces" were watching him, including using helicopters to surveil his movements, according to the complaint.

In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was "a level 10 in sanity" and helped him double down on his delusions, per the lawsuit.

Doe had broken up with the man in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in a number of AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.

Meanwhile, the man continued to spiral. In August 2025, OpenAI's automated safety system flagged him for "Mass Casualty Weapons" activity and deactivated his account.

A human safety team member reviewed the account the next day and restored it, although his account may have contained evidence that he was targeting and stalking people, including Doe, in real life. For example, a September screenshot the man sent to Doe showed a list of conversation titles including "violence list development" and "fetal suffocation calculation."

The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI's safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida's attorney general this week opened an investigation into OpenAI's possible link with the FSU shooter.

According to the Jane Doe lawsuit, when OpenAI restored her stalker's account, his Pro subscription wasn't reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message.

In his emails, he wrote things like: "I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!" and "this is a matter of life or death." He claimed he was "in the process of writing 215 scientific papers," which he was writing so fast he didn't "even have time to read." Included in these emails was a list of dozens of AI-generated "scientific papers" with titles like: "Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt."

"The user's communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating behavior," the lawsuit states. "The user's stream of urgent, disorganized, and grandiose claims, including a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported 'scientific' materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access."

Doe, who claims in the lawsuit that she was living in fear and couldn't sleep in her own home, submitted a Notice of Abuse to OpenAI in November.

"For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise," Doe wrote in her letter to OpenAI requesting the company permanently ban the man's account.

OpenAI responded, acknowledging the report was "extremely serious and troubling" and that it was carefully reviewing the information. Doe never heard back.

Over the next couple of months, the man continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe's attorneys allege this validates warnings both she and OpenAI's own safety systems had raised months earlier, warnings the company allegedly chose to ignore.

The man was found incompetent to stand trial and committed to a mental health facility, but a "procedural failure by the State" means he will soon be released to the public, according to Doe's attorneys.

Edelson called on OpenAI to cooperate. "In every case, OpenAI has chosen to hide critical safety information from the public, from victims, from people its product is actively putting in danger," he said. "We're calling on them, for once, to do the right thing. Human lives must mean more than OpenAI's race to an IPO."
