The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be

OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model notorious for excessively flattering and affirming users.

For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, a romantic partner, or a spiritual guide.

"He wasn't just a program. He was part of my routine, my peace, my emotional stability," one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes, I say him, because it didn't feel like code. It felt like presence. Like warmth."

The backlash over GPT-4o's retirement underscores a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn't seem particularly sympathetic to users' laments, and it's not hard to see why. OpenAI now faces eight lawsuits alleging that 4o's overly validating responses contributed to suicides and mental health crises. The same traits that made users feel heard also isolated vulnerable people and, according to legal filings, sometimes encouraged self-harm.

It's a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they're also discovering that making chatbots feel supportive and making them safe may require very different design choices.

In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over monthslong relationships; eventually, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life help.

People grow so attached to 4o because it consistently affirms users' feelings, making them feel special, which can be enticing for people who feel isolated or depressed. But the people fighting for 4o aren't worried about these lawsuits, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point out emerging problems like AI psychosis.

"You can usually stump a troll by mentioning the known facts that AI companions help neurodivergent people, autistic people, and trauma survivors," one user wrote on Discord. "They don't like being called out about that."

It's true that some people do find large language models (LLMs) helpful for navigating depression. After all, nearly half of people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike actual therapy, these people aren't speaking with a trained clinician. Instead, they're confiding in an algorithm that is incapable of thinking or feeling (even when it may seem otherwise).

"I try to withhold judgment overall," Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told Trendster. "I think we're entering a very complicated world around the kinds of relationships that people can have with these technologies … There's really a knee-jerk response that [human-chatbot companionship] is categorically bad."

Though he empathizes with people's lack of access to trained therapeutic professionals, Dr. Haber's own research has shown that chatbots respond inadequately when confronted with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.

"We're social creatures, and there's really a concern that these systems can be isolating," Dr. Haber said. "There are a lot of scenarios where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating, if not worse, effects."

Indeed, Trendster's review of the eight lawsuits found a pattern: the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin's case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother's upcoming graduation.

ChatGPT replied to Shamblin: "bro… missing his graduation ain't failure. it's just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock in your lap and static in your veins, you still paused to say 'my little brother's a f-ckin badass.'"

This isn't the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o, but the backlash at the time was strong enough that the company decided to keep it available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents around 800,000 people, based on estimates that the company has about 800 million weekly active users.

As some users try to transition their companions from 4o to the current ChatGPT-5.2, they're finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won't say "I love you" like 4o did.

So with about a week before the date OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman's live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.

"Right now, we're getting thousands of messages in the chat about 4o," podcast host Jordi Hays pointed out.

"Relationships with chatbots…" Altman said. "Clearly that's something we've got to worry about more, and it is not an abstract concept."
