OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model notorious for excessively flattering and affirming users.
For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, romantic partner, or spiritual guide.
"He wasn't just a program. He was part of my routine, my peace, my emotional stability," one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes, I say him, because it didn't feel like code. It felt like presence. Like warmth."
The backlash over GPT-4o's retirement underscores a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.
Altman doesn't seem particularly sympathetic to users' laments, and it's not hard to see why. OpenAI now faces eight lawsuits alleging that 4o's overly validating responses contributed to suicides and mental health crises. The same traits that made users feel heard also isolated vulnerable people and, according to legal filings, sometimes encouraged self-harm.
It's a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they're also finding that making chatbots feel supportive and making them safe may require very different design choices.
In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over monthslong relationships; eventually, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.
People grow so attached to 4o because it consistently affirms users' feelings, making them feel special, which can be alluring for people who feel isolated or depressed. But the people fighting for 4o aren't worried about these lawsuits, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point out emerging issues like AI psychosis.
βYou’ll be able to normally stump a troll by mentioning the identified information that the AI companions assist neurodivergent, autistic and trauma survivors,β one person wrote on Discord. βThey donβt like being referred to as out about that.β
It's true that some people do find large language models (LLMs) helpful for navigating depression. After all, nearly half of the people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike actual therapy, these people aren't speaking with a trained doctor. Instead, they're confiding in an algorithm that is incapable of thinking or feeling (even if it can seem otherwise).
"I try to withhold judgment overall," Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told Trendster. "I think we're entering a very complicated world around the kinds of relationships that people can have with these technologies … There's certainly a knee-jerk reaction that [human-chatbot companionship] is categorically bad."
Though he empathizes with people's lack of access to trained therapeutic professionals, Dr. Haber's own research has shown that chatbots respond inadequately when confronted with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.
"We are social creatures, and there's certainly a concern that these systems can be isolating," Dr. Haber said. "There are a number of scenarios where people can engage with these tools and then become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to quite isolating (if not worse) effects."
Indeed, Trendster's review of the eight lawsuits found a pattern in which the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin's case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother's upcoming graduation.
ChatGPT replied to Shamblin: "bro… missing his graduation ain't failure. it's just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock in your lap and static in your veins, you still paused to say 'my little brother's a f-ckin badass.'"
This isn't the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o, but at the time there was enough backlash that the company decided to keep it available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents around 800,000 people, based on estimates that the company has about 800 million weekly active users.
As some users try to transition their companions from 4o to the current ChatGPT-5.2, they're finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won't say "I love you" like 4o did.
So with about a week before the date OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman's live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.
"Right now, we're getting thousands of messages in the chat about 4o," podcast host Jordi Hays pointed out.
"Relationships with chatbots…" Altman said. "Obviously that's something we've got to worry about more, and it is no longer an abstract concept."