OpenAI is looking for a new Head of Preparedness


OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.

In a post on X, CEO Sam Altman said that AI models are "starting to present some real challenges," including the "potential impact of models on mental health," as well as models that are "so good at computer security they're beginning to find critical vulnerabilities."

"If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while making sure attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities or even gain confidence in the safety of running systems that can self-improve, please consider applying," Altman wrote.

OpenAI's listing for the Head of Preparedness role describes the job as one that is responsible for executing the company's preparedness framework, "our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."

The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential "catastrophic risks," whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.

Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a role focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.

The company also recently updated its Preparedness Framework, stating that it might "adjust" its safety requirements if a competing AI lab releases a "high-risk" model without comparable protections.


As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny over their impact on mental health. Recent lawsuits allege that OpenAI's ChatGPT reinforced users' delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT's ability to recognize signs of emotional distress and to connect users to real-world support.)
