Monday, December 29, 2025

OpenAI seeks new security chief as Altman flags rising dangers • The Register


How'd you like to earn more than half a million dollars working for one of the world's fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn't stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight.

Altman flagged an opening for the company's Head of Preparedness on Saturday in a post on X. Describing the role, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI's systems and understanding how they could be abused, Altman also noted that AI models are beginning to present "some real challenges" as they rapidly improve and gain new capabilities.

"The potential impact of models on mental health was something we saw a preview of in 2025," Altman said, without elaborating on specific cases or products.

AI has been flagged as an increasingly common trigger of mental health troubles in both juveniles and adults, with chatbots reportedly linked to several deaths in the past year. OpenAI, one of the most popular chatbot makers on the market, rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful or destabilizing user behavior.

Despite that, OpenAI released ChatGPT-5.1 last month, which included a number of emotional dependence-nurturing features, like the inclusion of emotionally suggestive language, "warmer, more intelligent" responses, and the like. Sure, it might be less sycophantic, but it will speak to you with more intimacy than ever before, making it feel more like a human companion instead of the impersonal, logical ship computer from Star Trek that spits facts with little regard for feeling.

It's no wonder the company needs someone to steer the ship when it comes to model safety.

"We have a strong foundation of measuring growing capabilities," Altman said, "but we're entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused."

According to the job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI's preparedness framework [PDF], which the company describes as its approach "to tracking and preparing for frontier capabilities that create new risks of severe harm."

It's not a new role, mind you, but it's one that has seen more turnover than the Defense Against the Dark Arts teaching post at Hogwarts.

Aleksander Madry, director of MIT's Center for Deployable Machine Learning and faculty lead at the Institute's AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI reassigned him to a reasoning-focused research role.

This, mind you, came in the wake of a number of high-profile safety leadership exits at the company and a partial reset of OpenAI's safety team structure.

In Madry's place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both occupied other roles at OpenAI prior to heading up preparedness, but neither lasted long in the position. Weng left OpenAI in November 2024, while Candela left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he is out of the technical arena entirely and is now serving as head of recruiting.

"This is a stressful job and you will jump into the deep end pretty much immediately," Altman said of the open position.

Understandably so: OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have attested. One executive who left the company in October called the Altman outfit out for not being appropriately focused on safety and the long-term effects of its AGI push, suggesting that the company was pressing ahead in its goal to dominate the industry at the expense of the rest of society.

Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted.

OpenAI did not reply to questions for this story. ®
