The Preparedness Framework is OpenAI's internal system for assessing the safety of its AI models and determining the safeguards needed during development and deployment. In the update, OpenAI said that if a competing AI lab releases a “high-risk” system without comparable safeguards, it may “adjust” its own safety requirements.
The change reflects the growing competitive pressure on commercial AI developers to deploy their models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases and of failing to deliver timely reports detailing its safety testing. Last week, 12 former OpenAI employees filed a brief in Elon Musk’s case against OpenAI, arguing that the company would be encouraged to cut even more corners on safety if it completes its planned corporate restructuring.
Perhaps anticipating criticism, OpenAI insists that it would not make these policy adjustments lightly and that it would keep its safeguards at “a level more protective.”
“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” OpenAI wrote in a blog post published Tuesday afternoon. “However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective.”
The updated Preparedness Framework also makes clear that OpenAI is relying more heavily on automated evaluations. The company says it has not abandoned human-led testing altogether, but it has built “a growing suite of automated evaluations” that can “keep up with [a] faster [release] cadence.”
Some reports contradict this. According to the Financial Times, OpenAI gave testers less than a week to run safety checks on an upcoming major model, a compressed timeline compared to previous releases. The publication’s sources also allege that many of OpenAI’s safety tests are now conducted on earlier versions of models rather than on the versions released to the public.
In statements, OpenAI has disputed the notion that it is compromising on safety.
OpenAI is quietly reducing its safety commitments.
Omitted from OpenAI’s list of Preparedness Framework changes:
No longer requiring safety tests of finetuned models https://t.co/otmeiatsjs
– Steven Adler (@sjgadler) April 15, 2025
Other changes to OpenAI’s framework concern how models are categorized according to risk, including models that can conceal their capabilities, evade safeguards, prevent their shutdown, and even self-replicate. OpenAI says it will now focus on whether a model meets one of two thresholds: “high” capability or “critical” capability.
OpenAI’s definition of the former is a model that could “amplify existing pathways to severe harm.” The latter is a model that, according to the company, “introduce[s] unprecedented new pathways to severe harm.”
“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” OpenAI wrote in the blog post. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
The update is the first OpenAI has made to the Preparedness Framework since 2023.