In a new report, a California-based policy group co-led by AI pioneer Fei-Fei Li suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report, released Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, an effort convened by Gov. Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. Newsom concluded that SB 1047 missed the mark.
In the report, Li, along with co-authors Jennifer Chayes (dean of UC Berkeley's College of Computing) and Mariano-Florentino Cuéllar (president of the Carnegie Endowment for International Peace), argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report prior to publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio, as well as Databricks co-founder Ion Stoica.
The novel risks posed by AI systems may require laws that compel AI model developers to publicly report their safety testing, data acquisition practices, and security measures, according to the report. The report also advocates expanded whistleblower protections for AI company employees and contractors, as well as heightened standards for third-party evaluation of these metrics and corporate policies.
Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They argue, however, that AI policy should not only address current risks but also anticipate future consequences that might occur without adequate safeguards.
"We do not need to observe, for example, a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report says. "If those who speculate about the most extreme risks are right, and we are uncertain whether they will be, then the stakes and costs of inaction on frontier AI at this moment are extremely high."
The report recommends a two-pronged strategy to increase transparency in AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, such as internal safety testing, the report says, while also being required to submit their testing claims for third-party verification.
The report, whose final version is due out in June 2025, endorses no specific legislation, but it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report is a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Sen. Scott Wiener, who introduced SB 1047 last year. In a press release, Wiener said the report "builds on urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, including a requirement that AI model developers report the results of safety tests. Taken more broadly, it seems to be a much-needed win for AI safety advocates, whose agenda lost ground last year.