New York state lawmakers passed the RAISE Act on Thursday. It aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people or more than $1 billion in damages.
The RAISE Act's passage represents a victory for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump administration have prioritized speed and innovation. Safety advocates such as Nobel laureate Geoffrey Hinton and AI researcher Yoshua Bengio have championed the RAISE Act. If it becomes law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.
The RAISE Act shares some provisions and goals with California's controversial AI safety bill, SB 1047, which was ultimately vetoed. But New York state Sen. Andrew Gounardes, a co-sponsor of the bill, told TechCrunch in an interview that the RAISE Act was deliberately designed so as not to chill innovation among startups or academic researchers.
"The window to put guardrails in place is rapidly shrinking given how fast this technology is evolving," Sen. Gounardes said. "The people who know [AI] best say that these risks are incredibly likely […] That's alarming."
The RAISE Act is now headed to the desk of New York Gov. Kathy Hochul, who can either sign the bill into law, send it back for amendments, or veto it outright.
If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill would also require AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model. If tech companies fail to meet these standards, the RAISE Act empowers New York's attorney general to bring civil penalties of up to $30 million.
The RAISE Act aims to narrowly regulate the world's largest companies, whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly more than any AI model available today) and that are being made available to New York residents.
In some ways, the RAISE Act was designed to address previous criticisms of AI safety bills like SB 1047, according to Nathan Calvin, vice president of state affairs and general counsel at Encode, who worked on this bill as well as SB 1047.
Nevertheless, Silicon Valley has pushed back significantly against New York's AI safety bill, state Assemblymember Alex Bores, a co-sponsor of the RAISE Act, told TechCrunch. Bores called the industry's resistance unsurprising but argued that the RAISE Act would not limit innovation at tech companies in any way.
Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not reached an official stance on the bill, co-founder Jack Clark said in a post on X on Friday.
When asked about Anthropic's criticism, state Sen. Gounardes told TechCrunch that he thought it "misses the mark," noting that he designed the bill so it would not apply to small companies.
OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.
Another common criticism of the RAISE Act is that AI model developers would simply choose not to offer their most advanced AI models in New York. A similar criticism was brought against SB 1047, and it is largely what has played out in Europe, thanks to the continent's tough regulations on technology.
Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore should not require tech companies to stop offering their products in New York. Given that New York has the third-largest GDP in the United States, pulling out of the state is not something most businesses would take lightly.
"I don't want to underestimate the political pettiness that could happen, but I am confident that there is no economic reason for them to not make their models available in New York," said Assemblymember Bores.