New York Governor Must Decide Fate of AI Safety Bill
Key highlights this week:
We’re tracking 1,054 bills in all 50 states related to AI during the 2025 legislative session.
California’s AI Policy Working Group released its final report, incorporating new empirical evidence about AI capabilities/risks and substantial policy refinements based on public feedback.
The governor signed a chatbot disclosure law in Maine.
Study committees in Wyoming and Idaho are convening soon to discuss AI policy.
And lawmakers in New York passed a major AI safety bill this week, but it may face a long road to the governor’s signature; that’s the subject of this week’s deep dive.
Before adjourning this week, the New York Legislature passed an important AI safety bill, the RAISE Act, through both chambers last Thursday. Unlike the algorithmic discrimination bills that have proliferated this year, which place requirements on any organization using AI tools, New York’s RAISE Act places requirements on the developers of the most powerful AI models.
The RAISE Act (NY AB 6453 / SB 6953) is primarily a transparency bill. (We took a close look at the RAISE Act when it was introduced in March.) The bill would require developers of frontier AI systems to draft, maintain, and publish safety documents; take steps to adhere to those documents; and disclose safety incidents to the attorney general within 72 hours of their occurrence (a timeline and procedure similar to data breach reporting). The RAISE Act would also prohibit a frontier AI developer from releasing a model that the developer determines would create an unreasonable risk of “critical harm” to the public (i.e., serious injury to 100 or more people or at least a billion dollars in damages).