New York Finalizes AI Safety Law After Last-Minute Negotiations (RAISE Act)

Weekly Update, Vol. 83.

It’s a new year, which means that 46 states will convene their 2026 legislative sessions, many of which will have a heavy focus on AI policy. Last year, we saw well over 1,000 AI-related bills introduced, with over 100 laws enacted. Most states are entering the second year of a two-year legislative session, and with many elections taking place in November, AI policy is expected to be a top issue.

But before we get into 2026, we wanted to take a moment to recap the important AI policy enacted last year in New York, a state that, in usual fashion, waited right up until the final moments of 2025 to make final decisions on key bills. Perhaps the most high-profile AI proposal in the Empire State was the RAISE Act, which the governor signed in late December, but with significant changes aligning the bill with California’s own AI safety law enacted last year.

Before we dig into what the final version of the RAISE Act will do and how it compares to the legislature-approved version and California’s AI safety law (CA SB 53), we must get through some legislative procedure to explain how we got here in the first place. 

How’d we get here?

Lawmakers approved the RAISE Act (NY AB 6453 / SB 6953) back in June 2025, but because of some unique aspects of the legislative process in New York, the governor had until the end of the year to act on the bill. Speculation about whether she would sign or veto the bill stretched until December 19, when Gov. Hochul (D) signed it into law, but with significant changes.

The extended timeline for gubernatorial action is not the only complication a bill faces after gaining legislative approval in New York. The governor has another tool called “chapter amendments,” an option that lets the governor negotiate changes to a bill with legislative leaders and the bill sponsor. These changes can range from technical adjustments to substantive threshold changes to significant dilutions of legislation approved by the legislature. In return for the requested changes, the governor signs the original bill, and legislative leadership introduces a separate piece of legislation to implement the agreed-upon changes.

That’s what happened here. After the governor reportedly threatened to replace the entire legislative text of the RAISE Act with the AI safety law enacted in California or else veto the measure outright, the bill’s sponsor, Assemblymember Alex Bores (D), and Governor Hochul (D) negotiated a compromise that replaced much of the text with language from the similar California law while preserving several important provisions from the RAISE Act as approved by lawmakers. Lawmakers introduced the legislative vehicle (NY AB 9449) to incorporate those chapter amendments to the RAISE Act a day before the New York Legislature gaveled into its 2026 session on Wednesday.

With all that procedural preamble out of the way, what will the new AI safety law in New York do? 

Who needs to comply?

The final version of the RAISE Act conforms to the California law’s definition of which models are covered. RAISE originally applied to “frontier” AI models that met both a computational threshold (trained using greater than 10^26 computational operations) and a training cost threshold (compute costs exceeding $100 million). The new version instead matches California’s bifurcation, covering both developers of “frontier” models (trained using greater than 10^26 computational operations) and developers of “large” frontier models (the same compute threshold plus over $500 million in annual gross revenue), with heightened reporting requirements on the latter. At the end of the day, both proposals successfully cover the most advanced developers of AI models.
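To make the bifurcation concrete, here is a minimal sketch of the coverage tiers expressed in Python. The threshold values mirror the laws as described above; the function name, variable names, and structure are our own illustration for readers who think in code, and nothing like this appears in either statute.

```python
# Illustrative sketch only: the RAISE Act / CA SB 53 coverage tiers as a
# simple classifier. The constants mirror the thresholds described above;
# the names and structure are invented for this example.

FRONTIER_COMPUTE_OPS = 10**26              # training compute threshold (operations)
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue threshold (USD)

def coverage_tier(training_ops: float, annual_revenue_usd: float) -> str:
    """Classify a developer under the final RAISE Act's two tiers."""
    if training_ops <= FRONTIER_COMPUTE_OPS:
        return "not covered"                # below the frontier compute threshold
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer"   # heightened reporting requirements apply
    return "frontier developer"

# Example: a model trained with 2 x 10^26 operations by a developer with
# $600 million in annual revenue lands in the top tier.
print(coverage_tier(2e26, 600_000_000))  # -> large frontier developer
```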

However, the original RAISE Act also applied to “distilled” models, i.e., smaller AI models trained to mimic the performance of a larger, more powerful model. In the final version, distilled models are left out of the definition of frontier models and are therefore not covered by the law’s requirements.

The final RAISE Act, similar to earlier versions, exempts accredited colleges and universities engaged in academic research. It also explicitly limits the scope of the law to “frontier models that are developed, deployed, or operating in whole or in part in New York.” Neither of these provisions is found in the California law, and the latter provision could help protect the RAISE Act from legal challenges under the Dormant Commerce Clause.

Have a safety plan

Now that we know which AI models are covered by the new law, how well does the final version of the RAISE Act comport with the memo that Asm. Bores released when he first introduced the bill? That memo outlined that “when dealing with one of the most promising and dangerous technologies humans have ever developed, labs need to do four things”:

  1. Have a safety plan;

  2. Have a third-party review the safety plan;

  3. Not fire employees that flag risks; and

  4. Disclose major security incidents.

The first requirement of “have a safety plan” is still very much the core part of the law. The original RAISE Act would have required covered model developers to implement and publish a safety and security protocol, including protections against unauthorized access and misuse. A redacted version of the protocol would be available to the public, but the New York Attorney General could access the unredacted version. 

Under the final version, in order to align with the California law, a new office within the New York Department of Financial Services (DFS) will oversee the law and receive the safety and security reports from developers instead of the AG. As in the California law, the reporting requirements distinguish between frontier developers and “large” frontier developers. Frontier developers will need to publish transparency reports before deploying new frontier models, including information about the model's intended uses, supported languages, and restrictions. Additionally, frontier developers that qualify as “large” under the law (those with over $500 million in annual revenue) are required to publish and comply with a frontier AI framework detailing how they assess and mitigate catastrophic risks.

Lastly, the final version of the RAISE Act will require large frontier developers to file disclosure statements with the state, renewed every two years, and to pay pro rata assessments to fund the regulatory office. This pro rata assessment provision was not found in earlier versions of the bill or in the California law.

Going forward, the RAISE Act authorizes DFS to adopt regulations to implement the law as needed. The law specifically states that DFS may consider “additional reporting or publication requirements for information to facilitate safety and transparency.”

Have a third-party review the safety plan

The second requirement outlined in Asm. Bores’ original memo was actually removed in a previous amendment to the bill. As introduced, the legislation would have required large developers to retain third-party auditors to verify compliance and submit annual reports to the attorney general. The final report from California’s AI policy working group included a recommendation for third-party audits, stating that “transparency and independent risk assessment are essential to align commercial incentives with public welfare.” A third-party audit requirement did not make it into the final versions of either the New York or California laws, but we expect AI safety advocates to push to expand the current laws to include a third-party audit requirement in the future. 

Not fire employees that flag risks

The whistleblower protections found in the introduced version of the RAISE Act were also removed during a previous amendment to the bill. Originally, the legislation would have established whistleblower protections for employees and contractors of AI developers who report AI-related safety risks. But Asm. Bores explained that his office believes that “most (if not all) of the conduct that was covered in the whistleblower section is already covered in the New York Labor Law.” Notably, whistleblower protections were included in the California law, which is where most of the major AI developers are located. 

Disclose major security incidents

The last requirement outlined in Asm. Bores’ original memo did make it into the final law, although with some modifications. The original RAISE Act would have required covered developers to disclose safety incidents to the attorney general within 72 hours of discovery. The final version requires reporting to the Department of Financial Services rather than the AG. Importantly, the final RAISE Act retains the 72-hour reporting requirement, which differs from the California requirement to report critical safety incidents within 15 days of discovery. The final version also adds a requirement to report incidents that pose an imminent risk of death or serious injury to authorities within 24 hours, mirroring the California law.

Don’t deploy unsafe models

In our analysis of the RAISE Act when it was first introduced, we added a fifth requirement, “don’t deploy unsafe models,” to the four outlined by Asm. Bores in his memo. This provision was ultimately removed in the governor’s chapter amendments.

In the original RAISE Act, developers were prohibited from releasing a model that the developer determined would create an unreasonable risk of “critical harm” to the public. The new version of RAISE eliminates this requirement and shifts the harm threshold from “critical harm” (i.e., either 100 deaths/serious injuries or a billion dollars in damages) to “catastrophic risk” (i.e., either 50 deaths/serious injuries or a billion dollars in damages) to align with the California law. Instead of outright prohibiting a developer from releasing a model it determines would create an unreasonable “catastrophic risk,” the law only requires the developer to submit a report assessing a model’s catastrophic risk.

Enforcement 

The state attorney general remains the law’s enforcer under the final version of the RAISE Act. However, the civil fines developers could face for violating the law are significantly lower under the final version, which allows fines of up to $1 million for initial violations and $3 million for subsequent violations. These are an order of magnitude lower than the $10 million and $30 million fines found in the original bill. By comparison, the California law caps fines at $1 million per violation. There remains no private right of action under either the New York or the California law.

Importantly, the California law went into effect on January 1, 2026, whereas the RAISE Act will now go into effect in 2027. That gives frontier developers a year of complying with the California law before the similar New York law becomes effective. 

Additional New AI Laws in New York

While the RAISE Act was the headliner, New York enacted ten AI-related laws in 2025. Another high-profile measure was slipped into a budget bill signed in May: a law addressing companion chatbots (NY AB 3008/SB 3008) that requires an AI companion to contain protocols for addressing self-harm expressed by a user and to provide notice to the user. We’re expecting AI chatbots to be a major focus for lawmakers in 2026.

Here’s a quick rundown of the remaining eight AI-related laws enacted last year in New York. 

  • NY AB 8295/SB 7599, which requires state agencies to publish a list of automated decision-making tools on their website. 

  • NY AB 8887/SB 8420, which requires advertisements with synthetic performers to include a disclosure. 

  • NY AB 8882/SB 8391, which adds definitions, including “digital replica,” to the state’s right of publicity law. 

  • NY AB 3307/SB 1840, which incorporates recommended amendments to the Uniform Commercial Code regarding emerging technologies. 

  • NY AB 1417/SB 7882, which prohibits rental property owners and managers from agreeing not to compete on rental terms, including through algorithmic devices.

  • NY AB 3003/SB 3003, a budget bill that provided $90 million in capital grants for the Empire AI initiative. 

  • NY AB 3005/SB 3005, a budget bill that prohibited sexual deepfakes depicting child sexual abuse material. 

  • NY SB 822/AB 433, which amends state law on the use of AI by state agencies in hiring.
