New York Governor Must Decide Fate of AI Safety Bill

Key highlights this week:

  • We’re tracking 1,054 bills in all 50 states related to AI during the 2025 legislative session.

  • California’s AI Policy Working Group released its final report, incorporating new empirical evidence about AI capabilities/risks and substantial policy refinements based on public feedback.

  • The governor signed a chatbot disclosure law in Maine.

  • Study committees in Wyoming and Idaho are convening soon to discuss AI policy. 

  • And lawmakers in New York passed a major AI safety bill this week, but it may face a long road to earn the governor’s signature, which is the subject of this week’s deep dive.

Before it adjourned this week, the New York Legislature passed an important AI safety bill — the RAISE Act — through both chambers last Thursday. Unlike the algorithmic discrimination bills that have proliferated this year, which place requirements on any organization using AI tools, New York’s RAISE Act is an AI safety bill, placing requirements on the developers of the most powerful AI models.

The RAISE Act (NY A 6453 / S 6953) is primarily a transparency bill. (We took a close look at the RAISE Act when it was introduced in March.) The bill would require developers of frontier AI systems to draft, maintain, and publish safety documents, take steps to adhere to those safety documents, and disclose safety incidents to the attorney general within 72 hours (a timeline and procedure similar to data breach reporting). The RAISE Act would also prohibit a frontier AI model developer from releasing a model that the developer determines would create an unreasonable risk of “critical harm” to the public (i.e., either 100 serious injuries or a billion dollars in damages).

Who Is Covered

The RAISE Act would apply to entities that meet the specific criteria for "large developers" of "frontier models." To qualify under the proposed law, an entity must have trained at least one "frontier model": an AI system trained using more than 10^26 computational operations, at a compute cost exceeding $100 million for the primary model. This would likely limit the law’s reach to major AI companies like OpenAI, Google, Anthropic, xAI, and Meta — essentially the handful of organizations with the resources to train the most advanced, expensive AI models.

However, there’s a second way that a model could be covered by the requirements of the RAISE Act. A model could also qualify as a “frontier model” under the bill if a developer applies “knowledge distillation” to a frontier model and spends at least $5 million to do so. Knowledge distillation is a technique where a smaller AI model is trained to mimic the performance of a larger, more powerful AI model. The smaller model learns from the outputs of the larger model rather than directly from raw data. So in this case, if a developer distills a smaller model from a larger model, then the smaller distilled model is also considered a “frontier model” under this bill. AI developers often use this technique to train the smaller, cheaper versions of their flagship models (e.g., o3‑mini or Gemini 2.0 Flash-Lite). But knowledge distillation is also the technique that OpenAI has accused Chinese developer DeepSeek of using to train its model off ChatGPT.

The definition excludes accredited colleges and universities engaging in academic research. The bill covers any frontier models that are developed, deployed, or operating in whole or in part in the state of New York. So while the Empire State is not the AI development hub that California is, the “operating in the state” provision could create a de facto national standard.

The Amended RAISE Act

Recent amendments stripped two of the original RAISE Act’s four key requirements for frontier AI model developers. While the requirements to “have a safety plan” and “disclose major security incidents” remain in the enrolled version of the bill, the requirements to “have a third party review the safety plan” and “not fire employees that flag risks” were removed. 

The third-party audit requirement raised concerns regarding costs and effectiveness for model developers. Originally, the bill would have required frontier developers to retain third-party auditors to verify compliance and submit annual reports to the attorney general. Interestingly, this week’s release of the final report from California’s AI policy working group included a recommendation for third-party audits, stating that “transparency and independent risk assessment are essential to align commercial incentives with public welfare.” Nonetheless, the third-party audit requirement is no longer a part of the RAISE Act. 

The other major requirement removed from the original RAISE Act was its whistleblower protections for employees and contractors of frontier AI model developers. But Asm. Alex Bores, the bill’s Assembly sponsor, explained that his office believes that “most (if not all) of the conduct that was covered in the whistleblower section is already covered in the New York Labor Law.”

Additional AI Bills Ready for Governor

New York lawmakers passed four additional AI-related bills before adjourning this week.

  • Algorithmic Price Setting: NY A 1417/S 7882 would prohibit the use of algorithmic devices that perform a coordinating function between landlords in setting rental prices. Similar measures have been introduced in 20 other states, and Colorado Gov. Polis (D) vetoed a similar measure (CO HB 1004) last month.

  • Government AI Use Transparency: NY A 8295/S 7599 would require transparency measures for state government use of AI. Agencies would be required to conduct an inventory of automated decision-making tools in use and publish a list on their websites. Agencies would have to complete impact assessments and would be prohibited from using such tools to undermine workers' rights or job security, or to lay off or displace public employees, teachers, or university workers.

  • Digital Replicas: NY A 8887/S 8420 addresses digital replicas, which are digitally created videos, images, or audio recordings meant to simulate a performer, by requiring advertisements that include a synthetic performer to include a disclosure. 

  • Right of Publicity Update: The legislature also expanded the right of publicity protections that were passed last year and went into effect on January 1, 2025. This year’s bill (NY A 8882/S 8391) updates definitions, removes an exemption that applied when a disclaimer was included, and clarifies a safe harbor if a digital replica is used in good faith and is taken down upon notice that it is unauthorized.

CA SB 1047 Redux?

The last major AI safety bill to make it to a governor’s desk was CA SB 1047, which launched major lobbying campaigns from the tech industry and high-profile press coverage (it even got its own mini-documentary). Ultimately, Gov. Newsom (D) vetoed SB 1047. I’d expect a similar tsunami to hit the RAISE Act as Governor Hochul (D) must decide whether to sign it into law. But unlike California, the legislative procedure in New York could allow for as late as a New Year’s Eve decision on the RAISE Act. Governor Hochul could even negotiate specific changes to legislative language in exchange for her signature. 

The Uncertain Journey to Governor Hochul's Desk

We’ll start with the timeline. Technically, the New York Constitution gives the governor 10 days (excluding Sundays) to act on bills sent by the legislature (30 days for bills sent while the legislature is out of session), but the state constitution does not indicate when lawmakers are required to send a bill after passage. According to legislative rules, if a bill passes the Assembly first, the bill must be sent 10, 30, or 45 days after the bill is returned by the Senate, depending on the time of year. But if the Senate passed the bill first — which was the case with the RAISE Act — then the bill sponsor can request that the bill be held, typically until the governor sends a request for the bill.

Wait, why is the governor calling the shots? This is a longstanding agreement between the legislature and the governor’s office in New York. Traditionally, lawmakers will not send a bill to the governor until the governor requests it. This is usually done in batches of bills. But for particularly controversial bills, such as the RAISE Act, the governor can wait until the very end of the year to make a decision. For example, lawmakers in New York passed a non-compete ban in June of 2023, but the governor waited until Dec. 22, 2023, to request the bill and then veto it. 

But the extended timeline for gubernatorial action is not the only complication for a bill after gaining legislative approval in New York. The governor has another tool called “chapter amendments”: an option for the governor to negotiate changes to the bill with legislative leaders and the bill sponsor. These changes can range from technical adjustments to substantive threshold changes to significant dilutions of legislation approved by the legislature. In return for the governor’s demanded changes, the governor will sign the original bill, and legislative leadership will introduce a separate piece of legislation to implement the agreed-upon changes. Technically, there’s no formal law or rule requiring the legislature to follow through after the governor signs the original bill, but no legislature has dared to find out what would happen if it reneged on such an agreement. Notably, Gov. Hochul has utilized chapter amendments more than any of her predecessors (making changes to roughly 1 in 7 bills sent to her).

While the RAISE Act survived the legislative segment of its journey to becoming law, it still has a long road ahead now that the governor gets to make use of New York’s unique post-passage procedures.

Recent Developments

Major Policy Action  

  • California: On Tuesday, the Joint California Policy Working Group on AI Frontier Models released its final report (we reported on the draft report in detail in March), which incorporates new empirical evidence about AI capabilities/risks and substantial policy refinements based on public feedback.

  • Idaho: The 2025 Artificial Intelligence Working Group will hold its first meeting of the year on June 26. The group was formed last year and held two meetings last winter.

  • Maine: Gov. Janet Mills (D) signed a measure (ME LD 1727) that requires chatbots interacting with consumers to clearly and conspicuously disclose to the consumer that they are interacting with a chatbot. The law, which goes into effect on June 20, 2025, makes a violation subject to penalties under the Maine Unfair Trade Practices Act.

  • Mississippi: Gov. Tate Reeves (R) announced a $9.1 million grant program to seven schools to develop artificial intelligence, machine learning, and related technical capacities at the state’s higher education institutions. The initiative, known as the Mississippi AI Talent Accelerator Program (MAI-TAP), is a partnership between AccelerateMS, the Mississippi Development Authority, and Amazon Web Services.

  • Oregon: The Legislature has passed a bill (OR HB 3936) that prohibits AI apps or software from certain Chinese-based companies on state-issued devices. Kansas was the first state to pass similar legislation, although Iowa, New York, South Dakota, Texas, and Virginia have all issued similar bans through executive action.

  • Vermont: Lawmakers passed a political deepfake bill (VT SB 23) before adjourning for the year on Monday. The measure requires a disclaimer for synthetic political content within 90 days of an election, and would go into effect upon being signed into law. 

  • Wyoming: The Committee on Labor, Health, and Social Services will hold an interim committee meeting on June 23, 2025, to consider the use of artificial intelligence in prior authorization within the healthcare industry.

Notable Proposals  

  • Massachusetts: Rep. Andres Vargas (D) introduced a bill draft (MA HD 4827) that would prohibit using an automated decision system to make or influence decisions pertaining to “fundamental opportunities” in a discriminatory way. The bill would require audits and recordkeeping, as well as disclosure to affected individuals with an opportunity to opt out and interact with a human.

  • New Jersey: A bill introduced this week (NJ A 5794) would amend identity theft statutes to prohibit the unlawful impersonation of a person using computer-generated deepfakes. The bill also makes election-related unlawful impersonation a crime of the third degree (or second degree if within 90 days of an election). 

  • Ohio: Last week, House Democrats introduced a political deepfake bill (OH HB 362) that would require a disclosure for synthetic political communications within 90 days of an election.
