California’s Newsom to Decide on AI Safety Bill, Again

Key highlights this week:

  • We’re tracking 1,093 AI-related bills across all 50 states during the 2025 legislative session.

  • A federal moratorium on state AI regulations is back in play in Congress. 

  • And lawmakers in California sent 11 AI-related bills to the governor as they adjourned for the year, including Sen. Wiener’s AI safety bill, setting up a replay of last year’s SB 1047 debate, which is the topic of this week’s deep dive. 

Last week, the California Legislature wrapped up its 2025 legislative session, but not before passing several AI-related bills, including a major AI safety bill aimed at AI model developers (CA SB 53). Known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), Senator Scott Wiener’s (D) measure is designed to protect against catastrophic risk posed by certain “frontier” AI models. If signed into law, it could provide a template for other states to follow.

Unlike the law enacted in Colorado last year, TFAIA focuses only on developers and not “deployers” of AI. Like the RAISE Act passed in New York earlier this summer, the bill seeks to regulate certain “frontier models,” which it defines as “a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.” This compute threshold was originally derived from a 2023 Biden Administration Executive Order and was used in last year’s AI safety bill from Sen. Wiener that was ultimately vetoed. Currently, very few models reach this threshold, although more are expected to reach this mark in the near future. Notably, this threshold is higher than the 10^25 integer or floating-point operations standard set in the European Union AI Act.

Transparency Reports Required for All Frontier Developers

Last year, Gov. Gavin Newsom (D) vetoed Sen. Wiener’s AI safety bill (CA SB 1047), writing in his veto message that he was concerned the measure did not protect against risks from smaller developers. To address this, Sen. Wiener included provisions that would require transparency reports from all frontier developers, not just certain large developers.

Before deploying a model, all frontier developers would be required to clearly and conspicuously publish on the internet a transparency report containing:

  • The internet website of the developer;

  • A mechanism that enables a natural person to communicate with the developer;

  • The release date of the model;

  • The languages supported by the model;

  • The modalities of output supported by the model; 

  • The intended uses of the model; and 

  • Any generally applicable restrictions or conditions on uses of the model.

AI Frameworks Required For “Large” Frontier Developers

The bill would require “large” frontier developers — those with at least $500 million in annual gross revenues — to write, implement, comply with, and clearly and conspicuously publish on the internet an AI framework, updated annually. The framework would detail how the developer incorporates national standards and best practices. It would define and assess the thresholds for determining whether a model has capabilities that could pose a catastrophic risk, and apply mitigations to address that potential risk.

A “catastrophic risk” is a “foreseeable and material risk” that the model will contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage, in a single incident. The law spells out that this includes providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; engaging in conduct with no meaningful human oversight that is either a cyberattack or would constitute murder, assault, extortion, or theft, including theft by false pretense; or evading the control of its frontier developer or user.

The framework would require the developer to review assessments and mitigations when deciding whether to deploy a model, to use third parties to assess potential risk, and to revisit and update the framework over time. It would also require certain cybersecurity practices to secure the model, as well as the identification of and response to critical safety incidents. A “critical safety incident” includes unauthorized access to the model weights of a frontier model that results in death or bodily injury; harm resulting from the materialization of a catastrophic risk; loss of control of a frontier model causing death or bodily injury; or a frontier model that uses deceptive techniques against the developer to subvert controls or monitoring.

Finally, the framework would institute internal governance practices and assess and manage catastrophic risk.

Large frontier developers would also be required to include in their transparency reports:

  • Assessments of catastrophic risk;

  • The results of those assessments;

  • The extent to which third parties were involved; and

  • Other steps taken to fulfill the requirements of the framework.

Large frontier developers would be required to submit a summary of assessments of catastrophic risk to the Office of Emergency Services every three months. 

Whistleblower Protections

The bill would also create whistleblower protections for covered AI developer employees who assess, manage, or address the risk of critical safety incidents for a frontier model. It would prohibit a developer from preventing the disclosure of activities that pose a specific and substantial danger to public health or safety resulting from a catastrophic risk, or that otherwise violate the law.

Agency Action

The bill would require the Office of Emergency Services (OES) to develop mechanisms for developers to report critical safety incidents and to submit assessments of catastrophic risk. OES could adopt rules designating certain federal laws, regulations, or guidance as acceptable substitutes for California’s reporting requirements if they are at least as strict as the state’s standards, and developers could state that they intend to comply with those standards.

The Department of Technology would be authorized to recommend legislation updating the definition of “frontier model” to adapt the scope of the law to changing technology, but it could not adjust the scope through its own rulemaking. This is a notable change from an earlier version of the bill, which would have given the Attorney General the ability to adjust the threshold in the future.

Enforcement

A developer could be held liable for failing to publish a compliant document as required, making a false statement, failing to report an incident, or failing to comply with its own framework, with a civil penalty capped at $1 million per violation in an action that may be brought only by the Attorney General.

Other Provisions

The bill would establish within the Government Operations Agency a consortium to develop a framework for the creation of a public cloud computing cluster to be known as “CalCompute.” The bill also includes a clause that preempts any local ordinances or laws related to frontier developers or the management of catastrophic risk. 

Other Bills

The California Legislature approved ten additional AI-related bills last week, sending them to the governor for his signature or veto:

  • CA AB 316 - Prohibits a developer from asserting a defense in a civil case that artificial intelligence autonomously caused the harm to the plaintiff.

  • CA AB 325 - Prohibits using or distributing a pricing algorithm that uses nonpublic competitor data.

  • CA AB 489 - Prohibits generative AI from implying that care or advice it offers is provided by a licensed person, with violations subject to sanctions from the appropriate health care profession board.

  • CA AB 621 - Creates liability for a service that allows deepfake sexual content.

  • CA AB 853 - Requires online platforms, devices, and generative AI systems to label synthetic content.

  • CA AB 1064 - The Leading Ethical AI Development (LEAD) for Kids Act, which prohibits developers from producing a companion chatbot intended to be used by or on a child.

  • CA SB 7 - The “No Robo Bosses Act,” which would regulate the use of automated decision systems by employers, requiring written notice to employees.

  • CA SB 11 - Requires AI technology that enables a user to create a digital replica to provide a consumer warning that non-consensual production of digital replicas could lead to civil or criminal liability.

  • CA SB 243 - Requires companion chatbot platforms to take reasonable steps to prevent prolonged engagement and to implement protocols for addressing thoughts of suicide or self-harm.

  • CA SB 524 - Requires law enforcement officials to disclose when AI is used to write reports and to retain required documentation.

What’s Next

The legislature will send approved AI bills to Gov. Newsom, who has through October 12 to sign or veto legislation. Newsom has not indicated whether he intends to sign the frontier model bill, and he has expressed concern in the past that too much AI regulation could damage the state’s competitiveness in the industry. Sen. Wiener amended the bill this summer to conform to recommendations made by Gov. Newsom’s AI Working Group and engaged industry groups to build support, earning the endorsement of Anthropic. On the other hand, the California Chamber of Commerce and TechNet have lobbied against the bill.

While some lawmakers have been eager to impose guardrails on the emerging technology, governors have been hesitant, wary of scaring away investment and innovation in their states. Connecticut was unable to pass AI legislation due to opposition from Gov. Ned Lamont (D). Virginia Gov. Glenn Youngkin (R) vetoed an AI model regulation bill earlier this year, citing its potential economic impact. In New York, lawmakers have advanced a sweeping measure, but it remains uncertain whether Gov. Kathy Hochul (D) will sign it. Colorado Gov. Jared Polis (D) reluctantly approved an AI bill last year, but lawmakers recently delayed its effective date to buy more time to work out concerns.

California was the first state to implement comprehensive privacy regulation, but it remains to be seen whether it will take the same pioneering role with artificial intelligence. Gov. Newsom’s decision could determine whether there is still appetite for AI regulation, or whether the laissez-faire trend for the industry will continue. 

Recent Developments

Major Policy Action  

  • Federal: On Tuesday, Senate Commerce Chair Ted Cruz (R-TX) indicated a potential moratorium on state regulation of artificial intelligence was still alive, although he said he had not yet worked with Sen. Marsha Blackburn (R-TN), who had concerns that her state’s law on copyright protections from AI would be undermined. White House Office of Science and Technology Policy Director Michael Kratsios endorsed Cruz’s plan. Rep. Jay Obernolte (R-CA) indicated he plans to introduce legislation on the matter, while on Tuesday, Rep. Michael Baumgartner (R-WA) introduced a bill (H.R. 5388) that would impose a moratorium.

  • California: The California Privacy Protection Agency will meet on September 26 with regulations on automated decision-making technology on the agenda. The board has already scaled back proposed regulations in response to industry concerns.  

  • Kentucky: The 2025 Artificial Intelligence Task Force met on September 11 to discuss artificial intelligence and the retail sector. The meeting also covered the need to build data centers, AI use at the University of Kentucky, and protecting personal information on government websites from data scraping.

  • Massachusetts: The Joint Committee on Economic Development and Emerging Technologies will have a hearing on September 25 to consider a bill (MA HB 495) that would prohibit the operation of a search engine that automatically returns results using artificial intelligence without user consent. The bill also directs a study on the environmental impacts of artificial intelligence.

  • New York: The Senate Standing Committee on Internet and Technology will hold a public hearing on October 11 to discuss artificial intelligence in consequential or high-risk contexts, and frameworks for auditing AI tools for bias and transparency.

  • Washington: The AI Task Force will meet next week to consider policy recommendations for the legislature ahead of next year’s session. The recommendations include measures on training data transparency, AI literacy in schools, AI use in prior authorization in healthcare, state grants for AI innovation, transparency and worker rights in AI use in employment, recognition of frameworks for AI standards and guidance, and whether high-risk AI uses need guardrails.

Notable Proposals  

  • Michigan: A group of Republican House members introduced a bill (MI HB 4938) that would prohibit the distribution of sexual deepfake content that is not constitutionally protected. The bill also prohibits platforms and websites from hosting, promoting, indexing, or linking to such material, and requires implementation of content moderation tools and a trusted flagger program.

  • New York: Assemblyman Keith Brown (R) has introduced a bill (NY A 9091) for next year that would require search engines to inform users when showing information that was generated using artificial intelligence. The legislative memo cites concerns over misinformation as justification for the bill.
