California's New AI Safety Bill Rises from the Ashes of SB 1047

Key highlights this week:

  • We’re tracking 1,071 AI-related bills across all 50 states during the 2025 legislative session.

  • State AI laws are safe for now after Congress removed the AI moratorium provision from the reconciliation bill before it passed the Senate. 

  • Rhode Island enacted both sexual deepfake and political deepfake laws.

  • Governors in Connecticut and Pennsylvania signed deepfake bills into law as well, addressing sexual deepfakes and deepfake fraud, respectively. 

  • And Sen. Wiener is back with a robust AI safety bill in California, which is the subject of this week’s deep dive. 

After spearheading last year’s headline-grabbing AI safety bill, California Senator Scott Wiener (D) is back. This week, Sen. Wiener announced that he’s “expanding my AI bill into a broader effort to boost transparency [and] advance an industrial policy for AI” in California. “We need transparency [and] accountability to boost trust in AI [and] mitigate material risks . . . We also need to accelerate [and] democratize AI development.”

Last year, Sen. Wiener’s Safe and Secure Innovation for Frontier AI Models Act (CA SB 1047) was at the center of a fierce debate once the legislature sent it to the governor’s desk. When Gov. Newsom (D) ultimately vetoed the bill, many expected Sen. Wiener to return with another AI safety proposal in 2025. However, Sen. Wiener remained mostly quiet on the AI safety front. The bill he introduced in February was largely a whistleblower protection bill for workers at AI labs. And while AI safety proponents supported such measures, there was some disappointment that this year’s version did not include the reporting or auditing requirements championed in last year’s SB 1047.

Gov. Newsom’s AI Working Group

However, many were keeping an eye on Gov. Newsom’s AI policy working group, which was due to release a report outlining responsible guardrails for the deployment of generative AI. The Joint California Policy Working Group on AI Frontier Models is headed by three academic AI researchers, including the “godmother of AI,” Dr. Fei-Fei Li, who lobbied against Sen. Wiener’s AI safety bill last year. As we reported in June, perhaps the most surprising aspect of the group’s draft report was how closely its policy principles echoed the final version of SB 1047 vetoed by Gov. Newsom.

Sen. Wiener himself sounded encouraged by the draft report’s content, saying in a statement, “The brilliant research and thoughtful recommendations laid out in this report build on the urgent conversations around AI governance we began in the legislature last year, providing valuable insight from our brightest minds on how policymakers should be thinking about the future of AI systems. . . . this report affirms with cutting edge research that the rapid pace of technological advancement in AI means policymakers must act with haste to impose reasonable guardrails to mitigate foreseeable risks, and that California has a vital role to play in the regulation of these systems.”

The final report, released last month, placed a strong emphasis on transparency and reporting, perhaps giving Sen. Wiener the green light to incorporate its findings into his AI safety proposal this year. And this week, that’s exactly what he’s done by amending SB 53.

Transparency in Frontier Artificial Intelligence Act (TFAIA)

This year’s bill (CA SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA), would require large developers of AI foundation models to publish detailed safety and security protocols describing testing procedures, risk assessments, and mitigation strategies for catastrophic risks.

Defining “Large Developer”

Similar to SB 1047 before it and the RAISE Act in New York, TFAIA would apply only to large, frontier AI model developers, defining “large developer” with a compute threshold (“computing power greater than 10^26 integer or floating-point operations”). But unlike those proposals, TFAIA does not include a dollar threshold (e.g., the RAISE Act requires at least $100 million spent to train the model). Starting in 2027, the bill would allow the California Attorney General’s (AG) office to update the “large developer” threshold, letting the regulation evolve along with the state of the technology (and giving the AG a ton of power over AI model developers).

Safety Reporting

Under TFAIA, large AI model developers would be required to publish safety and security protocols related to catastrophic risks. The bill defines “catastrophic risk” as a foreseeable and material risk that a large developer’s foundation model will materially contribute to the death or serious injury of more than 100 people, or to more than one billion dollars in damage, arising from a single incident, scheme, or course of conduct involving a dangerous capability.

As large developers build out new model capabilities, TFAIA would require them to publish transparency reports, including the results of risk assessments and third-party evaluations, before deploying new or substantially modified foundation models.

Similar to the RAISE Act, TFAIA would require large developers to report critical safety incidents to the Attorney General within 15 days and would prohibit false or misleading statements about catastrophic risk management. However, the RAISE Act requires reporting within a shorter 72-hour period. 

“No Liability”

Importantly, unlike last year’s SB 1047, TFAIA does not make large developers liable for the harms of their AI models. Sen. Wiener, when announcing the amendments on social media, ended one message simply, “No liability.” Instead of holding developers liable if they release a model that causes critical harms, the new version of the bill requires large developers to publish detailed security protocols and transparency reports at or before the time of deployment. These reporting requirements are tied to monetary penalties enforced by the AG. 

CalCompute

In a return from last year’s SB 1047, the amendments would establish a consortium within the Government Operations Agency to develop a framework for “CalCompute,” a public cloud computing cluster housed at the University of California to advance safe and equitable AI development.

Whistleblower Protections

The original SB 53’s whistleblower protections remain, preventing large developers from retaliating against employees who disclose information about catastrophic risks or violations of the TFAIA to authorities.

What’s Next? 

The bill passed the Senate in May under its previous language; it’s currently in the Assembly, which would need to pass the bill before returning it to the Senate. The Assembly Committee on Privacy and Consumer Protection is the bill’s next stop. 

When announcing the new amendments, Sen. Wiener said, “The bill language today is a working draft, and we will continue to iterate over the next few weeks as the bill advances through the legislative process. The bill will continue to change in response to feedback — if you have any, reach out to me or my office anytime.” With the RAISE Act awaiting action from New York’s governor (a process that could take until the end of the year), the legislative action in the AI safety debate has returned to the Golden State.

Recent Developments

Major Policy Action  

  • Congress: After some last-minute dealmaking, a final compromise on the federal AI moratorium, which would have largely blocked states and local governments from enforcing AI-specific regulations, fell apart, and the Senate voted 99-1 to strip the provision from the final reconciliation bill. We could see preemption return if and when Congress attempts its own version of an AI regulatory bill this session or next.

  • Rhode Island: Last week, the governor signed sexual deepfake (RI HB 5046 / SB 136) and political deepfake (RI HB 5872 / SB 816) bills into law. One prohibits synthetic media in political communications within 90 days of an election unless it carries a disclosure, and the other criminalizes the unauthorized dissemination of digitally created sexually explicit images of another person.

  • Connecticut: Last week, the governor signed into law a budget bill (CT HB 7287) that included a new crime of unlawful dissemination of sexual deepfakes.

  • Pennsylvania: On Monday, the governor signed a deepfake fraud bill (PA SB 649) into law, which creates a crime of digital forgery if a person knowingly or intentionally creates and distributes a fake digital likeness with the intent to defraud or cause harm.
