California’s Newsom Signs 8 AI Bills and Vetoes 3 Others

Key highlights this week:

  • We’re tracking 1,109 AI-related bills across all 50 states during the 2025 legislative session.

  • Wisconsin enacted a nonconsensual sexual deepfake law, becoming the 38th state to do so.

  • A judge upholds New York’s surveillance pricing law, and the state becomes the first to pass a law prohibiting rental algorithms. 

  • Massachusetts introduces a frontier model regulation bill modeled after California’s new law.

  • Working groups, legislative committees, and task forces in Colorado, Georgia, and North Carolina continue to study AI policy in preparation for next year’s legislative sessions. 

  • And after signing a key AI safety bill into law, California Gov. Newsom signed 8 additional AI-related bills and vetoed 3 others, the topic of this week’s deep dive.

California has long been a national leader in technology policy, and that continued in 2025, particularly in artificial intelligence legislation. The California State Legislature considered 48 AI-related bills this year, ultimately approving several significant measures. The headline enactment is Sen. Scott Wiener’s (D) high-profile AI safety bill (CA SB 53), signed into law by Gov. Gavin Newsom (D). The new law, which we wrote about a few weeks ago, establishes requirements for developers of frontier AI models, focusing mostly on very large models without imposing obligations on deployers.

Notably, the governor had vetoed a similar AI safety measure from Sen. Wiener last year: the equally high-profile SB 1047. But last year, Gov. Newsom also signed 18 additional AI-related bills into law. This year, Gov. Newsom signed SB 53 first, leaving 11 additional AI-related bills sitting on his desk over the past few weeks. Finishing up our late-year spotlight on the Golden State, we highlight below which of those bills the governor signed into law and which he vetoed.

What Was Enacted

The governor signed 8 new AI bills into law this year, in addition to SB 53. These laws cover topics such as algorithmic pricing, sexual deepfakes, synthetic content labeling, and companion chatbots.

CA AB 316 prohibits a developer from asserting a defense in a civil case that artificial intelligence autonomously caused the harm to the plaintiff. The bill was amended to clarify that defendants can still present other affirmative defenses or defenses relevant to comparative fault.

CA AB 325 prohibits using or distributing a pricing algorithm that uses nonpublic competitor data. It amends the state’s antitrust law, known as the Cartwright Act, so that a complaint need only plausibly allege the existence of a contract or a conspiracy to restrain trade; plaintiffs are not required to allege facts tending to exclude the possibility of independent action.

CA AB 489 prohibits generative AI systems from implying that any care or advice they offer is provided by a licensed health care professional, with violations subject to sanctions from the appropriate health care licensing board. Earlier this year, California Attorney General Rob Bonta issued a legal advisory on the application of existing law to artificial intelligence in healthcare, emphasizing that only humans are licensed to practice medicine.

CA AB 621 amends existing sexual deepfake laws to impose liability on intermediaries who knowingly facilitate or recklessly aid and abet the creation or dissemination of nonconsensual deepfake pornography. This could include hosting providers, payment processors, or content promoters. The bill creates a rebuttable presumption that a depiction was nonconsensual unless written consent is provided, and applies a rebuttable presumption of knowledge to third-party service providers that continue to support deepfake pornography services after receiving evidence of their illegal activity. The bill also significantly increases civil penalties.

CA AB 853 relates to the labeling of synthetic content. Developers of generative AI systems with over one million users are required to embed latent disclosures within content generated using their systems and to provide free AI detection tools capable of identifying those embedded disclosures. Manufacturers of capture devices, such as cameras, smartphones with cameras, scanners, and audio recorders, are required to give users the ability to embed provenance data in the content they capture as authentication. Large online social media platforms are required to make available system provenance data that can indicate whether content was generated or altered by artificial intelligence. The law does not take effect until January 1, 2027.

CA SB 243 regulates companion chatbot platforms. Companion chatbots will have to issue a clear and conspicuous notification that the chatbot is not a human. Companion chatbot platforms will also be required to maintain protocols for addressing suicidal ideation, suicide, or self-harm content. The new law contains special protections for minor users, including disclosure that the chatbot is AI, a notification every three hours discouraging prolonged use, and measures to prevent sexual content. The legislation responds to high-profile lawsuits brought after teenagers died by suicide following interactions with companion chatbots.

CA SB 524 requires law enforcement agencies to adopt policies governing AI use and to disclose when AI is used to write reports. Agencies must also retain first drafts and preserve an audit trail.

CA SB 683 strengthens existing digital replica laws by granting a plaintiff the right to pursue an injunction or a temporary restraining order for nonconsensual use of one’s name, voice, signature, image, or likeness for commercial purposes. The measure was supported by the Screen Actors Guild–American Federation of Television and Radio Artists. 

What Was Vetoed

However, the governor vetoed the three remaining AI-related measures the legislature sent him: a stronger companion chatbot regulation, the “No Robo Bosses Act,” and a bill requiring warnings for digital replica tools.

CA AB 1064 was known as the Leading Ethical AI Development (LEAD) for Kids Act. The bill would have prohibited developers from producing a companion chatbot intended for use by or on a child unless the chatbot was not foreseeably capable of certain things that could harm a child. Given the lawsuits in the news, Gov. Newsom faced pressure to sign the bill. In his veto statement, he argued the bill was so broad it could effectively ban companion chatbots for minors, and that CA SB 243 would provide the necessary protections.

CA SB 7, known as the “No Robo Bosses Act,” would have regulated the use of automated decision systems (ADS) by employers. Employers would have been required to provide notice when such a system is used in employment decisions, and would have been prohibited from using ADS to interfere with union rights, predict worker behavior, or infer protected characteristics like race, gender, or religion. Major labor unions in the state pushed for Gov. Newsom to sign the measure, even after it was watered down in amendments, but he ultimately vetoed the bill. The governor argued the bill was too broad, requiring notification for “innocuous tools,” and could prohibit legitimate uses of ADS, such as inputting customer ratings. He noted that regulations recently adopted by the California Privacy Protection Agency would address notification requirements regarding employers’ use of ADS.

CA SB 11 would have required AI technology that enables a user to create a digital replica to provide a consumer warning that nonconsensual production of digital replicas could lead to civil or criminal liability. The bill would also have clarified that using a digital replica with the intent to impersonate another constitutes false personation. Gov. Newsom vetoed the measure over concerns it would do little to address the problem, arguing “it is unclear whether a warning would be sufficient to dissuade wrongdoers from using AI to impersonate others without their consent.”

Recent Developments

In the News

  • Lawmaker AI Group Finds New Home: The ad hoc group of nearly 200 state lawmakers representing 48 states has found a new home at Princeton’s Center for Information Technology Policy. Rebranded as the State AI Policy Forum, the group hosted its first invite-only meeting under Princeton’s administration on Sept. 26. Lawmakers initially organized their own virtual calls in 2023 to discuss AI regulation; the group was briefly hosted by the Future of Privacy Forum (FPF) before the nonprofit backed away following criticism from its tech-industry members.

Major Policy Action  

  • Colorado: Gov. Jared Polis (D) has convened a new working group to recommend potential changes to the state’s artificial intelligence law (SB 205), which is set to take effect next year. The group held an organizational meeting on Thursday and has been asked to “prioritize evidence-based policy solutions that mitigate bias, avoid ambiguity, facilitate innovation and economic growth, and are in alignment with national standards and best practices.”

  • Georgia: The Senate Impact of Social Media and Artificial Intelligence on Children and Platform Privacy Protection Study Committee met on October 8 to discuss AI companion chatbots and children’s online protection. Lawmakers heard testimony from parent advocates arguing for more protections for minors, as well as from representatives of the AI Transparency Coalition and the Tech Justice Law Project.

  • New York: Gov. Kathy Hochul (D) signed a pair of bills (NY A 1417/S 7882) making New York the first state to prohibit rental algorithms that use competitor data. Colorado Gov. Jared Polis (D) vetoed a similar measure earlier this year, and several cities have banned the practice.

  • New York: On Wednesday, a federal judge dismissed a lawsuit challenging a New York law that requires retailers to tell customers when their personal data is used to set prices, a practice known as surveillance pricing. A retail trade group had argued New York’s Algorithmic Pricing Disclosure Act violated free speech, but the judge held the law was reasonably related to the state’s legitimate interest in giving consumers information regarding transactions.

  • North Carolina: The North Carolina Child Fatality Task Force Intentional Death Prevention Committee held a hearing on October 8, where members heard testimony regarding the potential risks to minors from companion chatbots and AI therapists. A bill on chatbot regulation (NC SB 624) is in the Senate Rules Committee. Advocates also pushed for legislation to prohibit minors’ data from being used in social media algorithms.

  • Wisconsin: Gov. Tony Evers (D) signed a bill (WI SB 33) into law earlier this month that makes it a crime to publish or distribute a sexual deepfake of an identifiable person with the intent to coerce, harass, or intimidate that person.

Notable Proposals  

  • Florida: A bill prefiled for the next session (FL SB 202) would regulate insurers’ use of algorithms, requiring that any denial of a claim be made by a human. Insurers told the Insurance and Banking Subcommittee that the bill is unnecessary and that AI is used to improve efficiency in processing claims and detecting fraud.

  • Massachusetts: Sen. Barry R. Finegold (D) has introduced a frontier model regulation bill (MA S 2630) that follows the framework of California’s new law. The proposal requires developers with $500 million in annual revenue to establish a framework for managing catastrophic risk and requires smaller developers to follow transparency measures.

  • Ohio: A House bill (OH HB 524) would prohibit an artificial intelligence model from encouraging a user to engage in any form of self-harm or harm to another person.

  • Pennsylvania: House members have introduced a bill (PA HB 1925) that would regulate the use of artificial intelligence in healthcare. The bill requires disclosure if AI is used by healthcare facilities in clinical decision-making, prohibits AI from superseding human clinical decision-making, and requires annual compliance statements. The bill also regulates the use of algorithms in utilization review, requires disclosure, and prohibits basing decisions solely on group data.
