California Writes the Playbook for AI Regulation
Key highlights this week:
We’re tracking 1,093 AI-related bills in all 50 states during the 2025 legislative session.
Gov. Newsom signed a major AI safety bill into law this week, making California the first state to enact such a law.
Regulators in California have also been busy, and another 10 AI-related bills remain on Newsom’s desk, the topic of this week’s deep dive.
As we enter the final months of 2025, California is poised to make the year’s biggest changes to AI regulation. While notable proposals in other states failed to make it across the finish line, in the last two weeks California has issued new AI regulations and signed a major AI safety bill into law, while Governor Gavin Newsom (D) still has a substantial AI employment bill awaiting his decision. This comes as two California agencies have released final regulations addressing AI this year.
All eyes remain on California as hundreds of bills sit on Gov. Newsom’s desk, including 10 AI-related bills. While the governor has until October 12 to act on them, he signed a major piece of AI legislation into law well before that deadline.
CA SB 53
Earlier this week, Gov. Newsom signed Sen. Scott Wiener’s (D) CA SB 53 into law. We previously discussed this bill as it navigated the legislative process this year. The new law will require the developers of large “frontier” AI models to develop, implement, comply with, and publish an AI framework, and to provide annual updates. This is the first law that requires AI developers to proactively address safety concerns in the largest AI models. When signing the bill, the governor noted that it struck a balance between protecting communities and allowing the AI industry to safely grow and thrive.
A similar bill, the RAISE Act (NY AB 6453/SB 6953), is currently awaiting the signature of New York Governor Kathy Hochul (D), who has until the end of the year to make a decision. If enacted, it may signal a shift in how lawmakers believe AI will be regulated. Late last year, legislation broadly targeting algorithmic discrimination and consumer protection in “high-risk” decisions appeared to be on the verge of spreading after Colorado enacted SB 205. At the same time, Gov. Newsom had recently vetoed Sen. Wiener’s SB 1047, setting back proponents in the AI safety camp. But those fates have reversed in 2025, with this year’s version of the AI safety bill becoming law in California and no state following Colorado’s lead on SB 205. In fact, SB 205 itself was delayed and will likely be substantially amended next session before even going into effect. At least that’s the case legislatively.
ADMT Regulations
Shortly before SB 53 was signed into law, the California Privacy Protection Agency (CPPA) issued final rules for automated decision-making technology (ADMT). The regulations, issued under the California Consumer Privacy Act, require businesses to give notice when using automated decision-making technology to make significant decisions and to provide consumers the ability to view and opt out of the data collection used by these technologies. Similar to many of the high-risk AI bills considered in statehouses this year, the regulations define a “significant decision” as one “that results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services.”
Notably, these regulations impact the type of decision-making technology at issue in many high-risk AI bills. However, instead of requiring reports and audits of these technologies, the regulations seek to protect consumers by effectively allowing them to opt out of having AI used to make decisions that can impact their daily lives.
CA SB 7
One of the more notable AI bills still sitting on Gov. Newsom’s desk is CA SB 7, also known as the “No Robo Bosses Act,” which regulates the use of AI in employment decisions. The bill requires employers to provide written notice when AI is used to make employment decisions that materially impact an individual, such as hiring, compensation, responsibilities, scheduling, and discipline, including termination. The bill also prohibits relying solely on AI to make termination decisions, and when AI is relied on primarily to make a discipline or termination decision, that decision must be reviewed by a human.
This bill applies to traditional employees as well as independent contractors, and the prohibition on using AI as the sole source of termination decisions and the requirement for human review also apply to deactivation decisions. The inclusion of independent contractors is notable, given the state’s past fights over worker classification, and ensures that this bill, if signed into law, would impact most California employers.
This bill also comes before the governor as regulations addressing AI in the workplace, previously approved by the California Civil Rights Council, take effect on October 1. The regulations aim to protect against employment discrimination resulting from the use of AI, prohibit using AI to discriminate against applicants or employees based on a class protected under the California Fair Employment and Housing Act, and clarify how existing anti-discrimination laws apply to AI used in employment decisions.
Regardless of what further action Gov. Newsom takes on the remaining AI-related bills on his desk in the coming weeks, California has taken substantial steps in regulating AI this year, and we’ll need to wait until January to see if other states will take a similar approach.
Recent Developments
Major Policy Action
Federal: On Monday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the Artificial Intelligence Risk Evaluation Act, which would create “a risk evaluation program within the U.S. Department of Energy (DOE) dedicated to tracking AI safety concerns related to Americans’ national security, civil liberties, and labor protections.” The bill would require developers of advanced AI systems to participate and would prohibit deployment until a developer has complied with the program’s requirements.
California: Gov. Gavin Newsom (D) signed Sen. Wiener’s (D) AI safety bill into law (CA SB 53) a year after vetoing a similar bill from Sen. Wiener. This makes California the first state to enact such a high-profile AI safety bill aimed at AI model developers.
Colorado: Gov. Jared Polis (D) expressed a desire for a “pro-innovation, pro-growth AI policy” at an AI summit in Denver last week. He told industry leaders who favored California’s law over Colorado’s that he wanted changes that “position Colorado as the best state to innovate in AI.”
Maryland: The Joint Committee on Cybersecurity, Information Technology, and Biotechnology held a hearing on Thursday that covered AI use in the private sector and state agencies. Committee Chair Sen. Katie Fry Hester (D) introduced a regulatory bill last session (MD SB 936) similar to Colorado’s law.
Mississippi: An Artificial Intelligence Legislative Task Force made up of state lawmakers toured Mississippi State University last week to learn more about the technology. The task force will have three more meetings before releasing legislative recommendations in mid-December, potentially relating to data privacy, intellectual property, and children’s online safety.
Notable Proposals
New Hampshire: Among the legislative service requests for bills to be drafted for next session are a bill (NH LSR 2609) to regulate rent-setting algorithms and a bill (NH LSR 2812) that would “regulate artificial intelligence technologies.”
Ohio: A House bill (OH HB 469) introduced last week would require developers, manufacturers, and owners of AI to prioritize safety mechanisms designed to prevent or mitigate risk or direct harm. The measure, which is sponsored by the chair of the House Technology and Innovation Committee, would also prohibit AI from being married, owning property, or being named a corporate officer.