California Agency Retreats on Bold AI Regulation Plans
Key highlights this week:
We’re tracking 977 bills in all 50 states related to AI during the 2025 legislative session.
Virginia enacted a law restricting AI decision-making in the criminal justice field.
Governors in Montana and New Jersey signed new deepfake bills into law.
And Kansas joined a handful of other states in banning the Chinese AI model DeepSeek from government devices.
Last Friday, the California Privacy Protection Agency (CPPA) considered proposed regulations on automated decision-making technology (ADMT), cybersecurity audits, and risk assessments, ultimately directing staff to revise the proposals in response to public comments. Board members called for some of the more aggressive regulatory proposals to be either narrowed in scope or deleted outright. While the agency signaled it remains committed to protecting consumer privacy, last week’s meeting marks a retreat on regulation in the face of industry pushback and a recognition of shifting political winds.
Last fall, we wrote about California’s attempts to regulate ADMT when the CPPA voted to initiate formal rulemaking. The proposed rules would require assessments to ensure ADMT works as intended and does not discriminate against protected classes. They would also require businesses to give consumers notice, with certain disclosures, before ADMT is used, along with a right to opt out. The agency received industry pushback during the comment period, which influenced its decisions last week.
“All of those folks who wrote in should recognize this as a really big deal, and that the agency is listening and paying real attention to these concerns,” said board member Drew Liebert.
During the meeting, the CPPA Board directed its staff to make key revisions to the draft regulations. Members favored a narrower definition of “automated decision-making technology,” limiting its scope to technology that replaces or substantially facilitates human decision-making and aligning the regulations more closely with Colorado’s AI law (CO SB 205, enacted last year but not effective until next year). Board members also expressed concern that the risk assessment requirements would make the state an outlier or could violate First Amendment rights, and directed staff to mirror the language of the Colorado law.
The Board also wrestled with what constitutes a “significant decision” that would trigger ADMT requirements. The proposed regulations define a “significant decision” as one that results in access to certain listed services, such as financial services, housing, or health care, or in the “allocation or assignment of work.” That last category raised concerns that the scope may be too broad and could sweep in routine choices like selecting a delivery driver based on proximity. Members also questioned including insurance and criminal justice decisions within the scope of the regulations, suggesting those be excluded in future revisions. Staff proposed replacing “access to” with “selection of a consumer for,” or even deleting the phrase, but were ultimately directed to return with more use cases to better define the scope.
The proposed regulations would also require risk assessments for processing consumer personal information to train ADMT and artificial intelligence models for certain uses, including generative AI. The Board again wrestled with scope, considering narrowing the requirement to cases where the business “knows or should have known the technology would be used for certain purposes,” rather than whenever the technology merely has the capability to be used for those purposes. Ultimately, the Board scrapped the requirement for AI models altogether and directed staff to apply the risk assessment requirements only to ADMT.
Under the proposed regulations, behavioral advertising triggers a risk assessment as well as ADMT notice and opt-out requirements. The Board directed staff to delete the behavioral advertising requirements, noting that cross-context behavioral advertising would still be subject to obligations.
The sheer scope of the regulations drew pushback from members, some of whom worried that the regulations were evolving from a privacy law into an employment law. Board member Jeffrey Worthe questioned whether the CPPA should be regulating AI at all, saying, “I don’t believe we were intended to be regulating AI with this in this organization. . . . In my view, it’s a lot easier to dial things up a year or two or three from now, than it is to dial it back down.” The Board seemed to recognize the prevailing mood against regulation, and that overreach could generate a backlash jeopardizing its regulatory efforts on privacy or even posing an existential risk to the agency.
The CPPA’s reassessment is further evidence that the push for AI regulation has lost momentum this year. The only state AI legislation enacted so far this year consists of narrowly scoped bills focused on specific use cases, such as political and sexual deepfakes. Efforts at broader AI consumer protection and algorithmic discrimination laws have been stymied despite being widely introduced in the states. Executive pushback has doomed leading bills in Connecticut and Virginia, and Texas lawmakers have scaled back efforts at a “red state model.” At the federal level, the Trump Administration has rescinded the Biden executive order on AI in favor of a more hands-off approach.
In California, the agency’s willingness to revise its draft rules in response to public and industry feedback reflects the tension between setting guardrails on the technology and allowing the emerging industry to flourish without stifling regulation. The Board’s proposed revisions also suggest that Colorado, as the first mover in this policy space, may have set a template for others to follow. California will still be a major player in AI regulation, but there may not be as much appetite for regulation as previously thought.
Recent Developments
Major Policy Action
New York: A bill (NY A 6545) to impose liability for damages caused by a chatbot impersonating certain licensed medical professionals was amended this week to also ban the use of a chatbot as an attorney-at-law. Relatedly, New York judges reprimanded a plaintiff this week for using an AI avatar to argue his case.
Virginia: Last week, Gov. Youngkin (R) signed into law a bill (VA HB 1642) that prohibits criminal justice-related decisions from being based solely on the recommendation or prediction of an AI tool.
Montana: On Monday, Gov. Gianforte (R) signed a sexual deepfake bill (MT HB 82) into law. The bill adds computer-generated child pornography to provisions prohibiting child pornography and sexual abuse; Montana joins 20 other states in prohibiting the creation of AI-generated CSAM. On Tuesday, lawmakers sent the governor the “Right to Compute Act” (MT SB 212), which would prohibit the government from restricting the ability to privately own or use computational resources for lawful purposes.
New Jersey: Last week, Gov. Murphy (D) signed a deepfake bill (NJ A 3540) into law. The bill makes it a crime to generate deceptive audio or visual media with the intent that it be used as part of a plan or course of conduct to commit any crime, including sexual offenses, harassment, or fraud.
California: A Senate bill (CA SB 243) to regulate companion chatbots was amended and approved by the Judiciary Committee. The measure now heads to the Health Committee, and advocates are pushing for the bill to become a national model.
Kansas: Gov. Kelly (D) signed legislation (KS HB 2313) prohibiting the use of the AI platform DeepSeek and other AI platforms of concern on state-owned devices and state networks. Kansas becomes the first state to approve such legislation, although Iowa, New York, South Dakota, Texas, and Virginia have ordered similar bans through executive action.
Notable Proposals
Alabama: Rep. Prince Chestnut (D) introduced a measure (AL HB 516) that would require businesses to clearly inform consumers when they are interacting with a chatbot, AI, or similar technology. Lawmakers also introduced bills in each chamber (AL HB 515 and AL SB 294) to regulate the use of AI by insurers.
Maine: Sen. Mike Tipping (D) introduced a bill (ME LD 1552) to prohibit landlords of residential rental properties from using algorithms or AI to determine how much rent a tenant should pay.
Pennsylvania: Lawmakers this week introduced a bill (PA HB 1188) to prohibit AI applications and software developed by a foreign adversary, such as the China-based DeepSeek app, from being downloaded or accessed on state-owned devices.