
Congress’ Plan to Break Up the Patchwork of State AI Laws
House Republicans on the Energy and Commerce Committee submitted text for the budget reconciliation bill that includes Section 43201, a ten-year moratorium on enforcement of any state or local law or regulation “regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.” For years, companies have pushed for federal preemption in consumer data privacy to replace a growing patchwork of state laws with a single national standard more favorable to industry interests. But attempts to pass federal legislation have failed, leaving consumer data regulation to the states. If enacted, the measure would bar states from enacting or enforcing laws governing AI systems and automated decision-making tools, effectively nullifying existing state oversight and blocking new rules for a decade.

Utah's Measured Approach to Mental Health AI
Last year, we noted that Utah’s moderate approach to AI regulation could serve as a model for other states. This year, lawmakers and regulators in Utah followed up on the AI Policy Act with a targeted approach to mental health AI chatbots. Since 2021, Utah has enacted 14 AI-related bills, including six this year. The state’s signature AI law is last year’s Artificial Intelligence Policy Act (UT SB 149), which clarifies that the use of an AI system is not a defense for violating the state’s consumer protection laws and requires disclosure when an individual interacts with AI in a regulated occupation. This emphasis on transparency is a hallmark of Utah’s light-touch approach to AI regulation.

Colorado Proposes Narrowing of Landmark AI Law
In 2024, Colorado became the first state to enact a broad algorithmic discrimination law regulating high-risk artificial intelligence systems. Gov. Jared Polis (D) signed the bill “with reservations,” vowing that the law would be amended before it took effect. This week brought the first concrete proposal to do so: SB 318, introduced by Sen. Robert Rodriguez (D), who drafted the original AI law. The proposed changes would significantly pare back the law, loosening its strictest provisions while preserving core protections against algorithmic discrimination.

Montana is the First State to Guarantee Computational Freedom
Montana became the first state to enact a “Right to Compute Act.” Lawmakers there introduced 48 AI-related bills this session. Two have been signed into law, and another six have passed the legislature and await the governor’s signature. Two more bills have cleared their chamber of origin and await further action before the legislature is scheduled to adjourn for the year on May 3.

Dueling Approaches Shape Connecticut's AI Policy
Connecticut lawmakers are trying to strike a delicate balance between encouraging artificial intelligence innovation and imposing meaningful regulatory guardrails, a tension reflected in two major bills recently advanced by the General Law Committee.

California Agency Retreats on Bold AI Regulation Plans
The California Privacy Protection Agency (CPPA) considered proposed regulations on automated decision-making technology (ADMT), cybersecurity audits, and risk assessments, ultimately directing staff to revise the proposals in response to public comments. The meeting signaled a retreat on regulation in the face of industry pushback and a recognition of shifting political winds.

States Rush to Criminalize AI-Powered Fraud
The recent release of multimodal image-generation models allows users to create high-quality images with ease, and as the technology expands beyond images to video and voice, it is ripe for fraud.

California's Blueprint for Responsible AI Governance
Gov. Newsom’s AI policy working group released its report, and its recommendations could quickly be incorporated into new AI safety legislation this session.

Texas AI Bill 2.0: Private Sector Gets a Reprieve
Last Friday, Texas Rep. Giovanni Capriglione (R) filed a new version of his high-profile algorithmic discrimination legislation. The new version removes many of the requirements placed on private sector developers, deployers, and distributors of AI that were a major focus of the original bill.

The AI Balancing Act: States Torn Between Regulation and Innovation
When we previewed the 2024 legislative session, we noted the challenge lawmakers face in regulating artificial intelligence without stifling a nascent industry with tremendous potential. Fifteen months later, those tensions have become even more apparent. Efforts to rein in AI have been hampered by concerns about its effect on the economy, national security, and geopolitics.

The RAISE Act: New York Enters the AI Safety Debate
We’ve got our first major AI safety bill language of the year, this time in New York. Late last year, most state AI policy attention was squarely focused on Sen. Scott Wiener’s (D) AI safety bill (CA SB 1047) in California, until Gov. Gavin Newsom (D) ultimately vetoed the proposal. Last week, Sen. Wiener added language to his placeholder bill (CA SB 53), believed to be this year’s version of the AI safety bill. Meanwhile, New York Assemblymember Alex Bores (D) released a detailed AI safety bill (NY AB 6453) inspired by Sen. Wiener’s SB 1047.

Trendsetter Alert: California's 33 New AI Bills Explained
California lawmakers enacted nearly 20 AI-related bills last year, although Gov. Gavin Newsom ultimately vetoed the most high-profile of them, Sen. Scott Wiener’s AI safety proposal (CA SB 1047). This year, lawmakers in California have returned with even more ideas on how to regulate AI, introducing a flurry of bills ahead of last Friday’s bill introduction deadline. Most of the proposed bills have a narrow scope but take novel approaches to addressing some of the concerns raised by the emerging technology.

The Return of Connecticut’s SB 2: Algorithmic Discrimination
Last year, many expected Connecticut to become the first state to pass broad artificial intelligence regulation, thanks to the leadership of bill sponsor Sen. James Maroney (D). While his bill easily passed the Senate, it stalled in the House after Gov. Ned Lamont (D) threatened to veto it. After consulting with stakeholders, Sen. Maroney has returned with a new bill he hopes to shepherd across the finish line this year. But it remains to be seen whether Gov. Lamont will again play spoiler.

Taking the Pulse of State-Level AI Health Care Regulation
When proponents talk about the benefits of AI, one area of focus for the future of humanity is health care. Over time, it is hoped that AI can lead to breakthroughs in diagnosing diseases, discovering new drugs, and detecting cancer. But before those advances come to fruition, the message from lawmakers has been clear: consumer privacy and quality of care must be protected, and the technology must be adopted deliberately to minimize disruption to health care delivery. This year, lawmakers in several states have introduced legislation addressing AI and its use in the health care industry, particularly by health insurers.

Virginia Moves Legislative Framework for High-Risk AI Systems
Unlike legislatures in 48 other states, the Virginia General Assembly has entered the second year of its two-year legislative session and will wrap up business in a few short weeks as lawmakers prepare for elections this fall. Unsurprisingly, Virginia lawmakers have wasted no time moving major AI legislation, with two major bills passing their chambers of origin this week.

Lawmakers Propose Right of Publicity Protections for Digital Replicas
Last year, state lawmakers introduced hundreds of bills regarding sexual and political deepfakes, enacting several of them. Political deepfake bills sought to protect candidates from misleading communications, while sexual deepfake bills protected individuals from the distribution of embarrassing computer-generated images of them. This year, concerns about the use of one’s image have spurred lawmakers in some states to consider legislation concerning digital replicas, which may affect advertising and other commercial media.

State Lawmakers Propose Regulating Chatbots
Chatbots have become pervasive in customer service, health care, banking, education, marketing, and sometimes purely entertainment. But following a high-profile lawsuit over a teenager’s suicide after conversations with a chatbot, the technology has come under heightened scrutiny. State lawmakers are already introducing bills to regulate chatbot use, which could affect a wide range of businesses.

The New Wave of Comprehensive Consumer Protection AI Bills
The AI policy trend that I’m keeping a keen eye on this year is what I’ll refer to as the “comprehensive consumer protection” AI bills. These originated last year with the introduction of Sen. Maroney’s (D) SB 2 in Connecticut. While that bill failed to cross the finish line due to a gubernatorial veto threat, legislation modeled on CT SB 2 passed the legislature in Colorado and was (reluctantly?) signed into law by Gov. Polis (D). Notably, that law (CO SB 205) won’t go into effect until February 2026, giving lawmakers time to make necessary amendments.

States Ban AI in Setting Rent
We’re closely watching the introduction of anti-bias legislation inspired by last year’s proposed CT SB 2 and enacted CO SB 205, but those bills will be only a fraction of the legislation debated in state capitol hearing rooms this year. They attempt to address major consumer interactions with AI across broad fields like employment, financial and legal services, housing, health care, and insurance. But more narrowly focused bills, which target these industries individually or even a specific known issue with AI in a given sector, are also popular.

California's Proposed Rules on Automated Decision-Making Technology
California was the first state to enact a comprehensive data privacy law and establish an agency to protect consumer data. The California Privacy Protection Agency (CPPA) has been slow to promulgate regulations, only last month initiating formal rulemaking to update its existing privacy rules and create new regulations on “automated decision-making technology,” aimed squarely at the emergence of artificial intelligence. These rules could have a broad scope, with wide-ranging obligations for businesses that use the emerging technology to facilitate decisions.