Congress’ Plan to Break Up the Patchwork of State AI Laws
Key highlights this week:
We’re tracking 1,009 bills in all 50 states related to AI during the 2025 legislative session.
The Connecticut Senate passed a dramatically scaled-back version of Sen. Maroney’s SB 2, stripping many of the documentation requirements and obligations to mitigate discrimination risks.
The governor signed a sexual deepfake bill in South Carolina.
New York's governor signed a budget bill that includes a section regulating AI companion models.
And the U.S. House is moving a budget bill that includes a provision preempting enforcement of state AI laws for the next ten years. In the absence of any federal framework, the provision poses an existential threat to state lawmakers' attempts to regulate this all-encompassing technology, and it is the topic of this week's deep dive.
For years, companies have pushed for federal preemption in consumer data privacy, to replace a growing patchwork of state laws with a single national standard more favorable to industry interests. But attempts to pass federal legislation have failed, leaving consumer data regulation to the states. With artificial intelligence, tech companies have found a new avenue to preempt potential state regulation through a sweeping ten-year moratorium on state AI regulation quietly embedded in the U.S. House budget reconciliation bill. If enacted, the measure would bar states from enforcing or enacting laws governing AI systems and automated decision-making tools, effectively freezing and negating state oversight.
This week, House Republicans on the Energy and Commerce Committee submitted text to be included in the budget reconciliation bill that includes Section 43201, a ten-year moratorium on enforcement of any state or local law or regulation “regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.”
Last year, states passed 99 AI-related laws, and this year, lawmakers have already enacted 61 such laws. How many of these would be negated by this federal preemption? The language of the preemption provision appears to be aimed at measures like Colorado’s artificial intelligence law (enacted last year but not set to take effect until 2026), other algorithmic discrimination measures, AI safety proposals like last year’s CA SB 1047, and other laws placing an obligation on AI models.
Exceptions to Preemption
The big question is, if enacted, how broad a scope of state and local AI laws the moratorium would preempt. The bill lists four exceptions:
State laws whose primary purpose and effect are to remove impediments to AI deployment or operation;
State laws whose primary purpose and effect are to streamline licensing, permitting, routing, zoning, procurement, or reporting procedures to facilitate the adoption of AI models and systems;
State laws that do not impose any design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models unless required by federal law or other generally applicable law that is not specific to AI models; and
State laws that do not impose a fee or bond unless it is reasonable and cost-based, and AI models are treated in the same manner as other models and systems.
In other words, the preemption would not negate state laws meant to assist the deployment of AI models, such as Utah’s AI regulatory sandbox. It would also not preempt generally applicable state laws, such as anti-discrimination and consumer protection laws that are not specific to AI models. The provision also seems to imply that if a state law does not impose design, performance, data-handling, documentation, civil liability, taxation, or fee requirements, then it will not be subject to the moratorium. This could mean measures that regulate how a state government uses AI could still be allowed, although there is some ambiguity.
Which AI Laws Would Be Blocked?
But the moratorium would presumably apply to enforcement of laws that impose any kind of obligation on models themselves, including requirements to disclose the use of chatbots to consumers, provide opportunities for employees or consumers to appeal adverse decisions where AI is a substantial factor, or provide certain documentation of models or impact assessments. States would be prohibited from requiring developers or deployers to mitigate the risk of algorithmic discrimination, although general anti-discrimination laws would still apply. State or local bans on pricing or rent algorithms could be suspended, although states may still try to apply privacy or antitrust laws, if applicable.
Preemption’s Effect on Data Privacy
The proposal could also impact state consumer privacy laws, many of which include provisions regulating automated decision-making technologies (ADMT). This moratorium would prohibit the enforcement of any regulations of “automated decision systems,” just as California has drafted landmark regulations that would require certain disclosures and obligations for the deployment of ADMT. The moratorium would leave other privacy protections in place, but could impact enforcement of biometrics laws, such as Illinois’ landmark law.
Are Deepfake Laws Blocked Too?
By far, the most popular state laws enacted related to AI target deepfakes. Would the dozens of deepfake laws be negated by this preemption? U.S. Rep. Jan Schakowsky (D-IL) slammed the moratorium, arguing it would “allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI.” But while the bill prohibits the enforcement of laws regulating AI models directly, it is silent about laws regulating content produced by those models. Many state laws regarding sexual or political deepfakes criminalize the dissemination of manipulated content without subjecting the model developer to any liability or obligations, and so would likely survive. One exception would be laws that require generative AI models to attach digital provenance to synthetic content, like a proposed bill in New York (NY AB 6540).
How’d We Get Here?
Why is this moratorium being included in a budget reconciliation bill? Industry groups had tried for years to pass a federal consumer privacy bill, but struggled to thread the needle and secure the 60 votes necessary in the U.S. Senate to pass anything. Having learned from that, the AI industry has been lobbying Capitol Hill to preempt state law, just as states are enacting and beginning to enforce their own regulations.
Initially, many onlookers doubted the chances of an AI preemption bill in Congress because lawmakers would need to come together to compromise on their own version of a regulatory framework for AI in the United States. Instead, the language in this budget reconciliation bill sidesteps that issue, negating state and local attempts to regulate the industry while leaving the federal regulatory landscape undefined.
Overcoming the Byrd Rule
In addition to bypassing the need to come to a decision on a federal regulatory framework for AI, Congress is also sidestepping the need to get to 60 votes in the Senate on a non-budgetary item like AI regulation by including the preemption language in the major budget reconciliation bill moving through Congress to slash spending and extend tax cuts.
A budget reconciliation bill requires only a simple majority in the Senate. However, reconciliation was intended as a legislative vehicle for budgetary items only. In the Senate, the bill must survive the “Byrd Rule,” which prohibits provisions extraneous to the budget. To shoehorn the preemption language into a budget bill, the authors attached preemption to a spending provision allocating $500 million to the Department of Commerce to modernize and secure technology systems through the deployment of commercial AI. House Republicans will argue that preemption of state AI laws is necessary to ensure the Department can secure commercial AI systems without being inhibited by state regulation. The Senate parliamentarian will need to rule on whether this argument passes muster; otherwise, an opponent could object to its inclusion and remove the provision. While it's unclear how the parliamentarian would rule on this specific provision, the Senate majority has already expressed its willingness to ignore the parliamentarian to move policy important to it.
What’s Next
The bill now heads to a reconciliation markup Friday morning, with a floor vote expected before Memorial Day weekend. The bill, which is currently 1,116 pages long and includes a laundry list of major and controversial provisions beyond AI preemption, still faces an uphill climb, as the House majority cannot afford to lose more than three votes. House Speaker Mike Johnson (R-LA) has had to address concerns from his caucus over raising the state and local tax deduction and changes to Medicaid, and Sen. Ron Johnson (R-WI) has already slammed the bill for not reducing spending enough.
The proposed moratorium could signal a pivotal shift in the balance of power over AI regulation away from states and toward federal agencies. While Congress has generally been slow to act, lawmakers did recently pass the bipartisan TAKE IT DOWN Act (which currently awaits the President’s signature to become law), prohibiting the nonconsensual online publication of sexual deepfakes, and the Senate is currently considering the NO FAKES Act on digital replicas. Last week, Sen. Ted Cruz (R-TX) pledged to introduce a bill with “light touch” regulation of AI.
If enacted, the preemption provision will inevitably face constitutional challenges in court from state AGs, beginning the long process of judges interpreting what the language of the short provision actually means in practice. But even if this moratorium does not survive the budgetary process this year, it indicates that industry and allied lawmakers are willing to use bold strategies to end the patchwork of state laws and establish a national standard.
Recent Developments
Major Policy Action
Connecticut: The Senate passed a dramatically scaled-back AI regulation bill (CT SB 2), stripping many of the documentation requirements and obligations to mitigate discrimination risks pushed by bill sponsor Sen. James Maroney (D). The measure still requires disclosures to consumers in AI interactions and when high-risk AI systems are a substantial factor in consequential decisions. Gov. Ned Lamont (D), who had expressed reservations about AI regulation, appeared to be more supportive of the stripped-down bill.
South Carolina: On Monday, the governor signed a sexual deepfake bill (SC HB 3058) into law, which makes it unlawful to intentionally disseminate a digitally forged intimate image of another without consent.
California: The Assembly amended and passed a bill (CA AB 412) that would require generative AI developers to provide documentation on copyrighted materials used to train the system and allow rights holders to request information to determine whether their copyrighted works were used. The amendment extends the time for developers to respond from 7 days to 30, limits requests, and expands exemptions.
New York: Gov. Kathy Hochul (D) signed a budget bill (NY S. 3008) that includes a section regulating AI companion models. The section requires protocols to detect and deal with user expressions of suicidal ideation and self-harm, and a notification every three hours reminding the user that the companion is AI and not a human. Another section of the bill requires disclosure of algorithmically set pricing of goods and services.
Arizona: Gov. Katie Hobbs (D) signed a bill (AZ HB 2678) that will amend child exploitation laws to include deepfakes, and a bill (AZ SB 1295) that makes it a crime to use a deepfake with the intent to defraud or harass others.
Oregon: The Legislature passed a measure (OR HB 2299) that adds sexual deepfakes to revenge porn criminal statutes. The House also passed a bill (OR HB 3696) that would prohibit AI software from being downloaded or accessed on state-issued devices.
Texas: The Senate passed a bill (TX SB 2637) that would require a social media platform to label content posted by bot accounts with a notice stating it was created by a bot and may contain misinformation. The chamber also passed a bill (TX SB 2373) that would prohibit using deepfake content for phishing or financial fraud.
Notable Proposals
New Jersey: A bill filed in the Senate this week (NJ SB 4463) would prohibit advertising an artificial intelligence system as a licensed mental health professional. It is a companion to NJ AB 5603, which was introduced earlier this month.
New York: Sen. Brad Hoylman-Sigal (D) introduced a bill (NY S 7882) on Tuesday that would prohibit using an algorithmic device to coordinate rental property prices. The bill has a companion in the Assembly (NY A 1417), and similar measures have been introduced in 17 other states.