Which Laws Would the AI Moratorium Block (If It Survives)?
Key highlights this week:
We’re tracking 1,064 bills in all 50 states related to AI during the 2025 legislative session.
Hawaii enacted a law prohibiting the use of facial recognition software with any automated traffic enforcement system.
The Kentucky Artificial Intelligence Task Force met to hear from the business community this week.
And Congressional efforts to preempt state and local AI laws have progressed, though the language has shifted and opposition is growing — the topic of this week’s deep dive.
As we reported back in May, the Congressional budget reconciliation legislation (the “One Big Beautiful Bill Act”) heading toward the President’s desk includes a provision that would place a ten-year freeze on state regulation of artificial intelligence. However, to overcome the Byrd Rule in the Senate, the AI moratorium has undergone significant changes and is not guaranteed to become law in its current form. This contentious measure cleared a key procedural hurdle in the Senate this week, but as it advances through the legislative process, the AI moratorium provision still faces scrutiny, and lawmakers’ concerns must be resolved if it is to survive. And even if it is enacted, many questions remain about which state and local laws it would block — questions the courts will likely need to decide.
The version of the AI moratorium that we outlined in May was the original House version of the reconciliation bill (US HR 1). It would have imposed a ten-year moratorium on enforcement of any state or local law or regulation “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.” The provision is meant to curtail a perceived “patchwork” of different state laws, such as the AI regulation law in Colorado set to go into effect next year.
The bill exempts:
State laws whose primary purpose and effect are to remove impediments to AI deployment or operation;
State laws whose primary purpose and effect are to streamline licensing, permitting, routing, zoning, procurement, or reporting procedures to facilitate the adoption of AI models and systems;
State laws that do not impose any design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models unless required by federal law or other generally applicable law that is not specific to AI models; and
State laws that do not impose a fee or bond unless it is reasonable and cost-based, and AI models are treated in the same manner as other models and systems.
The full breadth of the moratorium is not yet known, but it has already raised concerns among some state lawmakers that their work could be nullified. “I don’t want to leave my children or the citizens of South Carolina’s faith to protect them in the hands of the federal government,” said South Carolina Rep. Brandon Guffey (R), a sponsor of numerous AI and tech-related bills this session.
Among the types of state laws that would almost certainly be prohibited are:
Comprehensive regulatory laws that impose obligations on AI developers and deployers, like the Colorado law enacted in 2024. Such laws require certain disclosures and documentation and impose liability for violations.
Laws that regulate specific use cases for AI, such as laws that:
Require disclosures for consumer interactions with chatbots;
Require individuals affected by a consequential decision in which AI was a substantial factor to have an opportunity to opt out and have an adverse decision reviewed by a human;
Prohibit the use of AI in employment processes if it has a discriminatory effect;
Require digital provenance to be applied to synthetic content generated by AI systems;
Regulate the use of AI in utilization management in the insurance industry; or
Prohibit using AI to set retail prices or housing rental prices.
Provisions of privacy laws that have AI-specific requirements would be prohibited, such as those requiring a right for consumers to opt out of profiling for automated decision making.
Other types of state laws that seem likely to be prohibited under the moratorium:
Social media laws aimed at protecting children that regulate or prohibit the use of algorithmic recommendations;
Laws regulating the testing and deployment of autonomous vehicles;
Laws requiring political deepfake communications to run a disclaimer;
Laws that regulate or prohibit access to AI systems that create sexual deepfakes;
Laws that create specific criminal provisions for the use of AI or deepfakes for fraudulent purposes;
Laws prohibiting the use of unauthorized digital replicas; or
Laws that regulate the use of facial recognition technology.
Then there are those state AI-related laws that would likely be allowed:
Laws that further the development of AI, such as economic incentives, creation of computing clusters, and education and training programs;
Laws that are generally applicable, such as anti-discrimination laws, consumer protection laws, child sexual abuse and revenge porn laws, and consumer data privacy laws (other than AI-specific provisions); or
Laws guiding the use of AI by state government agencies and departments.
The provision is included in the budget bill, which requires a simple majority vote in the Senate, not the typical 60 votes required for cloture. But under the “Byrd Rule,” provisions in the bill must relate to budget outlays and revenues, and not “extraneous matter.”
The House version allocated $500 million to the Department of Commerce to modernize and secure technology systems through the deployment of commercial AI. Proponents argued the moratorium was necessary to ensure the appropriation could achieve its ends without being inhibited by state regulation, a justification that some observers viewed skeptically.
The Senate Committee on Commerce, Science, and Transportation amended the bill by instead tying the moratorium to funds under the Broadband Equity, Access, and Deployment (BEAD) program used for “the construction and deployment of infrastructure for the provision of artificial intelligence models, artificial intelligence systems, or automated decision systems.” The original Senate version would have denied states access to the $42 billion for broadband deployment unless they could certify they were in compliance with a moratorium. Committee Chair Sen. Ted Cruz (R-TX) revised the provision this week, rebranding the moratorium as a “temporary pause” and limiting denial of funds only to the $500 million in broadband funds appropriated by the budget bill. That was enough to earn the approval of the parliamentarian, although Democrats argue the bill language would still restrict access to the full $42 billion under the BEAD program. This distinction is huge, though, as states might be willing to forgo their share of $500 million in order to regulate AI, but giving up their share of $42 billion would be a much bigger ask. The Senate parliamentarian has asked the Commerce Committee to clarify the language.
In addition to opposition from Democrats, Republican Senators such as Rick Scott (R-FL), Josh Hawley (R-MO), John Cornyn (R-TX), Marsha Blackburn (R-TN), and Ron Johnson (R-WI) have raised concerns. Some Republicans have argued the provision infringes upon states’ rights, and a bipartisan group of 40 state attorneys general has signed a letter opposing the moratorium. Sen. Hawley has said he is willing to introduce an amendment to eliminate the provision.
President Trump would like the bill to pass by July 4, but disagreements over numerous parts of the bill may make that deadline hard to meet. Whether the moratorium ultimately survives or not, its inclusion has already ignited a fierce debate over the balance of federal authority, state innovation, and the future of AI governance in the United States.
Recent Developments
Major Policy Action
Hawaii: Gov. Josh Green (D) signed into law a bill (HI HB 1231) prohibiting the use of facial recognition software with any automated traffic enforcement system.
Pennsylvania: The House unanimously passed a bill (PA HB 811) to require deepfake political communications to run a disclaimer within 90 days of an election. The measure now heads to the Senate, where a similar bill was introduced last year but stalled out.
California: The Assembly Privacy and Consumer Protection Committee advanced a bill (CA SB 529) that would prohibit pricing through a consumer’s device based on certain input data such as geolocation, but deleted a provision that required a disclosure for products priced with an algorithm that read, “Price personalized with your personal information.”
Kentucky: The Artificial Intelligence Task Force met on Thursday to discuss business perspectives on artificial intelligence with a presentation from the Kentucky Chamber of Commerce. Last year, the task force released a report with 11 legislative recommendations.
Notable Proposals
California: A Senate bill (CA SB 69) was gutted and amended to insert provisions that would direct the Attorney General to develop a program to build internal expertise in artificial intelligence, with a report to be delivered to the legislature on key developments in artificial intelligence law and policy, and recommendations for additional state oversight or safeguards.
Massachusetts: Sen. Liz Miranda (D) has drafted a bill (MA SD 3007) that would prohibit the use of an automated decision system that has the effect of discriminating against a person or class of persons on the basis of a protected characteristic. The bill would subject covered entities to audits, require documentation of the system along with notice and an opt-out opportunity for affected individuals, and allow individuals to sue for violations.
Michigan: Rep. Sarah Lightner (R) introduced a bill (MI HB 4668) on Tuesday that would impose obligations on certain large AI developers. The proposal would require such developers to implement safety and security protocols outlining how to assess, mitigate, and respond to critical risks, and to provide transparency reports with independent third-party audits; unlike other regulatory proposals, however, the bill omits obligations on deployers. Lightner also introduced MI HB 4667, which would create criminal provisions for the possession, development, deployment, or modification of an artificial intelligence system with the intent to use it to commit a crime.