Congressional AI Moratorium Faces Legislative Hurdles as States Push Back

Key Takeaways

  • The state AI regulation moratorium in the One Big Beautiful Bill Act would freeze enforcement of most state AI laws for ten years, but its final scope and survival remain uncertain as the bill moves through Congress.

  • If enacted, the moratorium would block comprehensive state AI laws like the Colorado AI regulation law, as well as state rules targeting specific AI use cases, but would allow laws that promote AI development or are generally applicable.

  • The Senate has revised the moratorium to limit its impact on broadband funding, but debate continues over whether the federal preemption provision would restrict states’ access to the larger pool of infrastructure funds.

  • Lawmakers from both parties and state attorneys general have raised concerns that the state AI regulation moratorium could undermine states’ ability to address local AI issues.

  • The ongoing debate highlights tensions between federal authority and state innovation in AI governance, with the courts likely to decide which state AI laws are ultimately blocked.

  • Update: See our updated analysis of the moratorium from November 2025.


As we reported back in May, the Congressional budget reconciliation legislation (the “One Big Beautiful Bill Act”) heading toward the President’s desk includes a provision that would place a ten-year freeze on state regulation of artificial intelligence. However, to overcome the Byrd Rule in the Senate, the AI moratorium has undergone significant changes and is not guaranteed to become law in its current form. The contentious measure cleared a key procedural hurdle in the Senate this week, but it still faces scrutiny as the bill advances, and lawmakers’ concerns must be resolved if it is to survive. Even if it is enacted, many questions remain about which state and local laws it would actually block, a question the courts will likely need to decide.

The version of the AI moratorium that we outlined in May appeared in the original House version of the reconciliation bill (US HR 1). It would have imposed a ten-year moratorium on enforcement of any state or local law or regulation “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.” The provision is intended to curtail a perceived “patchwork” of differing state laws, such as Colorado’s AI regulation law set to go into effect next year.

The bill exempts:

  • State laws whose primary purpose and effect is to remove impediments to AI deployment or operation;

  • State laws whose primary purpose and effect is to streamline licensing, permitting, routing, zoning, procurement, or reporting procedures to facilitate the adoption of AI models and systems; 

  • State laws that do not impose any design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models unless required by federal law or other generally applicable law that is not specific to AI models; and

  • State laws that do not impose a fee or bond unless it is reasonable and cost-based, and AI models are treated in the same manner as other models and systems.

The full breadth of the moratorium is not yet known, but it has already raised concerns among some state lawmakers that their work could be nullified. “I don’t want to leave my children or the citizens of South Carolina’s faith to protect them in the hands of the federal government,” said South Carolina Rep. Brandon Guffey (R), a sponsor of numerous AI and tech-related bills this session.

Among the types of state laws that would almost certainly be prohibited are:

  • Comprehensive regulatory laws that impose obligations on AI developers and deployers, like the Colorado law enacted in 2024. Such laws require certain disclosures and documentation and impose liability for violations. 

  • Laws that regulate specific use cases for AI, such as laws that:

    • Require disclosures for consumer interactions with chatbots;

    • Require individuals affected by a consequential decision in which AI was a substantial factor to have an opportunity to opt out and have an adverse decision reviewed by a human;

    • Prohibit the use of AI in employment processes if it has a discriminatory effect;

    • Require digital provenance to be applied to synthetic content generated by AI systems;

    • Regulate the use of AI in utilization management in the insurance industry; or

    • Prohibit using AI to set retail prices or housing rental prices.

  • Provisions of privacy laws that have AI-specific requirements, such as those granting consumers a right to opt out of profiling for automated decision making. 

Other types of state laws that seem likely to be prohibited under the moratorium include:

  • Social media laws aimed at protecting children that regulate or prohibit the use of algorithmic recommendations;

  • Laws regulating the testing and deployment of autonomous vehicles;

  • Laws requiring political deepfake communications to run a disclaimer;

  • Laws that regulate or prohibit access to AI systems that create sexual deepfakes;

  • Laws that create specific criminal provisions for the use of AI or deepfakes for fraudulent purposes;

  • Laws prohibiting the use of unauthorized digital replicas; or

  • Laws that regulate the use of facial recognition technology.

Then there are those state AI-related laws that would likely be allowed:

  • Laws that further the development of AI, such as economic incentives, creation of computing clusters, and education and training programs;

  • Laws that are generally applicable, such as anti-discrimination laws, consumer protection laws, child sexual abuse and revenge porn laws, and consumer data privacy laws (other than AI-specific provisions); or

  • Laws guiding the use of AI by state government agencies and departments. 

The provision is included in the budget bill, which requires a simple majority vote in the Senate, not the typical 60 votes required for cloture. But under the “Byrd Rule,” provisions in the bill must relate to budget outlays and revenues, and not “extraneous matter.”

The House version allocated $500 million to the Department of Commerce to modernize and secure technology systems through the deployment of commercial AI. Proponents argued the moratorium was necessary to ensure the appropriation could achieve its ends without being inhibited by state regulation, a justification that some observers viewed skeptically.

The Senate Committee on Commerce, Science, and Transportation amended the bill by instead tying the moratorium to funds under the Broadband Equity, Access, and Deployment (BEAD) program used for “the construction and deployment of infrastructure for the provision of artificial intelligence models, artificial intelligence systems, or automated decision systems.” The original Senate version would have denied states access to the $42 billion for broadband deployment unless they could certify that they were in compliance with the moratorium. Committee Chair Sen. Ted Cruz (R-TX) revised the provision this week, rebranding the moratorium as a “temporary pause” and limiting the denial of funds to only the $500 million in broadband funds appropriated by the budget bill. That was enough to earn the approval of the parliamentarian, although Democrats argue the bill language would still restrict access to the full $42 billion under the BEAD program. The distinction matters: states might be willing to forgo their share of $500 million in order to regulate AI, but giving up their share of $42 billion would be a much bigger ask. The Senate parliamentarian has asked the Commerce Committee to clarify the language.

In addition to opposition from Democrats, Republican Senators such as Rick Scott (R-FL), Josh Hawley (R-MO), John Cornyn (R-TX), Marsha Blackburn (R-TN), and Ron Johnson (R-WI) have raised concerns. Some Republicans have argued the provision infringes upon states’ rights, and a bipartisan group of 40 state attorneys general signed a letter opposing the moratorium. Sen. Hawley has said he is willing to introduce an amendment to eliminate the provision.

President Trump would like the bill to pass by July 4, but disagreements over numerous parts of the bill may make that deadline hard to meet. Whether the moratorium ultimately survives or not, its inclusion has already ignited a fierce debate over the balance of federal authority, state innovation, and the future of AI governance in the United States.
