Montana is the First State to Guarantee Computational Freedom
Key highlights this week:
We’re tracking 986 bills in all 50 states related to AI during the 2025 legislative session.
Arkansas enacted a handful of laws to limit sexual deepfakes, protect IP rights, and mandate government policies for AI tools.
The governor in Maryland signed two bills into law, creating an AI study committee and prohibiting sexual deepfakes.
North Dakota added two new sexual deepfake laws to its books, bringing the number of states with laws addressing sexual deepfakes to 35.
And last week, Montana became the first state to enact a “Right to Compute Act,” which is the subject of this week’s deep dive below.
Over the past month or so, we’ve chronicled the deceleration of the regulatory movement that had high hopes at the beginning of the year to enact AI consumer protection and safety legislation at the state level. The fear of geopolitical competition, especially after the hype around the Chinese DeepSeek model, along with a major policy shift on the federal level under President Trump, has changed the narrative in the states, which were already facing skepticism from governors. But there’s also a small movement in the states to flip the script on AI regulation by enacting a “right to compute.” Last week, Montana became the first state to enact such a bill into law.
Montana lawmakers, who are only in session for a short period every two years, made up for lost time this year by introducing 48 AI-related bills. Two have been signed into law, and another six bills have passed the legislature and are awaiting the governor’s signature to become law. Two additional bills have passed their chamber of origin and await further action before the legislature is scheduled to adjourn for the year on May 3.
Montana’s “Right to Compute Act” was introduced by Senator Daniel Zolnikov (R), who previously backed the state’s data privacy law as well as pro-cryptocurrency laws. Gov. Gianforte (R) signed the “Right to Compute Act” into law on April 16, 2025. This wasn’t Sen. Zolnikov’s first foray into the right to compute debate. In 2023, Sen. Zolnikov passed the country’s first right-to-mine law (MT SB 178), which gave Bitcoin miners the right to mine “without being subjected to undue discrimination or requirements.”
Strict Scrutiny for Compute Restrictions
Sen. Zolnikov’s "Right to Compute Act" (MT SB 212) is broader than crypto. It establishes a fundamental right to privately own and use computational resources for lawful purposes in Montana. Under the law, any government restriction on computational resources must be narrowly tailored to fulfill a compelling government interest. This is known as a “strict scrutiny” standard under constitutional law, which is the highest standard of review that courts use to evaluate the constitutionality of governmental laws. Strict scrutiny is generally reserved for our most fundamental rights, such as the freedom of speech under the First Amendment. This standard places the burden of proof on the government and is very difficult to satisfy.
Risk Management Policies for Critical Infrastructure
However, the law does contain some restrictions on AI use. Specifically, the bill includes a provision for when “critical infrastructure facilities are controlled in whole or in part by a critical artificial intelligence system.” In that instance, the deployer must develop a risk management policy. But what’s a “critical infrastructure facility”? The bill points to a definition in current Montana statutes that lists 22 categories of critical infrastructure facilities, including power plants, water systems, telecommunications networks, and major industrial manufacturing facilities.
However, this risk management policy requirement is triggered only if one of those facilities is controlled (“in whole or in part”) by a “critical artificial intelligence system,” which is defined as an AI system “designed and deployed to make, or is a substantial factor in making, a consequential decision.” The law doesn’t further define “consequential decision,” but it does include a list of exceptions that carves out many more routine AI tasks. Overall, especially when compared to the broad scope of the algorithmic discrimination bills, this is a fairly narrow safety requirement limited to situations where an AI system is making critical decisions at a facility that is vital to the public interest.
Shutdown Mandate Removed from Original Bill
Notably, the version of the bill originally introduced in the Montana Legislature required that critical infrastructure facilities be able to perform a full shutdown of the AI system. The legislation was later amended to remove the requirement that a deployer of AI at a critical infrastructure facility have the “capability to disable the artificial intelligence system's control over the infrastructure and revert to human control within a reasonable amount of time.” This softened the obligations on critical artificial intelligence systems from a shutdown capability to only the risk management policy requirement that appears in the enacted version.
Regulatory Pushback
The “right to compute” idea has been enthusiastically backed by proponents of a light regulatory touch for AI. “Most states are going in the wrong direction because of fear-mongering,” Sen. Zolnikov told Pluribus News. “All it does is make it unfair to business.” A bill (AZ HB 2342) aimed more at the crypto mining community (but also applying to AI compute) was signed into law last week, but it’s limited to prohibiting local governments from restricting an individual’s use of computational power at their residence.
While the only other explicit “right to compute” proposal — New Hampshire’s proposed constitutional amendment (NH CACR 6) — hasn’t gained the same traction, legislation restricting government use of AI is increasingly popular. And more importantly, the movement to broadly regulate AI this year has failed to accomplish major wins so far. This has all the hallmarks of becoming a major philosophical divide between blue and red states.
Recent Developments
Major Policy Action
Arkansas: Gov. Huckabee Sanders (R) signed several AI-related bills into law this week, including measures to prohibit sexual deepfakes (AR HB 1529 and AR HB 1967), add sexual deepfakes to child pornography criminal provisions (AR HB 1877), grant intellectual property rights to content generated by generative AI models (AR HB 1876), and require governments to develop policies on AI and automated decision tools (AR HB 1958).
Maryland: On Tuesday, Gov. Moore (D) signed a sexual deepfakes bill (MD SB 360) and an AI study bill (MD HB 956) into law. The first bill creates a civil action for non-consensual sexual deepfakes, and the second bill creates the Workgroup on Artificial Intelligence Implementation to monitor and make recommendations.
North Dakota: On Monday, Gov. Armstrong (R) signed two sexual deepfake bills into law. ND HB 1386 adds deepfakes to child pornography crimes, and ND HB 1351 provides criminal and civil action for non-consensual sexual deepfakes.
California: The Judiciary Committee amended a proposal (CA AB 412) that would protect copyrighted material from AI models, modifying the process by which copyright owners can request information from a developer. Gov. Newsom (D) also sent a letter to the California Privacy Protection Agency warning it not to enact onerous regulations that could stifle the artificial intelligence sector. We wrote recently about how the agency was already paring back regulations in the wake of shifting political realities.
Nevada: The Senate Committee on Commerce and Labor amended a bill (NV SB 299) that would regulate the development and use of AI systems by requiring AI companies to register with the state and imposing consumer protections. The amendment removes some requirements for AI companies, allows law enforcement and financial institutions to use AI, and requires social media platforms to get opt-in consent from a user before personal information is used to train AI systems.
Texas: The House substituted and passed TX HB 149, the pared-down AI regulation bill, sending the measure on to the Senate. The substitution deleted a provision prohibiting political viewpoint discrimination by an AI system. Additionally, the Senate Business & Commerce Committee amended a bill (TX SB 1964) that would create a framework for the ethical and transparent use of AI by state and local governments. The amendment tweaks definitions, aligns standards with federal AI risk management frameworks, directs the advisory board to recommend the elimination of rules that restrict the innovation of AI systems, and clarifies that a government agency need not disclose to consumers that they are interacting with AI if a reasonable person would know the interaction is with AI.
Notable Proposals
Kansas: Last week, the legislature sent Gov. Kelly (D) a Senate bill (KS SB 186) that was amended in conference committee to include artificially generated visual depictions in the state’s sexual exploitation provisions. Gov. Kelly has ten days to sign or veto the bill, or it automatically becomes law.
New York: Sen. Kristen Gonzalez (D) introduced a bill (NY S 7599) that prohibits government agencies from using or acquiring automated decision-making systems in contexts that affect public assistance, individual rights, or civil liberties unless those systems are subject to ongoing, meaningful human review.