2025 Enacted AI Laws: Analysis of Developer & Deployer Mandates
MultiState.ai research was published and last updated on Nov. 21, 2025
This year saw a huge jump in interest in artificial intelligence policy at the state level. Last year, we saw 99 bills enacted into law at the state level after lawmakers introduced 635 bills. This year, MultiState has tracked 136 AI-related bills that states have enacted into law so far during the 2025 legislative sessions.
That’s 136 (12%) out of 1,136 bills introduced in state legislatures this year. This is, of course, a lot of bills, but I’ll quote directly from the methodology section of our public tracker in order to put that number into context:
MultiState tracks hundreds of thousands of bills throughout the legislative process for hundreds of clients on a vast variety of topics. Each client has a different perspective on why they care about certain bills, and we customize our tracking and analysis for each client. Our tracking of “AI-related” bills below is an amalgamation of many different perspectives on “artificial intelligence” policy organized internally into dozens of AI subtopics. For this purpose, “AI-related” is defined broadly, and includes legislation relating to generative AI, forming task forces and committees to study AI policy, “right to compute” and AI regulatory sandboxes, budget items to incentivize AI-related investment in a state or boost education on AI use, as well as legislation addressing other “AI” related technologies such as autonomous vehicles and facial recognition systems. The growing number of bills introduced on the topic of AI is an indication of the high level of interest from policymakers in this emerging technology, but note that less than 15% of bills introduced by state lawmakers actually become law on average.
For regulated entities and their government affairs teams, the most consequential provisions of these laws are those that create specific compliance obligations — or mandates — on private sector actors. For this reason, this analysis of 2025 enacted laws will focus on the new AI-related laws that place a mandate on the developer or deployer of an AI system. We’ve written about these terms extensively, but, essentially, a “developer” is an organization building AI systems and a “deployer” is someone using an AI system as a part of their business.
Deepfakes and Public Sector Mandates
Before examining the minority of laws that impose mandates, it's worth understanding that most of the laws enacted this year (110 of 136, or 81%) do not contain mandates on private sector developers or deployers at all.
Of these laws, most criminalize, prohibit, or provide a private right of action for specific unauthorized uses of an AI tool. The simplest form of this is Utah’s AI Policy Act (extended this year by UT SB 332), which clarifies that if you commit fraud with AI, you’re still guilty of fraud.
The most popular version of conduct-level regulation of AI, by far, is to restrict the creation and distribution of deepfakes. This year, 60 (44%) of the state bills enacted into law address this issue. Notably, a small percentage of these bills do place a mandate on websites that host deepfake media. For example, a new law in Texas (TX SB 441) requires AI apps or websites hosting deepfake media to remove sexual deepfakes within 72 hours after receiving a reasonable request to do so.
Another popular topic of these enacted bills — 19 laws total (14%) — is placing mandates solely on public sector use of AI tools. California's new law (CA SB 524), for instance, mandates that law enforcement agencies reveal whether they created a report using AI, either in its entirety or partially. That’s a mandate, but not on a private sector deployer of AI.
Finally, another 26 enacted laws (19%) this year relating to AI contained no mandates at all and did things like establish or expand an AI study task force (MS SB 2426) or allow testing of autonomous vehicles (CT SB 1377).
Private-Sector Mandates on AI Developers & Deployers
We identified 26 of the 136 new laws (19%) in 2025 that contain specific mandates on private sector AI developers or deployers. A handful of these mandates are relatively broad, while most are considerably narrower, often focusing on a specific professional industry or use case. We present each of these new laws below, split into four groupings, with summaries that focus on the mandates imposed on developers and deployers of AI systems.
Mandates on Developers (Broad)
This year’s highest-profile piece of legislation was California Sen. Scott Wiener’s (D) SB 53. This year, Sen. Wiener was able to get his AI safety bill across the finish line after last year’s effort (CA SB 1047) was vetoed by Gov. Newsom. A similar bill, Assemblymember Alex Bores’ (D) RAISE Act in New York, is awaiting a decision from the governor on whether it will join the California measure as one of the first AI safety measures enacted into law in the states. Rep. Giovanni Capriglione’s (R) Texas Responsible Artificial Intelligence Governance Act (TRAIGA), signed into law this year, got much attention when it was introduced, but the final version stripped out many of the private sector mandates, applying them only to the public sector, while maintaining some broad transparency mandates aimed at private sector deployers.
CA SB 53: Requires a large frontier developer to write, implement, and publish on its website a frontier AI framework that applies to the large frontier developer’s frontier models and describes how the large frontier developer approaches incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. Requires a large frontier developer to transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier model.
TX HB 149: Prohibits using AI to encourage self-harm, harm to others, or criminal activity. Prohibits using AI to infringe upon constitutional rights. Prohibits using AI to unlawfully discriminate against a protected class, although disparate impact is not sufficient to demonstrate an intent to discriminate.
CA AB 316: Prohibits a defendant that developed, modified, or used artificial intelligence from asserting a defense that the artificial intelligence autonomously caused the harm to the plaintiff.
CA AB 853: Beginning in 2027, prohibits GenAI system hosting platforms from knowingly making available GenAI systems that fail to include required latent disclosures in AI-generated content.
Mandates on Developers (Narrow)
Another set of bills places mandates on both developers and deployers of AI systems with a narrower focus. This year, the bills in this category that have become law fit neatly into two particular areas of concern for lawmakers: chatbots representing themselves as healthcare professionals and companion chatbots.
CA AB 489: Prohibits artificial intelligence or generative AI from using specified terms, letters, or phrases to falsely indicate or imply possession of a license or certificate to practice a health care profession. Prohibits AI technology from using certain terms, letters, or phrases that indicate or imply that the advice, care, reports, or assessments being provided through AI are being provided by a natural person holding the appropriate health care license or certificate.
CA SB 243: Requires a clear and conspicuous notification that a companion chatbot is artificially generated if a reasonable person would otherwise be misled into believing they are interacting with a human. Requires a protocol for preventing suicidal ideation, suicide, or self-harm content. Requires a companion chatbot operator to disclose to a minor that the user is interacting with AI, provide a notification every three hours reminding the user to take a break, and include measures to prevent production of sexual content. Exempts customer service bots, video game bots, and voice command interfaces that act as virtual assistants.
IL HB 1806: Requires anyone offering therapy or psychotherapy services, including internet-based AI, to be a licensed professional. Prohibits a licensed professional from allowing artificial intelligence to make independent therapeutic decisions, directly interact with clients through any form of therapeutic communication, generate therapeutic recommendations or treatment plans, or detect emotions or mental states.
NY A 3008 / S 3008: Requires an artificial intelligence companion to contain protocols for addressing self-harm expressed by a user and requires notice to the user.
NV AB 406: Prohibits schools from using AI to perform the core mental health duties of school counselors, psychologists, or social workers. Prohibits artificial intelligence from practicing professional mental or behavioral health care or making such representations. Allows a mental and behavioral health provider to use artificial intelligence only for administrative support, and requires independent review of any reports or analyses created by AI.
OR HB 2748: Prohibits an entity that is not a human from using specified nursing titles and abbreviations.
Mandates on Deployers (Broad)
While we saw about a dozen bills introduced that we would consider broadly applying mandates to deployers of AI, none of the algorithmic discrimination bills (similar to last year’s SB 205 in Colorado) were enacted into law. The Virginia bill came the closest, but it was ultimately vetoed by Gov. Youngkin (R). Colorado’s first-in-the-nation AI law (SB 205) itself suffered a setback this year: after a series of proposals from the original sponsor, which would have significantly scaled back the law’s mandates on deployers, failed to reach a compromise, lawmakers enacted a bill delaying the effective date of SB 205 until June 2026. This gives Colorado lawmakers another legislative session to amend the law before it takes effect.
UT SB 332: Extends the repeal date of the Artificial Intelligence Policy Act to July 1, 2027. That law (UT SB 149) requires certain licensed professionals (e.g., mental health providers) to proactively disclose when a consumer is interacting with AI technology while other professionals (e.g., telemarketers) must disclose AI use when asked by the consumer.
Mandates on Deployers (Narrow)
The largest category of enacted laws in 2025 places mandates on private-sector deployers of AI systems in a narrow fashion, with 15 AI-related laws (11%) classified here. As with the developer laws above, the narrow focus indicates that the mandate is limited to a specific economic area or profession. These laws fell into a handful of specific categories this year: healthcare (5), chatbots (3), sexual deepfakes (2), algorithmic pricing (2), schools (1), public utilities (1), and food delivery (1).
CA AB 325: Prohibits using or distributing a pricing algorithm that uses nonpublic competitor data. Makes it unlawful to distribute a common pricing algorithm as a part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce. Prohibits a person from using or distributing a common pricing algorithm if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services in the state.
CA AB 578: Requires a food delivery platform to include a clear and conspicuous customer service feature that allows a customer to contact a natural person. Allows a food delivery platform to use an automated system to address customer service concerns, provided a natural person is available when the automated system cannot resolve a customer's concerns.
CA AB 621: Creates liability for a service that enables the ongoing operation of deepfake pornography if the service engaged in knowing and reckless facilitation, aiding, or abetting of the disclosure of such material.
CO SB 143: Allows contracts for facial recognition software for educational purposes or specific safety-related situations, such as threats to the school, missing students, or banned individuals attempting to reenter. Requires parental or individual consent before collecting biometric data, with an opt-in form that details its use, control, and retention schedule. Requires public notice if facial recognition is used for security purposes. Limits biometric data retention to 18 months.
MD HB 820: Requires that certain carriers, pharmacy benefits managers, and private review agents ensure that artificial intelligence, algorithms, or other software tools are used in a specified manner when conducting utilization review. Prohibits such tools from replacing the judgment of healthcare providers or serving as the sole basis for denying, delaying, or modifying healthcare services. Requires fair application free from unlawful discrimination, subject to performance reviews and audits.
ME LD 1727: Prohibits using a chatbot in commerce in a way that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless a clear and conspicuous disclosure is made.
NE LB 77: Prohibits an artificial intelligence-based algorithm from being the sole basis of a utilization review agent's decision to deny, delay, or modify health care services based, in whole or in part, on medical necessity.
NH HB 143: Creates criminal and civil liability for the owner or operator of artificial intelligence chat services to knowingly direct communication to a child with the intent to facilitate, encourage, offer, solicit, or recommend that the child imminently engage in sexually explicit conduct, participate in the production of sexual imagery, illegally use drugs or alcohol, commit acts of self-harm or suicide, or engage in violent crimes against others.
NM HB 178: Directs the board of nursing to promulgate rules to establish standards for the use of artificial intelligence in nursing.
NV AB 325: Prohibits a public utility from using artificial intelligence to make a final decision regarding whether to reduce or shut down utility service in response to a disaster or emergency.
NY A 1417 / S 7882: Makes it unlawful for individuals or entities to knowingly or recklessly help facilitate agreements between residential rental property owners or managers not to compete on rental housing terms, including by operating or licensing a software, data analytics service, or algorithmic device that performs a coordinating function on behalf of or between and among such residential rental property owners or managers.
TX HB 581: Requires websites with a publicly accessible tool for creating artificial sexual material harmful to minors to use reasonable age verification methods. Requires an application for creating such material to use as its source an adult 18 or older who has consented to the use of their face and body as a source.
TX SB 1188: Requires healthcare practitioners utilizing AI for diagnostic or other purposes to review all AI-obtained information for accuracy before entering it into a patient's EHR.
TX SB 815: Prohibits a utilization review agent from using an automated decision system to make an adverse determination.
UT HB 452: Prohibits a mental health chatbot from selling or sharing individually identifiable health information or user input with a third party, or advertising a specific product in a conversation unless it is disclosed as an advertisement. Requires disclosure to the consumer that the chatbot is not a human. Creates an affirmative defense if the mental health chatbot supplier develops certain policies.
Caveats & Clarifications
First, we realize that the 2025 legislative sessions have not concluded in all the states. As of this writing, seven states are still in session and have not yet adjourned sine die for the year. Importantly, as noted above, while the New York Legislature has finished up its business for the year, the governor is still sitting on a very important AI safety bill that she could (and likely will) sign into law by the end of the year.
Second, as for the number of bills introduced and enacted, note that New York and several states use a companion bill model, which often means that two bills are introduced in order to enact a single law. For simplicity, we’ve combined New York companion bills in the enacted list while leaving them separate in the introduced list. Finally, New Jersey and Virginia have off-year sessions that run from 2024-2025, whereas most other states run a two-year session from 2025-2026. Because of this, two of the Virginia bills on our enacted list were technically signed into law in 2024, but are part of the 2025 legislative session.