AI Policy Defined: Deployers & High-Risk Systems

Key highlights this week:

  • We’re tracking 1,136 AI-related bills across all 50 states during the 2025 legislative session (a count that now includes some bills prefiled for the 2026 sessions).

  • AGs in North Carolina and Utah announce the formation of a bipartisan AI Task Force. 

  • New York’s AG announces that the state’s algorithmic pricing disclosure law took effect this month. 

  • And as the 2025 legislative sessions wrap up, we’ll take a step back and review the critical definitions policymakers use to regulate AI, including those in Colorado’s AI law, in this week’s deep dive. 

Artificial intelligence regulation is beginning to take shape in the states, but with different approaches. California’s first-in-the-nation AI safety law targets the largest developers, requiring transparency and safety reporting for so-called “frontier models.” Colorado’s law, which is set to take effect in June 2026, instead focuses on “high-risk” systems and creates obligations for both the developers who build them and the deployers who use them to make “consequential decisions.” Other laws focus more narrowly on specific ways deployers use AI. What these and other efforts have in common is that they all hinge on how lawmakers define the scope of regulation.

This week, we’ll begin a multi-part series examining how enacted laws define important terms in AI policy. This first article will focus on the “deployers” and “integrators” of artificial intelligence models. Next week, we will focus on definitions relating to model-developer regulation.

For companies adopting AI tools, one of the most important questions under recent state laws is whether they count as a “deployer” of artificial intelligence. Colorado’s law (CO SB 205) doesn’t just regulate model developers; it imposes obligations on businesses that use AI systems in ways that can materially affect people’s lives. That means a bank using an algorithm to screen loan applicants, an HR platform reviewing resumes, or even a landlord applying automated tenant scoring could all fall within the scope of a “deployer” under these laws.  

What is a “deployer” (or integrator or distributor, for that matter)?

States generally define “deployers” as those who use — rather than develop — an artificial intelligence system. For example, the Colorado law defines “deployer” simply as “a person doing business in this state that deploys a high-risk artificial intelligence system” and defines “deploys” as “to use a high-risk artificial intelligence system.” While this definition is straightforward, it’s also very broad. 

Some proposals have also explored regulating additional actors in the AI supply chain. An earlier version of the Virginia algorithmic discrimination legislation (VA HB 2094), which Gov. Glenn Youngkin (R) ultimately vetoed, included language defining and regulating “integrators” of AI systems (although those particular provisions were removed before reaching the governor’s desk). It would have imposed regulations on those who “integrate” an AI system into software and sell that software commercially, such as an HR-software company that embeds a third-party resume-scoring model into its applicant-tracking system. 

While we’ve seen the integrator language spread to additional proposals, other lawmakers have tested the waters with new classifications that we haven’t seen elsewhere quite yet. For instance, the original version of Texas Rep. Giovanni Capriglione’s (R) TRAIGA legislation (TX HB 1709) would have regulated “distributors” as well, which were defined as those selling AI systems. We’re still in the relatively early stages of AI regulation, and policymakers are still working with stakeholders to identify, differentiate, and define the important players in this emerging ecosystem. 

How is artificial intelligence defined?

Defining what “artificial intelligence” means is a fundamental question in regulating AI. In fact, we covered this very topic in only our second publication of MultiState.ai. Since those early days, state lawmakers have become more consistent with their definition of AI, typically leaning on the standard established by the Organisation for Economic Co-operation and Development (OECD) in its 2019 AI Principles, a definition that was later echoed by the National Institute of Standards and Technology (NIST) in its 2023 AI Risk Management Framework. Both describe artificial intelligence as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This phrasing now appears nearly verbatim in most state bills, including the Colorado law, making it the de facto national baseline. 

However, state lawmakers will occasionally stray from this definition. For example, a Georgia bill (GA SB 167) defined AI as a system that “emulates the capability of a person to receive audio, visual, text, or any other form of information and use the information received to emulate a human cognitive process.”

Which types of AI systems do the regulations cover?

The Colorado law and many proposals, such as bills in Connecticut (CT SB 2) and Virginia, seek to regulate “high-risk” artificial intelligence systems, defined as those that are a substantial factor in making a consequential decision affecting a consumer. A bill in Maryland (MD SB 936) would also include systems that are “specifically intended to autonomously make” a consequential decision. A Massachusetts bill (MA HB 94) would use a lower threshold, requiring only that the system “materially influence” consequential decisions rather than serve as a substantial factor in them.

An Oklahoma proposal (OK HB 1916) takes a different approach, establishing four risk tiers modeled loosely on the European Union’s AI Act: “unacceptable risk” systems that are “incompatible with social values and fundamental rights” (and therefore prohibited); “high-risk” systems with “significant potential to impact safety, civil liberties, or fundamental rights”; “limited-risk” systems posing moderate risks such as manipulation or deceit; and “minimal-risk” systems presenting little or no user risk.
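As a quick reference, the sketch below (in Python) organizes Oklahoma’s proposed four-tier taxonomy as a simple enumeration. The tier descriptions are paraphrased from the bill as summarized above; the enum itself is just an illustrative way to structure them, not anything the bill prescribes.

```python
from enum import Enum

class OklahomaRiskTier(Enum):
    """Risk tiers proposed in OK HB 1916 (descriptions paraphrased)."""
    UNACCEPTABLE = "incompatible with social values and fundamental rights; prohibited"
    HIGH = "significant potential to impact safety, civil liberties, or fundamental rights"
    LIMITED = "moderate risks such as manipulation or deceit"
    MINIMAL = "little or no risk to users"

# Example: look up the description attached to the high-risk tier.
print(OklahomaRiskTier.HIGH.value)
```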

What types of systems are exempt?

Definitions also typically include explicit exclusions for systems that lawmakers do not intend to regulate. Colorado’s law articulates a list of exempted systems that has become a template for other state policymakers. The exemptions include anti-fraud technology that does not use facial recognition, anti-malware and anti-virus software, firewalls, spam or robocall filters, calculators, databases, data storage, networking, data-caching, hosting, domain registration, spell-checking, spreadsheets, website loading, and customer chat interfaces, subject to acceptable-use policies. It also excludes technologies that perform narrow procedural tasks or detect patterns. 

Connecticut’s 2025 proposal copied most of Colorado’s exempted list but went further by explicitly exempting internal business systems, such as those used for ordering office supplies or processing payments. Other technologies explicitly exempt from regulation in various bills include autonomous vehicles (MD SB 936 and VA HB 2094), search engines (IA HSB 2094), auto-correct functions and electronic communications (NY A 8884), and “operational technology” (TX HB 1709). 

What is a “substantial factor”?

Many state proposals designate a system as “high-risk” when it makes a consequential decision or serves as a substantial factor in that decision. Colorado defines a “substantial factor” as one that (1) is generated by an AI system, (2) assists in making a consequential decision, and (3) is capable of altering the outcome of that decision. Maryland’s proposal was narrower, requiring that the factor be the “principal basis” for the decision, a significantly higher threshold than Colorado’s. Some bills, like those proposed in Connecticut and Rhode Island (RI SB 627), would also exclude decisions where there was an opportunity for some form of human oversight.
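Read literally, Colorado’s test is conjunctive: all three prongs must hold before an AI output counts as a substantial factor. The minimal Python sketch below expresses that reading; the class and field names are our own illustrative shorthand rather than statutory terms, and a stricter variant like Maryland’s “principal basis” standard would swap in a different predicate.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """Hypothetical record describing one output of an AI system."""
    generated_by_ai_system: bool  # prong 1: the factor is generated by an AI system
    assists_decision: bool        # prong 2: it assists in making a consequential decision
    can_alter_outcome: bool       # prong 3: it is capable of altering the decision's outcome

def is_substantial_factor(output: AIOutput) -> bool:
    """Colorado-style test: all three prongs must be satisfied."""
    return (
        output.generated_by_ai_system
        and output.assists_decision
        and output.can_alter_outcome
    )

# Example: an AI-generated credit score used to approve or deny a loan application.
print(is_substantial_factor(AIOutput(True, True, True)))  # -> True
```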

What is a “consequential decision”?

Not every AI system that influences a decision will trigger regulation under these broad algorithmic discrimination bills. Rather, states set the threshold by specifying which categories of decisions count as consequential. The Colorado law defines a consequential decision as one that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of:

  • Education;

  • Employment;

  • Financial or lending services;

  • Essential government services; 

  • Health care services;

  • Housing;

  • Insurance; or

  • Legal services.

This is already a very broad list of industries that would encompass much of the economy, yet several proposals would have gone further. The bills in Maryland and Virginia would also have included decisions related to incarceration, such as parole, pardon, and probation decisions, as well as those affecting marital status. A Texas bill (TX HB 1709) would have also included decisions affecting transportation services, residential utility services, or “constitutionally protected services or products.” The introduced version of the Connecticut bill would also have applied to “any automated task allocation that limits, segregates or classifies employees” as well as education decisions involving plagiarism detection, accreditations, certifications, and assessments.
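To see how these definitional pieces compose, here is a minimal sketch of a Colorado-style scoping check that combines the enumerated decision areas with the substantial-factor test sketched earlier. The area labels, the DeploymentReview fields, and the function names are illustrative assumptions rather than statutory language, and a real analysis would also need to work through the full exemption list discussed above.

```python
from dataclasses import dataclass

# The eight decision areas enumerated in the Colorado law (labels are shorthand).
CONSEQUENTIAL_DECISION_AREAS = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "health_care_services",
    "housing", "insurance", "legal_services",
}

@dataclass
class DeploymentReview:
    """Hypothetical intake record for an AI use case under review."""
    decision_area: str              # the kind of decision the system touches
    material_or_legal_effect: bool  # material legal or similarly significant effect
    substantial_factor: bool        # result of the three-prong test sketched earlier
    exempted_technology: bool       # e.g., a spam filter, calculator, or spell-checker

def is_high_risk_deployment(review: DeploymentReview) -> bool:
    """Colorado-style scoping: a non-exempt system that is a substantial factor
    in a consequential decision within an enumerated area."""
    if review.exempted_technology:
        return False
    is_consequential = (
        review.decision_area in CONSEQUENTIAL_DECISION_AREAS
        and review.material_or_legal_effect
    )
    return is_consequential and review.substantial_factor

# Example: a resume-screening tool that materially shapes hiring decisions.
print(is_high_risk_deployment(DeploymentReview(
    decision_area="employment",
    material_or_legal_effect=True,
    substantial_factor=True,
    exempted_technology=False,
)))  # -> True
```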

What’s next?

Ultimately, these definitions are critical because they determine the true scope of AI regulation and who must comply under certain obligations. Small drafting differences can meaningfully broaden or narrow the reach of a law, shifting whole categories of tools into or out of the “high-risk” bucket. As states take up new proposals next year, these threshold choices will be essential to watch.

However, outside of Colorado, these broad proposals to regulate “high-risk” AI systems did not catch on, and no other state enacted one into law this year. The Colorado law itself has yet to take effect, and lawmakers further delayed its effective date (CO SB 4) as they negotiate significant changes, which we expect will remove the law’s most onerous provisions on deployers and replace them with heightened transparency requirements. Nonetheless, we don’t expect these concepts to disappear; instead, they’ll likely reappear in novel forms as policymakers attempt to solve the same issues with new solutions. 

Recent Developments

In the News

  • AGs Announce AI Task Force: This week, North Carolina Attorney General Jeff Jackson (D) and Utah Attorney General Derek Brown (R) announced the formation of a bipartisan AI Task Force that will partner with frontier model developers to identify emerging issues related to AI and to develop safeguards AI developers should follow to protect the public as this transformative technology accelerates. The task force will be organized with help from the Attorney General Alliance. 

Major Policy Action  

  • Congress: Senators Josh Hawley (R-MO) and Mark Warner (D-VA) introduced the AI-Related Job Impacts Clarity Act (S. 3108), which would require large companies and federal agencies to report whenever AI systems replace or eliminate jobs. Under the proposal, the U.S. Department of Labor would collect this information and issue public reports, giving Congress clearer data on how AI is affecting the workforce.

  • Kentucky: The Kentucky AI Task Force met Thursday to send legislative leaders a list of recommendations that includes strengthening privacy protections, updating agency AI-use rules, coordinating with universities and providers on Medicaid data research, protecting minors online, clarifying AI’s role in licensed professions, planning for data-center siting and infrastructure demands, and integrating AI oversight into existing committees. The group also urged federal action on digital replica rights, consumer protection, data practices, and impacts on small businesses.

  • New York: Earlier this month, Attorney General Letitia James (D) issued a consumer alert noting that the state’s new algorithmic pricing disclosure law took effect on November 10. The law requires companies that use algorithmic pricing to prominently inform consumers that their prices are being set using personal data.

  • Virginia: A new review by the Joint Legislative Audit and Review Commission finds that the state AI registry, created under a 2024 executive order, suffers from inconsistent guidance, poor coordination, and incomplete reporting by agencies. Agencies may not be updating or reporting many of their AI use cases, making it difficult to determine whether they comply with state standards, which require impact analyses and a justification for using AI instead of alternative tools.  

Notable Proposals  

  • Michigan: A pair of bills introduced in the House would prohibit dynamic pricing by retailers. One bill (MI HB 5222) would apply to grocery stores, while another (MI HB 5223) would apply to retail box stores and membership warehouse clubs.

  • New York: A bill (NY AB 9219) introduced for next year would regulate AI technology intended for use in regulated professional fields, such as medicine, law, and engineering. The bill would require developers to substantially involve a qualified professional expert in the design, training, validation, and risk assessment of such systems.

  • Pennsylvania: House Democrats introduced a companion chatbot bill (PA HB 2006) that would require operators to notify users at the outset of each session that they are communicating with an AI companion, with a reminder every three hours. The bill would also require protocols for addressing user suicidal ideation and would prohibit an AI companion from claiming to be, or to replace the services of, a licensed mental health professional. Pluribus News reports that lawmakers in Georgia and Utah are preparing to introduce companion chatbot bills for next year.
