Colorado Proposes Narrowing of Landmark AI Law

Key highlights this week:

  • We’re tracking 995 bills in all 50 states related to AI during the 2025 legislative session.

  • Kansas enacts a sexual deepfake bill. 

  • The governor of Tennessee signs one deepfake bill into law while lawmakers send another to his desk. 

  • San Diego, California, became the latest city to prohibit the use of AI-driven software to set rental housing prices.

  • And lawmakers introduced amendments to Colorado’s first-in-the-nation broad algorithmic discrimination law, which is the topic of this week’s deep dive. 

In 2024, Colorado became the first state to enact a broad algorithmic discrimination law regulating high-risk artificial intelligence (AI) systems with the passage of SB 205. Gov. Jared Polis (D) signed the bill “with reservations,” vowing there would be changes to the law before it went into effect. This week, we saw the first concrete proposal to amend that law with the introduction of SB 318 by Sen. Robert Rodriguez (D), who drafted the original AI law. The proposed changes would significantly pare back the law, loosening its strictest provisions while preserving core protections against algorithmic discrimination.

We wrote about Colorado’s original AI bill when it passed last spring. The measure places obligations on developers and deployers of “high-risk” AI models to prevent algorithmic discrimination. The law also requires certain disclosures to consumers when high-risk AI systems are used as a “substantial factor” in “consequential decisions,” along with an opportunity to appeal an adverse decision through human review. 

The law was originally scheduled to take effect on February 1, 2026, but that date would be pushed back to January 1, 2027, if this bill is enacted. The other changes proposed this week would narrow the scope of the original law and reduce some of its obligations. 

Expanding Exemptions

The original law exempted small deployers from some risk management, impact assessment, and disclosure requirements, but this year’s bill would create more small business exemptions. 

Currently, deployers with 50 employees or fewer are exempt from these requirements under the law, but this week’s amendment would expand that exemption to businesses with 500 full-time employees or fewer, starting in 2027. In subsequent years, the proposed exemption threshold would decrease on a set schedule until, by April 1, 2029, it applies only to businesses with fewer than 100 employees. The bill also adds exemptions for deployers that use a high-risk AI system only for hiring external job candidates and fall below a certain employee threshold. 

The proposed amendments would also exempt developers from technical disclosure and public summary requirements if they have received less than $10 million in third-party investment, have less than $5 million in annual revenue, and have been actively operating and generating revenue for less than five years, so long as the high-risk AI system they produce makes fewer than a specified number of consequential decisions annually. The bill would also exclude from the definition of “developer” those that offer an AI system with open model weights and do not promote the system for use in consequential decisions.

The bill also clarifies the use cases exempted from the definition of “high-risk artificial intelligence system.” It explicitly exempts narrow procedural tools for classifying files, renaming files, extracting metadata, or detecting pattern deviations after a human assessment, and adds office support tools such as schedulers and inventory trackers to the exempted use cases unless they are directly used to make or influence consequential decisions.

Narrowing the Definition of “Algorithmic Discrimination”

The proposed amendments would narrow the definition of “algorithmic discrimination,” which underpins the regulation. Currently, the law defines “algorithmic discrimination” to mean “unlawful differential treatment or impact.” This year’s bill would remove the disparate impact standard and narrowly define “algorithmic discrimination” to mean the use of an AI system that results in a violation of applicable local, state, or federal anti-discrimination laws.

“Reasonable” Duty of Care No Longer Required

Under the current law, developers and deployers of high-risk AI models must use “reasonable care” to protect consumers from any known or “reasonably foreseeable” risks of algorithmic discrimination. Responding to business complaints that this duty of care is too vague, the bill eliminates the “reasonable care” requirement. Correspondingly, the bill would remove the requirement to notify the attorney general if such risks arise. 

“Consequential Decision” Changes

The original law defines “consequential decision” as a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of certain opportunities or services. This bill amends how some of those services are defined, adding specificity to financial and essential government services. It also clarifies that decisions concerning housing only apply if they involve a primary residence. 

Importantly, the bill narrows the scope of what qualifies as a “consequential decision” under the law by stating that obligations would apply only if the AI system is the “principal basis” for the consequential decision. This would potentially create an exemption for AI systems where humans are meaningfully involved in the decision-making process.

Finally, the proposed bill would define what constitutes an “adverse” outcome to include denials or cancellations of services, unfavorable changes to terms, offers on materially worse terms, and refusals on terms less favorable than those given to others. Consumers would only be able to appeal decisions “based on incorrect personal data or unlawful information,” but not if the decision is competitive (such as a job opening) or time-limited.

Additional Obligations

The bill would add some new obligations to the existing law. Impact assessments would need to disclose whether the high-risk AI system poses a risk of limiting accessibility (for breastfeeding or pregnant individuals, or people with disabilities), committing an unfair or deceptive trade practice, or violating labor or privacy laws. Impact assessments would also be required to include a description of the system’s validity and reliability. 

Developers may still avoid disclosure requirements for trade secrets or other sensitive data under the bill, but would have to notify the party that would have received the disclosure and state the reason for the withholding. Developers would have to include documentation of intended inputs and outputs of high-risk AI systems. Deployers would also have to disclose the sources of the data that the AI system used when making a decision, not just the categories of data.  

Colorado As An Outlier

Colorado was the first state to enact a broad algorithmic discrimination law, drawing support from some larger tech companies but prompting concerns from Colorado-based tech firms and smaller startups over vague obligations and onerous compliance costs. Lawmakers held stakeholder meetings late last year and sought to assuage those concerns with new legislation this year. Even so, the newly proposed amendments drew criticism from both consumer advocates and industry leaders.

“Industry got nearly all of the changes it wanted, while public interest groups got only a fraction of what we wanted,” said Matthew Scherer, a senior policy counsel at the Center for Democracy and Technology. However, Scherer acknowledged that the bill preserves some of the core protections, saying “this might be the best we can get right now.”

Business advocates argued that the state is still an outlier in AI regulation, as no other state has passed anything this comprehensive and the federal government has taken a hands-off approach to regulating AI under the Trump Administration. “If passed, this bill will only exacerbate the damage to our reputation as a business-friendly state and our ability to continue to create jobs,” said Bryan Leach, CEO and founder of Ibotta, a digital coupon firm.

The tension between wanting to set guardrails for the emerging technology versus the concern over scaring away industry and being left out of the AI revolution has been a common theme in AI regulation. Colorado lawmakers have about a week left to try to fix a law that most would like to see changed before it goes into effect. Trying to appease all stakeholders before the scheduled May 7 adjournment date will be a daunting challenge. 

Recent Developments

Major Policy Action  

  • Federal: The House approved the Take It Down Act (US S. 146), a measure to prohibit the nonconsensual publication of sexual deepfakes and authentic intimate depictions. The Senate unanimously approved the measure back in February. The bill would also require online platforms to remove such depictions within 48 hours. 

  • California: The California Privacy Protection Agency discussed revised regulations on automated decision-making technology at its board meeting on Thursday. The board has significantly narrowed the regulations in response to industry feedback, eliminating many opt-out provisions and streamlining audit requirements. 

  • Kansas: Gov. Laura Kelly (D) signed a sexual deepfake bill (KS SB 186) into law, adding artificially generated visual depictions to criminal provisions relating to sexual exploitation.

  • Michigan: On Tuesday, a bill (MI HB 4047) to criminalize the sharing or creation of nonconsensual sexual deepfakes passed the House nearly unanimously. The House passed a similar measure last year, only to have it stall out in the Senate, where this bill now heads.

  • Montana: The Legislature passed a number of AI-related bills before adjourning this week. Among the bills headed to Gov. Greg Gianforte (R) are a digital replica bill (MT HB 513), a sexual deepfake bill (MT SB 413), a political deepfake bill (MT SB 25) and a bill (MT HB 178) that prohibits government use of AI for cognitive behavioral manipulation, discriminatory classification, malicious purposes, and public surveillance with some exceptions.  

  • Oklahoma: The Legislature passed a measure (OK HB 1364) to criminalize the dissemination of sexual deepfakes without consent or with the intent to harass, annoy, threaten, alarm, or cause harm. The bill passed both chambers unanimously and now heads to Gov. Kevin Stitt (R), who has five days to sign or veto the measure.

  • Tennessee: Gov. Bill Lee (R) signed a bill (TN SB 741) making it unlawful to create or possess technology designed to create sexual deepfakes of minors. Additionally, lawmakers passed the Preventing Deepfake Images Act (TN SB 1346) that would create a civil action for a person depicted in a sexual deepfake without consent, and add criminal provisions for the disclosure of a sexual deepfake with the intent to harm. 

  • Texas: The Senate passed a bill (TX SB 1964) that would create a framework for the ethical and transparent use of AI by state and local governments after it was amended in committee last week. Meanwhile, the House approved a political deepfake bill (TX HB 366) that would prohibit artificially modified political advertising without a disclosure.  

Notable Proposals 

  • California: The city of San Diego became the latest city to prohibit the use of artificial intelligence-driven software to set rental housing prices. Berkeley, Minneapolis, Philadelphia, and San Francisco have already banned the use of such programs. We covered these and similar state-level bills earlier this year.  

  • Pennsylvania: A bipartisan group of senators has introduced a bill (PA SB 649) that would create a crime of digital forgery if a person knowingly or intentionally creates and distributes a fake digital likeness with the intent to defraud or cause harm.
