State AI Legislation in 2024 Targets High-Risk Systems and Algorithmic Bias (AI Bill of Rights and the Biden Blueprint Framework)

Weekly Update, Vol. 11.

Key Takeaways

  • In 2024, state AI legislation is expanding beyond narrow use cases toward comprehensive AI regulation modeled after the Biden Administration's Blueprint for an AI Bill of Rights, which establishes five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.

  • New York's proposed bills (AB 8129/SB 8209) incorporate all five of the AI Bill of Rights principles and would require pre-deployment testing and ongoing monitoring, and would give residents the right to opt out of AI systems in favor of human interaction.

  • Virginia HB 747 focuses on regulating high-risk AI systems, meaning tools that make consequential decisions affecting credit, criminal justice, education, employment, healthcare, housing, or insurance, and would require developers to implement risk management programs and disclose their measures to protect against algorithmic discrimination.

  • Vermont HB 711 takes a similar approach to regulating "inherently dangerous" AI systems but notably includes a private right of action for consumers, allowing individuals to sue for violations.

  • The bills reveal two distinct regulatory approaches: hands-on frameworks with detailed developer requirements and enforcement mechanisms versus hands-off models that establish consumer rights without providing recourse for violations.

  • If you're a subscriber, click here for the full edition of this update. Or, click here to learn more about our MultiState.ai+ subscription.


States have largely focused on regulating specific instances of AI use (e.g., sexual or political deepfakes) and held off on more comprehensive regulatory action last year. The closest we've seen is the draft automated decision-making regulations in California that were sent back to the drawing board. But after studying the issue closely, lawmakers are prepared to expand the legislative debate in 2024 and take aim at AI more broadly. We expect to see the highly anticipated legislative framework from Connecticut lawmakers as they gavel into session next month, but this week lawmakers in New York, Virginia, Vermont, and Oklahoma introduced bills taking inspiration from President Biden's Blueprint for an AI Bill of Rights.

Biden's Blueprint for an AI Bill of Rights Framework

In October 2022, the Biden Administration released a Blueprint for an AI Bill of Rights, intended to guide the design, use, and deployment of AI systems to protect the public. It establishes five key principles:

  1. safe and effective systems;
  2. algorithmic discrimination protections;
  3. data privacy;
  4. notice and explanation; and
  5. human alternatives, consideration, and fallback (e.g., opt-outs).

As states consider AI legislation this year, we've noticed that many of the major comprehensive bills have taken up the Blueprint's five principles as a framework.

State AI Legislation Inspired by Federal Principles

New York's Comprehensive AI Rights Legislation

Lawmakers in New York introduced a pair of bills (NY AB 8129/SB 8209) that rely heavily on the Blueprint, clearly incorporating all five of its principles. The bills give New Yorkers several rights, including the right to be free from unsafe or ineffective AI systems (principle #1). To achieve this, the bills require pre-deployment and ongoing testing of AI systems, proactive and continuous measures to protect against algorithmic discrimination, and independent evaluations of AI systems (principle #2). The bills seek to protect residents from abusive data practices through built-in protections and design choices that include privacy protections by default (principle #3). Additionally, the bills would require that residents be notified when an AI system is in use (principle #4) and be given the option to opt out of using AI systems in favor of interacting with a human (principle #5).

Virginia's High-Risk AI System Regulations

In Virginia, lawmakers introduced a bill (VA HB 747) to regulate generative and "high-risk" AI systems. The bill leans heavily on principles 1 (safe and effective systems), 2 (algorithmic discrimination protections), and 4 (notice and explanation). Under the Virginia legislation, developers of high-risk AI systems would be required to provide a statement that describes the intended uses of the system, discloses the system's known limitations, and lists any measures the developer has taken to mitigate the risk of algorithmic discrimination. Additionally, the bill would require a developer deploying a high-risk AI system to implement a risk management program to identify, mitigate, and document any risk of algorithmic discrimination. For generative AI systems, the bill requires steps to mitigate any reasonably foreseeable risks and the completion of an impact assessment.

An important aspect of the legislation to flag is which AI systems would qualify as "high risk" under the Virginia bill: any AI system specifically intended to autonomously make, or be a controlling factor in making, a "consequential decision," meaning any decision with a material legal, or similarly significant, effect on a consumer's access to credit, criminal justice, education, employment, healthcare, housing, or insurance.

Vermont's Inherently Dangerous AI Systems Bill

Lawmakers in Vermont have introduced a similar bill (VT HB 711) seeking to regulate "inherently dangerous" AI systems, which are defined as high-risk AI systems, dual-use foundational models, or generative AI systems. Under the bill, developers of inherently dangerous AI systems must submit safety and impact assessments and reliability testing results at regular intervals. Notably, the Vermont bill establishes a private right of action for consumers, an issue that has been hotly contested in comprehensive state privacy legislation and remains one of the key factors stalling the movement for a federal privacy law.

Oklahoma's Rights-Based Approach Without Enforcement

While Democratic-trifecta New York has focused on an AI Bill of Rights-inspired bill with some teeth, the Virginia and Vermont approaches come from lawmakers in politically divided states and will thus require bipartisan cooperation for enactment. On the other end, a Republican lawmaker in GOP-controlled Oklahoma has introduced legislation (OK HB 3453) spelling out consumer rights for AI systems, which similarly checks many of the Blueprint's principles — providing a right to know when AI is being used, a right to opt out, a right to know when your data is being used for an AI model, and a right to know when contracts, documents, images, or text are generated by AI — but provides no recourse for violations of any of the rights established in the bill.

Two Distinct Approaches to AI Regulation Emerge

These bills illustrate two approaches to comprehensive AI regulation. The first is a hands-on approach, requiring monitoring of AI systems before and after their deployment and establishing a robust framework that gives developers and consumers some ground rules for the development, deployment, and use of AI systems. The second is a more hands-off approach that establishes rights, but without mandating detailed procedures for developers or providing recourse to consumers for violations. We'll be keeping a close eye on whether these approaches converge or diverge over the legislative session.

