States Broaden the Scope of AI Regulation

States have largely focused on regulating specific uses of AI (e.g., sexual or political deepfakes) and held off on more comprehensive regulatory action last year. The closest we’ve seen is the draft automated decision-making regulations in California, which were sent back to the drawing board. But after studying the issue closely, lawmakers are prepared to expand the legislative debate in 2024 and take aim at AI more broadly. We expect to see the highly anticipated legislative framework from Connecticut lawmakers when they gavel into session next month, and lawmakers in New York, Virginia, Vermont, and Oklahoma introduced bills this week taking inspiration from President Biden’s Blueprint for an AI Bill of Rights.

In October 2022, the Biden Administration released the Blueprint for an AI Bill of Rights, intended to help guide the design, use, and deployment of AI systems to protect the public. The Blueprint established five key principles:

  1. safe and effective systems;

  2. algorithmic discrimination protections; 

  3. data privacy;

  4. notice and explanation; and

  5. human alternatives, consideration, and fallback (e.g., opt-outs).

As states consider AI legislation this year, we’ve noticed that many of the major comprehensive bills have taken up the Blueprint’s five principles as a framework. 

Lawmakers in New York introduced a pair of bills (NY AB 8129/SB 8209) that rely heavily on the Blueprint, clearly incorporating all five of its principles. The bills give New Yorkers several rights, including the right to be free from unsafe or ineffective AI systems (principle #1). To achieve this, the bills require pre-deployment and ongoing testing of AI systems, proactive and continuous measures to protect against algorithmic discrimination, and independent evaluations of AI systems (principle #2). The bills seek to protect residents from abusive data practices through built-in protections and design choices that include privacy protections by default (principle #3). Additionally, the bills would require notice to residents when an AI system is in use (principle #4) and give residents the option to opt out of AI systems in favor of interacting with a human (principle #5).

In Virginia, lawmakers introduced a bill (VA HB 747) to regulate generative and “high-risk” AI systems. The bill leans heavily on principles 1 (safe and effective systems), 2 (algorithmic discrimination protections), and 4 (notice and explanation). Under the Virginia legislation, developers of high-risk AI systems would be required to provide a statement that describes the intended uses of the system, discloses the system's known limitations, and lists any measures the developer has taken to mitigate the risk of algorithmic discrimination. Additionally, the bill would require a developer deploying a high-risk AI system to implement a risk management program to identify, mitigate, and document any risk of algorithmic discrimination. For generative AI systems, the bill requires steps to mitigate any reasonably foreseeable risks and the completion of an impact assessment.

An important aspect of the legislation to flag is which AI systems would qualify as “high-risk” under the Virginia bill: any AI system specifically intended to autonomously make, or be a controlling factor in making, a “consequential decision,” meaning any decision that has a material legal, or similarly significant, effect on a consumer’s access to credit, criminal justice, education, employment, healthcare, housing, or insurance.

Lawmakers in Vermont have introduced a similar bill (VT HB 711) seeking to regulate “inherently dangerous” AI systems, which are defined as high-risk AI systems, dual-use foundational models, or generative AI systems. Under the bill, developers of inherently dangerous AI systems must submit safety and impact assessments and reliability testing results at regular intervals. Notably, the Vermont bill establishes a private right of action for consumers, an issue that has been hotly contested in comprehensive state privacy legislation and remains one of the key factors stalling the movement for a federal privacy law. 

While New York, with its Democratic trifecta, has focused on an AI Bill of Rights-inspired bill with some teeth, the approaches in Virginia and Vermont come from lawmakers in politically divided states and will therefore require bipartisan cooperation for enactment. At the other end of the spectrum, a Republican lawmaker in GOP-controlled Oklahoma has introduced legislation (OK HB 3453) spelling out consumer rights for AI systems. The bill similarly checks many of the Blueprint’s boxes, providing a right to know when AI is being used, a right to opt out, a right to know when your data is being used for an AI model, and a right to know when contracts, documents, images, or text are generated by AI, but it provides no recourse for violations of any of the rights it establishes.

These bills illustrate two approaches to comprehensive AI regulation. The first is a hands-on approach, requiring monitoring of AI systems before and after their deployment and establishing a robust framework that gives developers and consumers some ground rules for the development, deployment, and use of AI systems. The second is a more hands-off approach that establishes rights, but without mandating detailed procedures for developers or providing recourse to consumers for violations. We’ll be keeping a close eye on whether these approaches converge or diverge over the legislative session. 


Recent Policy Developments

  • Political Deepfakes: The 2024 presidential election saw its first major deepfake incident after voters in New Hampshire received phone calls on Tuesday that used AI voice-generation technology to impersonate President Biden and urge them not to vote in that day’s primary election. We anticipated this issue last year, when five states had enacted restrictions on electoral deepfakes, and state lawmakers have already introduced 47 bills on the issue in the 2024 legislative sessions. Bookmark our deepfakes in electoral campaigns resource page to keep up with the latest news on this issue.

  • California: The city of San Francisco filed a lawsuit against a state commission that allowed autonomous vehicles (AVs) to expand operations in the city. It’s the latest move in a brewing feud among state agencies, localities, and AV operators. AVs have been less of a focus for AI policy since the emergence of LLMs, but the technology has progressed significantly since it was first hyped around a decade ago. We examine the current state of play for AV regulations here.

  • Connecticut: The Artificial Intelligence Task Force met on Tuesday with plans to meet next week to finalize a list of items to put up for a vote. The group, chaired by Senator James Maroney (D), plans to propose legislation for the upcoming session to enhance transparency and accountability, criminalize AI-generated porn, and provide AI-training programs. 

  • Delaware: Lawmakers have proposed a bill (DE HB 333) creating an Artificial Intelligence Commission to conduct an inventory of AI use in state agencies and develop policies for safe use. Bill sponsor Rep. Krista Griffith (D) adds that subcommittees could be created to study AI usage in industry and develop guidelines to protect individual rights.

  • Ohio: Attorney General Dave Yost (R) proposed a bill (OH SB 217) that would require watermarks on all AI-generated content and prohibit using a replica of someone’s persona for malicious purposes. The bill also extends child pornography laws to cover AI-generated content and requires platforms, websites, and internet providers to remove child pornography when notified by the attorney general.

  • Virginia: Governor Youngkin (R) signed an executive order last week that directs the Virginia Information Technologies Agency to publish AI Policy Standards for use by state agencies and their suppliers. The order also establishes education guidelines to support K-12 schools and postsecondary institutions, requires the development of law enforcement standards, and establishes an AI task force.

  • Washington: A bill (WA HB 1951) introduced this month that would require impact assessments and prohibit algorithmic discrimination is facing opposition from both industry and civil rights groups. Industry groups argue that existing laws already prohibit discrimination, while civil rights groups argue the protections do not go far enough, noting that the bill lacks consumer opt-outs and does not require the assessments to be disclosed.
