States Go to School on Artificial Intelligence

After years of hype, AI technologies have burst onto the scene with consumer products like AI chatbots (e.g., ChatGPT, Bing AI, Google Bard) and image generators (e.g., DALL-E, Midjourney). As AI advances at a record pace, policymakers (like the rest of us) are trying to wrap their heads around what this technology could mean for our future.

From a public policy standpoint, we’re solidly in the education phase of AI regulation. We’re all scrambling to get up to speed on this emerging technology. For policymakers, this means study committees, task forces, and working groups dedicated to hearing from experts and stakeholders with the goal of developing recommendations on how best to regulate AI at this stage in the technology’s development. 

And that’s largely what we’ve seen this year as lawmakers gear up for a busy 2024 legislative session. You’ll see in the map below that eight states have created a dedicated group tasked with studying AI, while another 11 states have asked a standing committee or state agency to take the lead. Additionally, state lawmakers have come together through organizations like NCSL to create interstate conversations and are even organizing such groups on their own. Expect AI to be on every upcoming meeting agenda.

A handful of these groups have already begun meeting. So what questions are they asking? Those questions, and what the groups are hearing from experts, could offer big clues about how policymakers plan to regulate AI in the near term.

In July, lawmakers on a New Mexico committee heard how AI could enable the deceptive spread of misinformation that harms people and institutions. Echoing many initial reactions, lawmakers in Albuquerque called the rise of AI technology “scary” and declared that “this isn’t science fiction anymore.” Testimony at the hearing emphasized that legislation should focus on transparency and disclosure requirements for AI companies, and that lawmakers have an opportunity to hold accountable those who create and spread misinformation, specifically those making false political claims during an election campaign.

An initial focus on transparency and disclosures is a logical starting point for lawmakers. Since we’re still in the early stages of AI’s technological development, commercial deployment, and regulatory oversight, transparency and disclosure requirements could provide a fuller understanding of how AI is being used and who it’s affecting. One example of this type of legislation from this year’s legislative sessions is California AB 331, which would mandate annual impact assessments and require notification of any person subject to a “consequential” decision made by an AI tool. A key aspect of this bill is its private right of action, which would allow California residents to sue developers whose AI tools contribute to “algorithmic discrimination.”

But this is only one of many avenues policymakers will explore when regulating AI. Others include weeding out potential biases in AI systems; combating misinformation, deepfakes, spam, and fraud; protecting consumers and personal privacy; and deciding whether all algorithmic software belongs in the AI bucket. Our team at multistate.ai will explore each of these potential regulatory avenues and the individual bill language debated around the country. Please sign up here for the multistate.us update, delivered directly to your inbox.
