State AI Legislation: Six Key Policy Areas Taking Shape This Year
Key Takeaways
In 2023, over 160 state AI bills were introduced across the country, though most failed to advance past the committee stage as lawmakers worked to balance innovation with consumer protection.
Facial recognition technology laws are evolving as states like Montana and California move to restrict law enforcement use, while others focus on requiring retailers to disclose their use of the technology to customers.
States are pursuing AI bias protections, including New York City's hiring audit requirements and proposed legislation in California, Massachusetts, and other states to prevent algorithms from discriminating based on protected classes in housing, employment, and credit decisions.
Deepfake legislation is emerging in three areas: nonconsensual sexual content (with Minnesota and Illinois creating penalties), political advertising (with Washington and Minnesota restricting election interference), and general fraud prevention, including requirements to disclose AI-generated content.
State AI policy bills also address social media algorithms, government AI inventories, and the creation of study groups, with Connecticut and Texas requiring agencies to catalog their AI systems and multiple states forming task forces to develop regulatory recommendations.
If you're a subscriber, click here for the full edition of this update. Or, click here to learn more about our MultiState.ai+ subscription.
The purpose of multistate.ai is to provide timely updates and deep dives on how state and local governments are regulating artificial intelligence (AI) technologies. But first, it's useful to take a step back and analyze the big picture of what legislative actions state lawmakers have introduced related to AI so far this year. Despite President Biden's recent AI Executive Order, we anticipate state legislatures will be the primary battleground for substantive AI regulation in 2024 and beyond.
States have only begun to dip their toes into regulating AI, and the effort is shaping up to be a bipartisan affair. State Senator Albers (R), while co-chairing a hearing on AI in Georgia last week, said, "We do not want to stifle innovation here. But we want to establish guardrails to protect Georgians."
State AI Legislation Overview
In total, state lawmakers have introduced over 160 bills related to AI this year. Notably, the vast majority of this legislation failed to move past the committee stage of the legislative process. While a few of these bills focused on generative AI models, many were intended simply to draw attention to the emerging issue, like Rep. Jake Auchincloss's bill in Massachusetts that was drafted using ChatGPT. Most of the bills considered this year can be organized into six categories:
- Facial Recognition Technology
- Protections Against Bias
- Deepfakes
- Social Media Regulation
- AI Use in Government
- Study Groups
Facial Recognition Technology Regulations
Facial recognition technology takes a captured facial image and uses AI to match it against other images in a database to identify a person. The technology has proved very useful for unlocking devices, verifying identities at banks, and identifying criminal suspects. But civil liberties advocates have raised concerns over the infringement of privacy, issues of informed consent, the protection of collected data, and potential bias after studies showed the technology is less accurate for women and people of color.
Law Enforcement Use and Restrictions
In 2019, San Francisco became the first major city to ban the use of facial recognition technology by police and government agencies, and many cities followed suit, as did the states of Vermont and Virginia. But as crime rose, many jurisdictions began to claw back those prohibitions. Virginia lifted its ban, opting instead to regulate law enforcement's use of the technology, and Vermont carved out an exception for cases involving children.
Nonetheless, concerns persist. Montana passed a law this year prohibiting continuous facial surveillance by law enforcement and limiting the technology's use to certain crimes where a warrant has been obtained. California lawmakers are considering a bill that would prohibit the use of facial recognition technology on video captured by police body cameras.
Private Sector and Retail Applications
There have even been a few bills introduced that would apply to the private sector, particularly retailers. Bills in Connecticut and Texas would require disclosure to customers of the use of facial recognition technology, while a bill in New Jersey would have banned retailers from using the technology except for a "legitimate safety purpose."
AI Bias and Discrimination Protections
Facial recognition technology has had accuracy issues with women and people of color in part because the systems were trained on datasets derived largely from white males. But systemic biases can affect other artificial intelligence tools as well. Algorithms and AI tools are being used to determine eligibility for housing, employment, health insurance coverage, and creditworthiness, and some of these tools may have a disparate impact on certain communities.
Policymakers have sought to protect these communities against AI biases. New York City recently passed a law regulating the use of AI in hiring and requiring regular bias audits. Other cities and states have contemplated similar proposals. Illinois lawmakers considered a bill to prohibit predictive data analytics in employment decisions from considering an applicant's race or using zip code as a proxy for race. Bills in California, Massachusetts, New York, Rhode Island, and the District of Columbia would prohibit algorithms from discriminating based on protected classes and, in some cases, require an impact assessment to determine the risk of discrimination.
Deepfake Legislation and Regulations
New AI tools have enabled users to generate remarkable works of art from simple lines of text. Unfortunately, this also allows anyone to create completely fake images and even videos of events that never happened. Lawmakers introduced 36 bills this year to regulate what are known as "deepfakes," and those bills fall into three categories.
Nonconsensual Sexual Content and Revenge Porn
First, states considered legislation providing civil and criminal penalties for the dissemination of nonconsensual, digitally altered sexual images. Minnesota created a cause of action for victims of digitally altered sexual content and criminal penalties of up to three years in prison. Illinois passed a bill that amended its existing "revenge porn" law to include digitally altered images. Texas passed a package of child online safety bills that included measures to prohibit AI-altered digital images of minors, and Louisiana also made it a crime to disseminate sexually explicit deepfake content involving minors.
Political Campaign and Election Deepfakes
States are also addressing deepfakes in the political space. Political campaigns have begun to use AI, and while the Federal Election Commission may move toward regulating the use of AI, states will still be left to determine what regulation, if any, applies to AI in state campaigns. The state of Washington has already taken action, providing injunctive and equitable relief for a political candidate whose appearance, action, or speech has been synthetically altered in an electioneering communication. Minnesota's deepfake law also includes criminal penalties of up to five years in prison for using deepfake content to influence an election. However, it is unclear whether these laws would pass constitutional muster under the First Amendment. Michigan lawmakers have introduced a package of bills requiring disclosures for the use of AI in political advertising.
General Fraud and Impersonation Protections
Finally, lawmakers may seek to protect against other fraudulent uses of deepfake technology. A bill in Pennsylvania would make it a first-degree misdemeanor for a person to disseminate an AI-generated impersonation of an individual without consent. Another bill in the Keystone State would require all AI-generated content to be disclosed as such. As AI-generated content continues to proliferate, lawmakers will feel more pressure to regulate its use to ensure the public is protected.
Social Media Algorithm Transparency and Child Safety
Many lawmakers targeted social media platforms this session, with conservatives in particular accusing platforms of censoring, deplatforming, or banning users based on political views. Platforms often use algorithms to prioritize certain content for different users, and some conservatives have accused liberal-leaning developers of deprioritizing conservative content, despite studies showing little evidence of such bias. Bills in Hawaii, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, and West Virginia would have required disclosures about how algorithms prioritize content and, in some cases, allowed users to opt out of algorithmic recommendations.
Social media algorithms have also drawn scrutiny for the content they promote to minors. A North Carolina bill would have required platforms to obtain users' informed consent for algorithmic recommendations and prohibited minors' data from being used in such recommendations. A California bill would prohibit algorithmic features designed to make a platform more addictive to children.
Government AI Accountability and Oversight
States have also looked inward in dealing with AI, with many bills calling for a full accounting of each state's own use of the technology. Connecticut passed a bill requiring a full inventory of AI use by state agencies, along with procedures and policies governing the use and procurement of AI systems. Texas passed a similar bill, creating an Artificial Intelligence Advisory Council to study AI systems used by state agencies and requiring agencies to produce an automated decision systems inventory report by July 1, 2024.
AI Task Forces and Study Committees
Finally, many lawmakers are still trying to fully understand AI, its benefits, and its potential harms. To get up to speed, many have created working groups to study the issue. Illinois lawmakers established the Generative AI and Natural Language Processing Task Force to report on generative artificial intelligence software and natural language processing software. Wisconsin Governor Tony Evers (D) issued an executive order creating a Task Force on Workforce and Artificial Intelligence. Connecticut passed a measure to create a working group on the use of AI in state government, and Senator James Maroney (D) has organized a multi-state working group of lawmakers to study the issue. Several interim legislative committees are also studying the issue in anticipation of next year's session. These groups are dedicated to hearing from experts and stakeholders with the goal of developing recommendations on how best to regulate AI at this stage in the technology's development; for a full listing, view our tracker here.