States Address the Alarming Proliferation of Nonconsensual Sexual Deepfakes

Despite the promise and benefits of AI technology, we’re already seeing real-world harms as AI is used to produce nonconsensual sexual deepfakes: images and videos depicting real individuals in a sexually explicit manner. These images often place the face of an actual person on a naked or partially clothed body that is not their own, and the practice disproportionately targets women. As with deepfakes aimed at electoral candidates, states are moving quickly to combat the alarming proliferation of nonconsensual sexual deepfakes.

While generative AI chatbots are relatively new, nonconsensual sexual deepfakes date back to at least 2017. As AI technology accelerates, the realistic quality of AI-generated images and videos continues to improve, producing images that can easily be mistaken for a real person. The volume of deepfakes is also accelerating. Independent researchers found that in the first nine months of this year, 113,000 deepfake images were uploaded to 35 websites that either exclusively or partially host sexual deepfake videos, a 54% increase over all of 2022. And because this research only counts images found on public websites, there are likely far more images exchanged via text messages and messaging apps that researchers are unable to account for.

The primary targets of sexual deepfake images have been celebrities and social media influencers. However, in alarming recent incidents, nonconsensual sexual deepfakes of female high school students were created and circulated among the student body at their schools. Parents, school officials, and lawmakers expressed outrage at the incidents and called for laws to address sexual deepfakes.

Unlike many other AI-related issues, the rise of nonconsensual sexual deepfakes has drawn swift action from state policymakers. Most states already have statutes prohibiting the sale or distribution of nonconsensual pornographic images, and some argue those laws are broad enough to cover deepfakes in many states. Even so, lawmakers want to amend the statutes to explicitly include deepfakes, and nine states have now enacted legislation directly targeting sexual deepfakes.

In 2019, Virginia became the first state to do so (VA HB 2678) by adding nonconsensual sexual deepfakes to an existing “revenge porn” law. California (CA AB 602) also enacted a sexual deepfake law in 2019 and lawmakers in Hawaii (HI SB 309) and Georgia (GA SB 78) followed suit in 2021. 

The trend continued this year. Illinois enacted legislation (IL HB 2123) establishing a cause of action for individuals whose images were used in a sexual deepfake without their consent, and last week the governor signed another bill (IL SB 382) into law, adding the term “digitally altered sexual image” to the Illinois Remedies for Nonconsensual Dissemination of Private Sexual Images Act. Notably, the laws in California and Illinois give victims the ability to file lawsuits against perpetrators but do not carry criminal penalties. In contrast, Texas (TX SB 1361), New York (NY SB 1042A), and Minnesota (MN HF 1370) enacted legislation this year adding criminal offenses to their deepfake laws. Additionally, Louisiana (LA SB 175) and Texas (TX HB 2700) enacted laws specifically prohibiting the depiction of minors in any sexually explicit deepfake image.

We expect this trend to continue into 2024 as lawmakers build protections for victims whose images are used in sexually explicit deepfakes. Already, Ohio, California, and New Hampshire are set to consider bills banning nonconsensual sexual deepfakes in 2024. And Oklahoma will consider a bill amending its current laws to state that child pornography includes AI-generated images showing a child in a sexually explicit manner. 


Recent Policy Developments

  • California: Last Friday, the California Privacy Protection Agency (CPPA) held a public board meeting where board members criticized the agency’s draft automated decisionmaking technology regulations as so broad that they could cover essentially any technology. As a result, the CPPA Board directed staff to prepare revised drafts that take the board members’ feedback into account. The Board is expected to meet again early next year.

  • Florida: On Tuesday, lawmakers prefiled a bill for next session that would require disclosures for political advertising created in whole or in part by generative AI that depicts a real person performing an act that did not occur. Michigan recently became the fifth state to enact a political deepfake law. Keep track of state laws regulating the use of deepfakes in electoral campaigns with our dedicated issue page on multistate.ai.

  • New Hampshire: The House will consider a trio of bills prefiled for next year’s session relating to deepfakes. One would make it a crime to send nonconsensual sexual deepfakes; another would prohibit deepfakes in political advertising and create civil and criminal penalties for sending deepfakes to cause financial or reputational harm; and a third would require disclosures for political ads with misleading deepfakes.

  • Washington: A proposed bill for next session would establish an artificial intelligence task force to make recommendations to the legislature on standards for the regulation and use of generative AI. The bill is sponsored by Senator Joe Nguyen (D), who chairs the Senate Environment, Energy & Technology Committee and held a work session on AI on December 1. The Attorney General’s office proposed both the bill and its House companion.

  • Federal: On Tuesday, the Government Accountability Office (GAO) published a report reviewing artificial intelligence use by federal agencies. The report says that federal agencies have reported about 1,200 current and planned AI use cases, with NASA reporting the most of any agency (375 use cases). GAO’s analysis identified instances of “incomplete and inaccurate data” and recommended that those agencies fix these issues.
