State Laws on Nonconsensual Explicit Deepfakes: 2026 Legislative Tracker
Tracking State Legislation on AI-Generated Explicit Deepfake Images and Videos
As of 2026, over half of U.S. states have enacted laws criminalizing nonconsensual explicit deepfakes, with significant variation in penalties, civil remedies, and applicability to minors. This page tracks state-by-state legislative action for government affairs professionals, compliance teams, and policy researchers monitoring AI-related legislation across all 50 states.
Despite the promise and benefits of AI technology, we're already seeing real-world harms as AI is used to produce nonconsensual explicit deepfake images and videos depicting real individuals. These images often place the face of an actual person on a naked or partially clothed body that is not their own, and they disproportionately target women. Alongside legislation on deepfakes aimed at electoral candidates, states are moving quickly to enact criminal and civil penalties targeting nonconsensual explicit deepfakes.
While generative AI chatbots are relatively new, nonconsensual explicit deepfakes date back to at least 2017. As AI technology advances, the realism of AI-generated images and videos continues to improve. Independent researchers found that in the first nine months of 2023, 113,000 deepfake images were uploaded to 35 websites hosting explicit deepfake content — a 54% increase over all of 2022.
Unlike many AI-related issues, state policymakers have moved quickly. Most states already have statutes prohibiting the distribution of nonconsensual pornographic images, which legislators are amending to explicitly cover AI-generated deepfakes. Key policy questions vary by state: whether violations carry criminal penalties or civil remedies only, whether minors receive specific protections, and how "digitally altered" content is defined in statute.
Tracking all 50 states on AI legislation? See the MultiState AI Legislation Tracker for real-time monitoring of AI bills across all 50 states — including deepfakes, algorithmic accountability, generative AI regulation, and more.
In 2019, Virginia became the first state to legislate on the issue (VA HB 2678) by adding nonconsensual explicit deepfakes to an existing "revenge porn" law. California (CA AB 602) also enacted an explicit deepfake law in 2019, and lawmakers in Hawaii (HI SB 309) and Georgia (GA SB 78) followed suit in 2021.
The trend accelerated in 2023. Illinois enacted legislation (IL HB 2123) establishing a civil cause of action for individuals whose image was used in an explicit deepfake without consent, and later that year enacted IL SB 382, adding "digitally altered explicit image" to Illinois's remedies for nonconsensual dissemination. Notably, the California and Illinois laws provide civil remedies but do not carry criminal penalties. In contrast, Texas (TX SB 1361), New York (NY SB 1042A), and Minnesota (MN HF 1370) created criminal offenses for deepfake violations. Louisiana (LA SB 175) and Texas (TX HB 2700) enacted laws specifically protecting minors from explicit deepfake depictions. In early 2024, South Dakota (SD SB 79) added computer-generated content to its child pornography statutes.
In 2024, lawmakers in 22 states passed 29 laws addressing explicit deepfakes — a significant increase reflecting growing legislative urgency.
Use the map and tracker below for real-time tracking of state and federal legislation related to explicit deepfakes, sourced from MultiState's legislative tracking service.