Congress Follows the States in Criminalizing Sexual Deepfakes

Key highlights this week:

  • We’re tracking 1,023 AI-related bills across all 50 states during the 2025 legislative session.

  • The Texas Senate approved a pared-back version of a comprehensive AI regulatory measure, which is primarily aimed at regulating government use of AI. 

  • All eyes are on the host of AI bills moving through the California legislature before next Friday’s crossover deadline for a bill to pass its chamber of origin. 

  • And Congress has followed the states’ lead in enacting a sexual deepfake bill signed into law by President Trump, which is the topic of this week’s deep dive. 

Last week, Google upped the generative AI game by releasing Veo 3, a model that creates short (8-second) videos with sound based on your prompt. The key advance is the addition of voice to the video, which spurred a round of viral creativity among social media users (see, e.g., Prompt Theory). As image, video, and voice generation technology progresses, more emphasis will be placed on potential abuses, namely deepfakes. State lawmakers have already responded by enacting over 100 laws related to deepfakes, by far the most popular type of AI-related regulation across the country. And this month, Congress joined the party when President Trump signed the TAKE IT DOWN Act into federal law. 

The 116 deepfake laws enacted at the state level mostly divide into three main buckets: sexual deepfakes, political deepfakes, and fraud. The TAKE IT DOWN Act (US S. 146), signed into law by President Trump on May 19, 2025, falls into the sexual deepfake category. The law, whose name is an acronym for “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act,” has two primary purposes: (1) criminalizing the publication of nonconsensual sexual deepfakes and (2) requiring online platforms to establish a notice and takedown process when a nonconsensual sexual deepfake is reported.

Criminalizing Nonconsensual Sexual Deepfakes 

The TAKE IT DOWN Act’s first main requirement, criminalizing the publication of nonconsensual sexual deepfakes, is paralleled by similar laws enacted in 31 states. The new federal law, which took effect immediately, makes it a crime to knowingly publish nonconsensual intimate visual depictions of adults when the depiction was obtained under circumstances where privacy was expected, was not voluntarily exposed in a public or commercial setting, is not a matter of public concern, and was published with the intent to cause harm or with the effect of causing psychological, financial, or reputational harm. Violators face up to two years of imprisonment. 

The law also criminalizes knowingly publishing intimate visual depictions of minors with the intent to abuse, humiliate, harass, degrade, or sexually gratify, with penalties of up to three years of imprisonment. This follows the 26 states that have passed laws prohibiting deepfake child sexual abuse material (CSAM). 

Finally, the federal law criminalizes intentionally threatening to publish such depictions "for the purpose of intimidation, coercion, extortion, or to create mental distress," conduct often described as sextortion, with penalties of up to 18 months of imprisonment for offenses involving adult victims and 30 months for minor victims.

Notably, these provisions include an exception that allows a person to possess or publish a deepfake of themselves engaged in nudity or sexually explicit conduct. And that person can share the deepfake with other individuals, but "the fact that the identifiable individual disclosed the intimate visual depiction to another individual shall not establish that the identifiable individual provided consent" for its publication.


Platforms Must Implement Notice & Takedown Process

The second major requirement under the TAKE IT DOWN Act is for “covered platforms” to establish, within one year, a notice and takedown process allowing individuals to request the removal of nonconsensual intimate visual depictions, which platforms must take down within 48 hours of receiving a valid request. The Federal Trade Commission (FTC) will enforce this mandate. While the act’s criminal provisions outlined above took effect immediately, the notice and takedown requirements on platforms will not take effect until May 19, 2026 (one year after enactment). 

A covered platform is defined as “a website, online service, online application, or mobile application” that serves the public and either “primarily provides a forum for user-generated content” or publishes, curates, hosts, or makes available nonconsensual intimate visual depictions in the regular course of trade or business. The definition excludes broadband ISPs, email services, and websites or apps that primarily display pre-selected content (e.g., news sites, streaming services, or online magazines), as long as any interactive feature (e.g., a comment section) is a minor add-on to the main content of the site. 

A removal request must include a physical or electronic signature, sufficient information to locate the depiction, a good-faith statement that the depiction is nonconsensual, and contact information for the requester. The platform then has 48 hours after receiving the request to remove the reported depiction and make reasonable efforts to identify and remove identical copies.

While the TAKE IT DOWN Act’s first major requirement, criminalizing the publication of nonconsensual sexual deepfakes, has many parallels in state law, notice and takedown requirements are less prevalent. Last year, California Gov. Newsom (D) signed into law (CA SB 981) a requirement that social media platforms provide reporting mechanisms for California residents to flag sexually explicit deepfake content and remove such content after determining it was created without consent. Lawmakers have debated similar bills in several other states (e.g., TX HB 3133, FL HB 1161, PA SB 568, MS SB 2437).


State Deepfake Laws

Already this year, state lawmakers have enacted 38 laws related to deepfakes. Last year, lawmakers in 27 states enacted 58 deepfake laws, and in 2023, states enacted 14. These laws date back to 2019, when Virginia (VA HB 2678) and California (CA AB 602) added nonconsensual sexual deepfakes to existing “revenge porn” laws. That same year, California (CA AB 730) and Texas (TX SB 751) enacted laws prohibiting the use of deepfakes to influence political campaigns. In 2021, Hawaii and Georgia enacted sexual deepfake laws, before the trend took off in 2023. Today, states have enacted a total of 116 deepfake-related laws, a major portion of which deals with sexual deepfakes. 

This is a classic case of Congress building on the work of state lawmakers, using their laws as a blueprint for national legislation on a broadly agreed-upon subject to address potential abuses of generative AI technology. In what other respects will Congress follow the states’ lead on regulating AI? 

Recent Developments

Major Policy Action  

  • Texas: The Senate approved TX HB 149, a pared-back version of a comprehensive AI regulatory measure. The proposal focuses on prohibited uses of AI, such as discrimination, social scoring, encouraging harm, or sexual deepfakes. The bill now returns to the House, which must approve the Senate’s changes before it heads to Gov. Greg Abbott (R).

  • California: Several bills failed to move out of the “suspense file” on Friday, effectively blocking them for this session, including CA AB 1221 on workplace surveillance. However, several other bills survived by passing out of the Appropriations Committee, including a bill (CA AB 1018) to regulate automated decision systems, although we are hearing it will be combined with other AI bills later this summer. Keep an eye on California bills as the crossover deadline for bills to pass their legislative chamber of origin approaches next Friday (June 6). 

  • Illinois: Both chambers approved a measure (IL HB 3178) to apply the Digital Voice and Likeness Protection Act only to agreements for new performances entered into after January 1, 2026, and to clarify when agreements are enforceable. The law was originally set to apply to agreements made after its effective date of August 9, 2024.

  • Kentucky: General Assembly leaders announced five task forces for the interim session, including a return of the Artificial Intelligence Task Force. Last year, the task force released a report with 11 legislative recommendations. This year, the legislature passed KY SB 4, which banned political deepfakes and regulated state government use of AI.  

  • Nevada: The legislature approved and enrolled a bill (NV AB 325) that would prohibit a public utility from using artificial intelligence to make a final decision regarding whether to reduce or shut down utility service in response to a disaster or emergency.

  • New Hampshire: A House committee approved a bill (NH SB 263) regulating AI programs and chatbots used by children. The committee had earlier amended the bill to strip a private right of action, leaving enforcement to the attorney general, and to provide a 90-day right to cure.

  • New York: The Senate approved a bill (NY S 933) to establish the Office of Artificial Intelligence to develop AI policies and governance for state use. Another bill approved by the Senate last week (NY S 3699) would create a task force on facial recognition technology. 

Notable Proposals 

  • New York: Assemblymember Steven Otis (D) introduced a bill last week (NY A 8595) that would require generative AI developers to publish on their websites detailed information about any video, audio, text, or data from covered publications used to train their models. Otis chairs the Assembly Committee on Science and Technology and has expressed support for a package of AI legislation sitting in the New York Legislature.

  • Pennsylvania: Rep. Rick Krajewski (D) introduced a bill (PA HB 140) that would prohibit the use of algorithmic rent-setting software to determine rent amounts. 

Next

Connecticut’s AI Bill Sheds Major Mandates