States Rush to Criminalize AI-Powered Fraud

Key highlights this week:

  • We’re tracking 953 bills in 49 states related to AI during the 2025 legislative session.

  • After signing three AI bills into law last week, Utah Gov. Cox signed two more this week.

  • Lawmakers in Montana and North Dakota made major progress on sexual deepfake bills.

  • And the House has passed a political deepfake bill in Georgia.

Image generators were some of the first AI models to make a big splash with consumers in 2022. Next came the text-based chatbots that are ubiquitous today. But it wasn’t until last month that those two capabilities came together, when both Google and OpenAI released multimodal image generation in their flagship AI models. Previously, image generation and text generation were handled by separate models: when you asked a chatbot to make you an image, it would simply write its own text prompt and feed it into a separate image-generating AI model. Relying on those less advanced image generators produced so-so images with some telltale signs of AI creation.

However, with the recent release of multimodal image generation in GPT-4o and Gemini Flash, the frontier models can now create those images directly. The result is much higher-quality images that can be adjusted and edited with simple prompts (something the standalone image-generating models struggled with). This is all a long-winded way of saying: AI images are about to get much better, and they’ll be everywhere (see, e.g., the Studio Ghibli-style image craze). As this improving technology expands beyond images to video and voice, it is ripe for exploitation by fraudsters.

Scene: A finance worker receives a message from his employer’s CFO requesting a large, confidential transaction. The message seems suspicious, but the CFO sets up a video conference call. Upon recognizing the CFO, who appears alongside other executives, the finance worker is satisfied the request is legitimate and proceeds to make 15 transactions, depositing around $25 million into accounts at the CFO’s direction.

It turns out the “CFO” was, in fact, a scammer who used deepfake technology to deceive the employee. The UK-based engineering firm Arup was one of the most visible early victims of fraudulent deepfakes, but it certainly won’t be the last. Deloitte’s Center for Financial Services estimates that generative AI-enabled fraud caused $12.3 billion in losses in 2023 and could reach $40 billion by 2027. And it's not just large firms that are targeted: many consumers are duped by fraudulent schemes that appear to be endorsed by celebrities, when in reality the endorsement is a deepfake.

What have state lawmakers done to combat the increase in deepfake fraud? Most legislative efforts to regulate deepfakes have focused on political ads and sexual content. While fraud is already illegal, states are working to strengthen penalties and close loopholes to better prosecute deepfake-enabled crimes. Most simply, states can clarify that fraud conducted using AI is still fraud. Language in Utah’s AI Policy Act states that the use of an AI system is not a defense for violating the state’s current consumer protection laws.

New Hampshire was among the first states to criminalize deepfake fraud specifically with legislation (NH HB 1432) passed last summer. The law, which took effect on January 1, makes it a Class B felony to knowingly create, distribute, or present a deepfake of an identifiable individual with the intent to embarrass, harass, entrap, defame, extort, or otherwise cause financial or reputational harm to that person.

More recently, New Jersey Gov. Phil Murphy (D) signed a bill (NJ AB 3540) into law this week, making it a third-degree crime to create a deepfake to further the commission of a crime. The measure was supported by a high school student who was the victim of sexual deepfakes. The bill originally passed the legislature earlier this year but was conditionally vetoed over constitutional concerns, then amended to address those issues.

Last month, Virginia Gov. Glenn Youngkin (R) signed a similar measure (VA HB 2124), making it a Class 1 misdemeanor for any person to use any synthetic digital content for the purpose of committing any criminal offense involving fraud. The bill would also expand the applicability of provisions related to defamation, slander, and libel to include synthetic digital content. However, the law requires re-enactment by the General Assembly in 2026 before taking effect.

Twelve other states have introduced legislation this session to address fraudulent deepfakes. The Arizona Senate has passed a bill (AZ SB 1295) that would criminalize the use of a computer-generated voice recording, image, or video of another person with the intent to defraud or harass others. Texas lawmakers have introduced a flurry of AI-related legislation, including a measure (TX SB 2373) to create civil liabilities for disseminating deepfakes for the purpose of financial exploitation. An Ohio bill introduced this week (OH SB 163) would make it unlawful to create a digital replica of another to induce a person to make a financial decision or extend credit based on the replica, or to use the digital replica to damage a person's reputation.

As the technology continues to evolve, new legal frameworks will be needed to keep pace. A few states have taken early steps to protect against deepfake fraud, but because scammers operate from around the world, a federal law may ultimately be required. In the meantime, businesses and consumers should remain vigilant about the potential for deepfake-enabled fraud.

Recent Developments

Major Policy Action 

  • Georgia: The House overwhelmingly approved legislation (GA SB 9) that would regulate the use of deepfakes in political campaigns, requiring a disclosure for such ads within 90 days of an election. The bill heads back to the Senate, which must approve changes made in the House.

  • Montana: The legislature approved a bill (MT HB 82) that would add computer-generated child pornography to provisions prohibiting child pornography and sexual abuse. If the governor signs the bill, Montana would join 20 other states that have enacted similar laws. Lawmakers also approved a bill (MT SB 212) regulating the use of artificial intelligence to control critical infrastructure.

  • North Dakota: The Senate passed two deepfake bills: one (ND HB 1351) would create a civil action for harms from the creation or distribution of sexual images, even if computer-generated, and the other (ND HB 1386) would add computer-generated images to the state's child pornography laws. Both bills now return to the House for approval of the changes before heading to Gov. Kelly Armstrong (R).

  • Utah: Gov. Spencer Cox (R) signed two more AI-related bills into law last week. UT SB 226 amends the state's existing AI laws so that disclosures of generative AI use by regulated occupations need only be made for “high-risk” interactions. The bill, which goes into effect on May 7, also requires a solicitor using generative AI to disclose that use when asked by a consumer and applies consumer protection laws to an individual using generative AI. Gov. Cox also signed UT SB 271 into law, protecting digital replicas from unauthorized use.

Notable Proposals 

  • California: Last week, CA SB 813 was gutted and amended with language that would direct the Attorney General to designate entities as “multistakeholder regulatory organizations” to audit and certify artificial intelligence models and applications. The bill would create an affirmative defense for models certified by such an organization.

  • New York: Sen. Andrew Gounardes (D) introduced three bills last week related to AI, all of which are companions to Assembly bills. NY SB 6953 would regulate frontier models, NY SB 6954 would require generative AI models to apply provenance data, and NY SB 6955 would require disclosure of the datasets used to train publicly available models.

  • Pennsylvania: On Monday, Rep. Steven Malagari (D) introduced a bill (PA HB 1063) that would prohibit the use of "Grinch Bots" to buy products online in bulk for resale rather than personal use. The measure is aimed in part at ticket scalpers, and a similar bill in the Senate (PA SB 355) has bipartisan support.
