Deepfakes in Electoral Campaigns

State Lawmakers Address a Direct Threat from AI: Their Own Electoral Campaigns

As lawmakers contemplate their reelection campaigns, one of their most direct concerns, shared by industry leaders, is the use of AI to influence future election results. AI-generated “deepfake” images, audio, and video can manipulate recordings to put words in a candidate’s mouth, alter a candidate’s movements in an embarrassing way, or doctor images to perpetuate false narratives. And in today’s viral social media environment, far more voters are likely to view and share these deepfakes than will ever see the fact-check posts revealing the truth.

Deceptive media produced by AI to influence elections may originate from a variety of sources: individual campaigns, outside groups seeking to oppose or support a candidate or issue, or foreign adversaries seeking to sway an election’s outcome. It’s unclear at this point how much impact deceptive AI deepfakes will actually have on elections, but the threat is being taken especially seriously given social media’s influence on recent contests. A repeated theme we’ve heard from policymakers looking to regulate AI is regret that lawmakers failed to act on social media, coupled with a resolve not to make the same mistake with AI.

And we’re already seeing the early effects of media distortions on elections. In 2020, a manipulated video of then-House Speaker Nancy Pelosi, artificially slowed to make her speech sound slurred, went viral and falsely suggested she was intoxicated during a press conference. This summer, a PAC supporting Florida Governor Ron DeSantis released an ad using AI-generated audio of former President Donald Trump attacking Iowa Governor Kim Reynolds; while the audio was based on posts Trump had made on social media, it did not reflect words the former president actually spoke. During this month’s presidential runoff election in Argentina, AI-generated images of the candidates, both flattering and unflattering, circulated alongside manipulated videos of candidates making statements they never uttered. And on Jan. 23, 2024, voters in New Hampshire received phone calls that used AI voice-generation technology to impersonate President Biden and urge them not to vote in that day’s primary election.

State lawmakers’ concerns over deepfake use in elections date back to 2019, when California (CA AB 730) and Texas (TX SB 751) enacted laws prohibiting the use of deepfakes to influence political campaigns. Lawmakers in seven additional states introduced legislation this year aiming to reduce the harms AI could pose to elections, and three states enacted new laws this year. Minnesota enacted legislation (MN HF 1370/SF 1394) criminalizing the use of deepfake technology to influence an election. Washington enacted a bill (WA SB 5152) requiring disclosure when any manipulated audio or visual media is used in an electioneering communication. Michigan enacted a package of bills aimed at reducing the potential harms of AI in elections: these laws require disclosures (MI HB 5141) for pre-recorded phone messages and political advertisements created with AI, prohibit (MI HB 5144) distributing media that manipulates an individual’s speech or conduct within 90 days of an election without a disclaimer, and establish (MI HB 5145) sentencing guidelines for election law offenses involving deceptive media created with AI.

With the 2024 elections fast approaching, we anticipate that restrictions on political deepfakes will be a major issue at the state level. State lawmakers have already introduced dozens of bills addressing the issue in the early days of the 2024 session. New Mexico (NM HB 182) was the first to enact a new political deepfake law in 2024, followed by Indiana (IN HB 1133), Utah (UT SB 131), Wisconsin (WI AB 664), Idaho (ID HB 664), Oregon (OR SB 1571), New York (NY AB 8808), Florida (FL HB 919), and Mississippi (MS SB 2577). Federal authorities are also paying close attention, and the FEC is contemplating a rule change to regulate AI-generated electoral content. To keep up with this issue, see the map and table below for real-time tracking of state and federal legislation related to political deepfakes, sourced from MultiState’s industry-leading legislative tracking service.