

Government needs to step up to protect elections from AI-generated deepfake video and audio.

By Michael Waldman, J.D.
President
Brennan Center for Justice
There's something scary online involving Kari Lake, and it's not what you might expect.
The nonprofit journalism site Arizona Agenda has a minute-long video from the TV news anchor turned GOP candidate, praising the site's work . . . and then halfway through, revealing that it is all a deepfake. Watch it here. Especially, watch it on a phone, where the glitches are less noticeable. This is new, and unnerving, and ominous.
It is now less than two years since ChatGPT was released, and the world began to debate how much change advances in generative artificial intelligence would bring. Are they like Gutenberg's Bible, made possible by the new technology of the printing press? Or are they yet another techno-fad, more hype than impact? Over the coming years, all this will unfold with massive repercussions for our work, healthcare, and lives. (A guarantee: The Briefing is written by a live person, and always will be!)
When it comes to elections, it is becoming increasingly clear that the biggest new threat in 2024 comes from the impact of generative AI on the information ecosystem, including through deepfakes like the one starring "Kari Lake." (The real Lake, meanwhile, sent a cease-and-desist letter to the website.) That risk is especially high when it comes to audio, which can be easier to manipulate than visual imagery and harder to detect as fake.
Last year the Slovak presidential election may have been tipped by fake audio of a leading candidate that went viral days before the vote. In New Hampshire, bogus robocalls from "Joe Biden" urged voters to sit out the primary. In Chicago's mayoral election, a fake tape purported to feature a candidate musing, "In my day, no one would bat an eye" if a police officer killed 17 or 18 people. The risk of doctored audio and video makes it harder to know what is real. Donald Trump has taken to decrying any video that makes him look bad as fake.
At the Brennan Center, we worry especially about how all this might affect the nuts and bolts of election administration. Recently we held a "tabletop exercise" with Arizona Secretary of State Adrian Fontes, one of the country's most effective public servants, and other election officials in the state. It featured a similar fake video starring Fontes, created for educational and training purposes. The verisimilitude was so unnerving that the recording was quickly locked away.
Here's a scenario we tested out: You're a local election official. It's a hectic Election Day and you get a call from the secretary of state. "There's been a court order," she says urgently. "You need to shut down an hour early." When local workers receive a call like that, they should take a breath and call the secretary of state's office back. You'll find out quickly that the call was actually a deepfake. That's the kind of simple process that could catch the fraud before it takes root.
Government can take other steps, too. We've laid out many of them in a series of essays with Georgetown's Center for Security and Emerging Technology. Often, officials need to take steps that would already make sense to protect against cyberthreats and other challenges.
There is more that needs to be done. One good step is to watermark AI-generated content, labeling it to make clear that AI was used to create or alter an image. Meta (aka Facebook) proudly unveiled such a system to label all content created with AI tools from major vendors such as OpenAI, Google, and Adobe. My colleague Larry Norden, working with a software expert, showed how easy it is to remove the watermarks from these images and circumvent Meta's labeling scheme. It took less than 30 seconds.
So government will need to step up. Sen. Amy Klobuchar (D) of Minnesota, a leader on election security, is working with Sen. Lisa Murkowski (R) of Alaska and others to craft bills requiring campaign ads that make substantial use of generative AI to be labeled. That requires finesse, since courts will be wary of First Amendment issues. But it can be done. Such reform can't happen fast enough.
After all, as the deepfake Kari Lake put it so well, "By the time the November election rolls around, you'll hardly be able to tell the difference between reality and artificial intelligence." That's . . . intelligent.
Originally published by the Brennan Center for Justice, 03.26.2024, under the terms of a Creative Commons Attribution-No Derivs-NonCommercial license.


