The convention, to which a POLITICO reporter was granted access, wasn’t a gathering of Luddites. Most were clear-eyed about the need to train campaign staff on how to use the technology. But operatives emphasized the need to teach voters how to identify misinformation and disinformation powered by AI.
“Sources of information that essentially work to have true information about the world will go from being nice to have to being absolutely indispensable,” Dennis added. “I hope we’re preparing for that on a platform level. I hope voters are educating themselves.”
Earlier this year, the American Association of Political Consultants’ board of directors unanimously condemned the use of deep fakes in political advertising. “The use of ‘deep fake’ generative [AI] content is a dramatically different and dangerous threat to democracy,” Becki Donatelli, a Republican digital consultant and president of the AAPC, said in a statement at the time.
For the untrained eye, it’s not always easy to tell if an image is fake. Take, for instance, a video the Ron DeSantis campaign posted on Twitter showing seemingly manufactured images of former President Donald Trump hugging Anthony Fauci. There is currently no federal requirement to include a disclaimer in campaign ads when AI is used to create images, though such bills have been introduced in Congress. Washington state recently enacted a law requiring a disclosure when AI is used in campaign ads.
Last month at CampaignTech East, a confab of digital campaign experts, commissioners from the Federal Election Commission offered opinions about regulating AI in campaigns. Democratic Commissioner Shana Broussard suggested that current FEC political ad disclosure rules could be adapted to encompass AI, whereas Republican Commissioner Trey Trainor said he was in favor of “as little regulation as possible.”
Democratic operatives at the Arena meeting said they are skeptical about industry-wide regulation happening before the 2024 elections. Betsy Hoover, co-founder of Higher Ground Labs, a progressive political tech incubator, called it a “strategic move” for Democratic campaigns to take the lead on establishing guidelines when AI is used in voter outreach.
AI can have plenty of practical uses, like automating some of the more tedious and time-consuming tasks required on campaigns. It can fill the gaps for smaller, down-ballot operations that don’t have the resources to, say, generate multiple versions of a graphic or analyze data, Hoover said.
American Bridge’s Dennis said if campaigns are fabricating footage, that could open the door for lawsuits. But he’s less concerned if they’re “using AI to do the same kind of attack stuff they’ve always done, just faster and cheaper.”
But the potential for misinformation and disinformation underscores the need for humans on staff, Hoover argued. Campaigns, she said, should be thinking about how their programs are prepared to respond to a “less stable media environment.”
“People are going to be looking for trusted messengers more than ever,” she said. “In the same way in 2016 and 2020, we started to say, ‘Okay, we need to invest in influencer messaging and relational organizing,’ those things become more important this cycle.”