In the three years since Americans last cast ballots in a presidential election, there’s been a global technological revolution that has upended the way we create and consume images, audio, and words.
And while campaign season kicked off in the United States last month with Republican presidential candidates taking the debate stage in Wisconsin, the potential influence of generative AI appeared months ago in ads featuring AI-generated images and audio.
Concerns about AI’s potential to facilitate the mass creation and distribution of mis- and disinformation abound, but experts said there’s also potential for the technology to have a positive impact when it comes to informing voters—though that’s unlikely to happen in time for the 2024 contests.
“I don’t think we have figured out how to manage the negative impacts of social media…and now we have this new technological wave to worry about,” Irina Raicu, who leads the Internet Ethics Program at Santa Clara University, said.
“All of the things we worried about with the impact of algorithmic feeds and how they shape political engagement are still there,” she explained. “Now they have this augmentation via generative AI, which can make the fake stuff more believable, easier to generate, faster.”
Elections in the AI era
It’s not just voters and candidates who could find themselves battling an onslaught of AI-generated mis- and disinformation. Online platforms, including social media sites and AI content generators, will also play a key role.
And platforms are beginning to construct policies governing the use and distribution of AI, despite a lack of guidance at the federal level: While the Federal Election Commission is considering regulating the use of AI-generated deepfakes in campaign ads, Congress has yet to take definitive action on regulating AI broadly.
According to Katie Harbath, a former Facebook public policy director and National Republican Senatorial Committee digital strategist, that means online platforms will face major challenges when governing the use and distribution of AI.
“Platforms have to get ready [for] people to be stress-testing their platforms. There’s this kind of horse-and-cart problem that they have—should they have a policy even if they can only enforce it reactively at first, and not proactively?” she said. The main thing we can expect is that their approaches will evolve, she added.
“They can keep adapting—and they will, and we’ve seen that—and I know that drives people nuts, but the situations you’re in…continue to evolve,” Harbath said. “I would expect that, frankly, we should see multiple updates from all the platforms.”
Many platforms already have policies around political content. OpenAI, for example, prohibits use of its platform for “political campaigning or lobbying by generating high volumes of campaign materials,” among other things. And Google recently announced a policy requiring that AI-generated ads include “a clear disclaimer located somewhere that users are likely to notice,” the AP reported.
While policies may change, and frequently do, the people who run elections are relying on platforms to anticipate and mitigate these new AI dilemmas in the months to come.
Tammy Patrick, CEO of Programs for the National Association of Election Officials, said that election administrators are anticipating that “old tactics” of distributing disinformation—about when and how to vote, for example—will resurface in 2024 with new vigor.
“In this new reality…it’s going to be potentially done in such a way that it’s more convincing that it’s actually an official person providing the wrong information, and that it’s going to potentially spread far more quickly than it ever has in the past,” Patrick told Tech Brew.
Strategies election officials could use to combat AI-powered disinformation will likely depend on the platform through which it’s being spread, she added.
“If it’s on the social media platforms it’ll be one thing, because it’ll be very reliant upon what that individual company is doing to ensure that the platform is being used for accurate and truthful information,” she said.
Offline issues: It’s not just social media platforms that election officials are concerned about, Patrick said. Other channels like phone, email, and good old-fashioned snail mail are also potential targets, she explained.
Each presents its own challenges, often exacerbated by the limited resources available to elections offices. Patrick pointed to the wide variance in web addresses for elections offices: while some have migrated to a .gov domain, which is heavily regulated, others still rely on less secure channels, like personal email addresses.
“If you have a small jurisdiction and you serve a few hundred or a few thousand voters, you’re probably a part-time election official, and you probably wear a lot of hats in your community,” Patrick said. “Those are the types of environments that will be ripe for those who have ill intent.”
False information that appears to come from election officials is a major concern, Patrick said, and officials are looking to tech companies and other stakeholders to provide solutions quickly.
“It’s only going to get worse before it gets better, and hopefully it won’t be at a point where it ever impacts voters’ understanding of when they vote, where they vote, how they can vote, because that can therefore impact the outcome of an election.”
A possible bright spot: Yamil Velez, an assistant professor of political science at Columbia University, sees the potential for campaigns to use AI to increase voter engagement through hyper-specific targeting.
“There might be increased engagement because they have a better sense of where the parties stand on issues that they might care about,” Velez said. Voter information guides tend to be quite dense, he added, and generative AI could help make that information more accessible.
“Ultimately what we want to test is, can you embed some of this information in a conversational AI and make it even more accessible, where voters can just have a free-flowing conversation with the bot about the political system and which party might come closest to their views,” he explained. “It’s just another tool in the arsenal of increasing people’s knowledge of the political system.”
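The kind of tool Velez describes can be prototyped with off-the-shelf components. Below is a minimal sketch of the retrieval step such a bot might rely on, assuming a voter guide already split into short passages; the passages and the VOTER_GUIDE_PASSAGES name are hypothetical placeholders, and a production system would hand the retrieved text to a language model to phrase a conversational answer rather than printing it directly.

```python
# Minimal sketch: grounding a voter-information bot in an official guide.
# The passages below are hypothetical placeholders; a real system would load
# the jurisdiction's actual voter guide.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

VOTER_GUIDE_PASSAGES = [
    "Polls are open from 7am to 8pm on Election Day.",
    "Any registered voter may request a mail-in ballot up to 15 days early.",
    "Party A's platform emphasizes expanding public transit funding.",
    "Party B's platform emphasizes reducing small-business regulation.",
]

# Index the guide once, then match each incoming question against it.
vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(VOTER_GUIDE_PASSAGES)

def retrieve_passage(question: str) -> str:
    """Return the guide passage most similar to the voter's question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, passage_vectors)[0]
    return VOTER_GUIDE_PASSAGES[scores.argmax()]

print(retrieve_passage("Which party wants more money for buses and trains?"))
```

Keeping answers tethered to retrieved official text, rather than letting a model answer freely, is one way a design like this could reduce the risk of the bot itself producing the kind of misinformation the rest of this piece describes.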
Beyond 2024
Next year will be critical for elections, and not just because of the anticipated rematch between President Joe Biden and former President Donald Trump. Throughout the year, voters will head to the polls in Russia, Mexico, Taiwan, and the European Union, to name just a few.
“The thing to remember is a lot of copycatting happens, of people seeing what campaigns are doing in the US and trying to use it for their elections,” Harbath said. “The impact of what happens here can have very global consequences.”
While many questions remain about the use of AI in election campaigns, the technology could also prove useful for addressing longstanding election administration concerns.
Mitchell Brown, a political science professor at Auburn University, said US election officials are also considering ways AI could improve things like accuracy and efficiency.
“I’ve also talked with election officials who are thinking about trying to use AI…to do things like engage in voter modeling, to predict, based on where likely voters are, what the better places to put vote centers might be,” Brown told Tech Brew, pointing to potential solutions to things like voting line length, lack of parking, and bias in precinct placement.
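One simple way the voter modeling Brown describes could work is weighted clustering: place candidate vote centers at the centers of mass of likely voters, so high-turnout areas pull sites toward them. The sketch below uses synthetic coordinates and turnout weights and is only an illustration of the idea, not any jurisdiction's or vendor's actual method.

```python
# Minimal sketch: suggesting vote-center locations from modeled voter density.
# Coordinates and turnout weights are synthetic; a real model would use
# geocoded registration files and historical turnout data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
voter_locations = rng.uniform(0, 10, size=(5000, 2))  # (x, y) per likely voter
turnout_weight = rng.uniform(0.2, 1.0, size=5000)     # modeled turnout probability

# Weighted k-means: centroids gravitate toward high-turnout areas,
# so each cluster center becomes a candidate vote-center site.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(voter_locations, sample_weight=turnout_weight)

for site in kmeans.cluster_centers_:
    print(f"Candidate vote center near ({site[0]:.2f}, {site[1]:.2f})")
```

A real deployment would also need constraints this toy version ignores, such as accessible buildings, parking, and travel-time equity across neighborhoods, which speaks to Brown's point that such work would likely fall to vendors rather than small election offices.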
But those types of solutions are unlikely to be widely implemented in 2024, Brown said.
“Given how we underfund elections in this country, I can’t imagine that there are many election officials…that would have the technological sophistication. And so it would have to be on the vendor side,” she said. “My guess is wealthier jurisdictions will be able to have real conversations about how to use this technology—smaller and poorer jurisdictions won’t. All jurisdictions, though, will have to fight against it being weaponized.”
The future of AI and politics: Velez, at Columbia, said we may see transparency requirements for AI platforms that could provide more insight into the tech’s impact on politics.
“Just like we started seeing transparency in terms of social media, there might be new reporting requirements to assess how much political content is being generated,” Velez said. “Are people depending on these tools? How are they using them to understand the political system?”
AI ethicist Olivia Gambelin offered an empowering take heading into 2024: “You are not powerless, this technology is not something that happens to you, it’s something that you interact with, and you can decide how that interaction happens.”
Because the US is “late to the game” in implementing controls around AI content, consumers have more work to do—reading multiple sources and being cognizant of targeted messaging, Gambelin said. “That’s the unfortunate part—you do have to put in the extra effort,” she added.
“In one way, we should be more engaged, we should be more informed…if fact-checking leads to a more informed conversation and a more informed public, then that’s a good outcome.”