It’s a matter of days before American voters go to the polls, and there’s been some debate over whether the “AI election” has actually come to pass.
Major presidential campaigns don’t seem to be terribly interested in the technology as an electioneering tool, according to the New York Times, unlike races in a few other countries. But a swirl of online misinformation around an especially brutal hurricane season has showcased how damaging AI can be to the media ecosystem. And companies and state agencies continue to warn about malicious interference from foreign actors.
We’ve rounded up some of the recent headlines on this front below in the latest (and potentially last) occasional roundup of AI and election news.
- Hurricanes Helene and Milton spawned a storm of misinformation around weather-based conspiracy theories and relief-effort falsehoods, some of it aided by AI content. Sensationalized AI images and video of flood damage went viral on social platforms.
- OpenAI said in its latest report on election interference this month that it has disrupted more than 20 influence operations worldwide attempting to use its models so far this year, including some aimed at the United States. Most of these efforts seem to have had marginal impact: “Threat actors continue to evolve and experiment with our models,” the authors wrote, “but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”
- Citing free speech grounds, a judge blocked a new California law that would have allowed individuals to sue for damages over political deepfakes. It was one of a trio of election deepfake laws Governor Gavin Newsom signed last month, and the only one that was set to take effect immediately. The court clash highlights the challenges of strictly regulating deepfakes.
- Whatever the actual extent of AI misinformation, most US adults are worried. A Pew Research Center poll found that 57% of respondents—split equally between Democrats and Republicans—were extremely or very concerned that people seeking to influence the election will use AI to create and spread false content. Around 39% said they expect AI to be used mostly for bad in elections, 27% said equally for good and bad, and only 5% said mostly for good.
One of the difficulties in sizing up AI’s influence in the election is that generated content—especially text—can be tricky to detect. It’s also hard to tell how much difference a given piece of AI content might have made in voters’ minds, according to Sez Harmon, AI policy analyst at the Responsible AI Institute.
“Deepfakes have been rampant during this election cycle, but their influence is challenging to gauge,” Harmon said in an email. “Thousands of AI-edited images, videos, and audio recordings of the presidential candidates were uploaded across social media platforms this year, but I cannot speak to how this synthetic media is changing voter opinions.”
And while AI can scale and amplify misinformation, lower-tech trickery can sow it as well.
“Some of the posts that seemed to make the biggest headlines in the US this election cycle were not technically deepfakes, but videos falsifying information or misrepresenting political parties and their constituencies with real audio and video,” Harmon said.
That said, examples from other countries, like India’s election this past spring, show how damaging deepfakes can be for politics, Harmon said. “Other democratic countries experienced more tumult this year from political deepfakes.”