
Who should be responsible for stopping political deepfakes?

A new California law says platforms need to police themselves, and the FTC wants to hold AI tools liable, but the legal picture remains mostly unclear.

Ask Meta’s AI chatbot to generate an image of a presidential debate, and the platform’s usual commitment to photorealism is replaced by a clunky cartoonishness.

Fed the same prompt, Microsoft’s Image Creator produced a scene of two figures who vaguely resembled the Democratic and Republican candidates, though their features were indistinct and the image lacked detail.

Meanwhile, X’s Grok image generator will readily offer up near-photo-quality images of the two candidates.

With the US presidential election now in the home stretch, most of the major image generators seem to be taking sometimes subtle steps to steer users away from creating election deepfakes. Large online platforms, many of which now have AI generation features embedded, have—mostly—made moves to crack down on election-related misinformation.

Yet reports show that AI-powered misinformation has continued to spread online in recent months, though experts say its true scope and influence are hard to gauge. And with the bad actors behind these deepfakes sometimes tricky to pin down, some new regulatory pushes have raised the question of whether the tools that create deepfakes and the platforms that spread them should be held liable.

“Who has the most money to go after is my question,” said Veronica Torres, worldwide privacy and regulatory counsel at identity verification company Jumio. “It’s a hard question to answer, because the answer should be both…The producer of the content should be the one who has the most fault, but they might not be the ones who are easily identifiable. And so there’s a certain level of responsibility across the different use cases and across the different distributors along the line.”

Steps so far

A California law passed last month holds large online platforms accountable for removing or labeling synthetic media spread by users. The Federal Trade Commission finalized a rule earlier this year that would hold AI companies liable for fraud committed with their tools in certain cases. Congressional reps on both sides of the aisle have called for developer liability, and a poll from the AI Policy Institute earlier this year found that 70% of Americans support legislation that would hold these companies liable.

Opponents of the California law say it relies too much on detection capabilities, which are indeed sometimes spotty; gives platforms undue ability to determine what is election information; and could chill political free speech. Any legislative effort to go after platforms will also need to carve out exceptions to Section 230 of the Communications Decency Act, which shields platforms from liability for what their users post.

And tech companies and developers have pushed back on other laws that seek to make AI companies responsible for what their tools create, arguing that it could stifle innovation and have a detrimental effect on the open-source community, where developers don’t have ultimate control over who uses their software.


For its part, Meta has already attempted to implement a detection and labeling system for content posted on its platforms, whether generated with its tools or not. In its most recent platform update last month, the company said it now gives users more information about how exactly content was modified.

However, the AI label is only visible in Facebook or Instagram’s mobile apps—not on the Facebook site in a mobile or desktop browser. TikTok also offers a label and purports to detect AI images. Neither Meta nor TikTok immediately responded to a request for comment.

On the other hand, X mostly relies on its Community Notes feature to label misinformation, and under the ownership of Elon Musk, the company has scrapped some of its previous efforts to crack down on the problem.

Who’s responsible?

Clarissa Cerda, chief legal officer at Pindrop, a security company that offers audio deepfake detection tools, said that while the onus should mostly be on the bad actors themselves, big platforms need to bear some responsibility for the content they circulate.

“There has to be some modicum of appropriate responsibility for the large online platform providers that have this great power of dissemination,” Cerda said. “They need to be responsible and aware of the impact that their actions could have on society and dissemination in connection with elections, particularly when half the world is up for elections in 2024.”

But sizing up the scope of the problem can be difficult, especially when AI image detection is not always reliable. Sez Harmon, an AI policy analyst at the Responsible AI Institute, said “thousands of AI-edited images, videos, and audio recordings of the presidential candidates” have been uploaded to social platforms, but it’s unclear how much impact they’ve had. Other democratic countries have already had high-profile instances of deepfake electioneering trickery, however, she said.

“Deepfakes have been rampant during this election cycle, but their influence is challenging to gauge,” Harmon said in an email.

With at least 20 US states having passed political deepfake laws—per Public Citizen’s tracker—and multiple bills working their way through Congress, the regulatory situation is still very much in flux.

“A lot is left up in the air right now related to how deepfakes are treated legally in the US, and who can be held liable for their creation and distribution,” Harmon said. “As more AI tools become publicly available, I hope we see comprehensive federal legislation.”

