With the marathon US presidential election season getting underway just as AI-generated fakery reaches new heights of believability, experts fear the confluence could stress-test public trust in media and politics.
Rijul Gupta, co-founder and CEO of DeepMedia AI, told Tech Brew he’s seen interest in his startup’s deepfake detection from social platforms, news outlets, and campaigns looking to bolster their defenses for the upcoming political race. Last week, the startup rolled out its first public deepfake detection platform, DeepID, designed to suss out “synthetic audio, video, text, and image manipulation.”
The startup also began work earlier this year to fulfill a contract valued at $1.25 million from the Air Force Research Laboratory to integrate its tools into Department of Defense applications. The stated purpose is “rapid and accurate deepfake detection to counter Russian and Chinese information warfare.”
“A lot of major governments are concerned in three major areas: There’s political, both domestic and foreign. There’s militaristic, such as fake videos coming out of Russia and Ukraine…and there’s also financial—the idea that you could have fake images, fake voices, and videos that come out that have major financial impacts,” Gupta said.
Given that background, it may come as a surprise that DeepMedia traffics in synthetic media, albeit in what it claims is a more responsible way, through products like DubSync, which is designed to translate videos into different languages. Gupta claims the two-pronged business model is what gives the company an edge in what is often a cat-and-mouse game of staying one step ahead of bad actors.
“Everyone else in this space trying to do AI verification and media authentication, they’re very reactionary,” Gupta said. “We keep up to date on the generative side of the research. We’re reading all the papers coming out from these research institutions. We’re seeing the techniques, we can identify patterns, and see where this is going to go.”
DeepMedia uses its own tools to generate fake images, audio, and video, which it then feeds into its detection tools to try to improve their accuracy. The company claims the models can now detect fake faces and voices with 99% accuracy, and image manipulation with 95% accuracy.
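For readers curious what such a generate-then-detect feedback loop might look like, here is a minimal, purely illustrative sketch in Python. Every name in it (the FakeGenerator and DeepfakeDetector classes, their methods, and the toy "training" rule) is hypothetical; DeepMedia has not published its architecture, and this stands in for the general idea only.

```python
import random

# Hypothetical sketch of a generate-then-detect training loop, loosely
# modeled on the approach described above. None of these classes or
# method names come from DeepMedia; they are placeholders.

class FakeGenerator:
    """Stands in for a generative model producing synthetic media."""
    def generate(self) -> list[float]:
        # Placeholder: synthetic samples drawn from a shifted distribution.
        return [random.gauss(0.5, 1.0) for _ in range(16)]

class DeepfakeDetector:
    """Stands in for a detector scoring how likely a sample is synthetic."""
    def __init__(self):
        self.threshold = 0.0

    def score(self, sample: list[float]) -> float:
        # Placeholder score: just the sample mean.
        return sum(sample) / len(sample)

    def update(self, fake_scores: list[float], real_scores: list[float]):
        # Toy "training": nudge the decision threshold toward the midpoint
        # of the two score distributions, so fakes land above it.
        mid = (sum(fake_scores) / len(fake_scores)
               + sum(real_scores) / len(real_scores)) / 2
        self.threshold = 0.9 * self.threshold + 0.1 * mid

def real_sample() -> list[float]:
    # Placeholder: authentic samples drawn from a baseline distribution.
    return [random.gauss(0.0, 1.0) for _ in range(16)]

generator = FakeGenerator()
detector = DeepfakeDetector()

# The feedback loop: generate fakes in-house, score them alongside real
# media, and use both to sharpen the detector, round after round.
for _ in range(100):
    fakes = [generator.generate() for _ in range(32)]
    reals = [real_sample() for _ in range(32)]
    detector.update([detector.score(f) for f in fakes],
                    [detector.score(r) for r in reals])

# Evaluate: fraction of held-out synthetic samples correctly flagged.
hits = sum(detector.score(generator.generate()) > detector.threshold
           for _ in range(1000))
print(f"detected {hits / 10:.1f}% of synthetic samples")
```

The point of the loop is the same cat-and-mouse dynamic Gupta describes: as the in-house generator improves, the detector is forced to keep pace.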
As AI approximations of videos and images have become more lifelike in recent years, many startups have entered the market to develop detection tools. Governments are also setting aside portions of defense budgets in anticipation of the threat; the US’s Defense Advanced Research Projects Agency (DARPA) is slated to spend close to $30 million this year on its program to “defend against the falsification of multimedia and disinformation campaigns.”
And companies are banding together in groups like Adobe’s Content Authenticity Initiative, with the goal of creating watermark-like labels to verify authenticity.
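As a rough illustration of how such provenance labels work under the hood, the sketch below hashes a media file and attaches a signed record of its origin, which any later edit invalidates. It is a deliberate simplification: the actual Content Authenticity Initiative/C2PA approach embeds certificate-signed manifests in the file itself, not the shared-secret HMAC shortcut used here, and the key and creator name are made up for the example.

```python
import hashlib
import hmac
import json

# Simplified stand-in for content-provenance labeling. Real C2PA
# manifests use certificate-based (PKI) signatures; signing a hash
# with a shared secret just shows the basic idea.

SECRET_KEY = b"demo-signing-key"  # hypothetical; real systems use PKI

def make_provenance_label(media_bytes: bytes, creator: str) -> dict:
    """Build a signed label binding a creator claim to the file's hash."""
    claim = {"creator": creator,
             "sha256": hashlib.sha256(media_bytes).hexdigest()}
    signature = hmac.new(SECRET_KEY,
                         json.dumps(claim, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_provenance_label(media_bytes: bytes, label: dict) -> bool:
    """Check the signature and that the file hasn't been altered."""
    expected = hmac.new(SECRET_KEY,
                        json.dumps(label["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, label["signature"]):
        return False  # label was forged or tampered with
    return label["claim"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()

original = b"...raw image bytes..."
label = make_provenance_label(original, creator="Example Newsroom")
print(verify_provenance_label(original, label))                    # True
print(verify_provenance_label(original + b"edited pixel", label))  # False
```

A manipulated file fails verification because its hash no longer matches the signed claim, which is the property these authenticity labels rely on.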
But Gupta, who co-founded DeepMedia in 2017, said the challenge of the upcoming election could lie more in convincing the public of the truth than in the technical details of detecting deepfakes.
“The 2024 election is going to be the deepfake election. It’s going to test our ability as a society to detect deepfakes. Not from necessarily a technical perspective—deepfakes are detectable; we can do that,” Gupta said. “The real challenge is operating at scale and integration with the platforms. And then, of course, the other challenge is actually getting people to believe it.”