Deepfake attacks are on the rise, and they’re hitting consumers where it hurts: in their bank accounts.
According to a new report from security tech company Pindrop, 67% of respondents said they’re concerned about the use of deepfakes and voice clones in the banking and finance sector. Those concerns aren’t unfounded, Pindrop CEO Vijay Balasubramaniyan told Tech Brew.
“Banks and financial institutions are almost [always] the first target for any new, sophisticated attack vector. And that’s because they offer the easiest way to money,” he said. “If I’m able to take over a banking account, I can actually, immediately, potentially wire-transfer money, order new credit cards, perform transactions—and I get real money very quickly.”
Pindrop’s report, released May 22, draws on the company’s insights from providing multifactor authentication and deepfake detection services to top US banks and insurers, as well as major retailers and healthcare providers, Balasubramaniyan said.
Since deepfakes burst into the public eye around 2019 with a high-profile, altered video of Nancy Pelosi, they’ve amused, confused, and exploited the images of unsuspecting people. (A Department of Homeland Security report notes that 95% of deepfakes depict “nonconsensual porn of women,” at least as of 2021.) But the potential for business losses is very real as well, as design firm Arup saw when it lost $25 million to a deepfake scam earlier this year.
According to Pindrop’s report, customer call centers and advisors for high-net-worth individuals are increasingly seen as prime targets for such attacks. Deepfake technology lets bad actors impersonate clients so convincingly that private wealth managers can be tricked into making transactions without the real client’s consent, Balasubramaniyan said.
“We’ve seen a lot more usage of deepfakes when the accounts have high dollar amounts associated with them,” he told us. “In a lot of cases, these high-net-worth individuals are famous or, you know, have big roles and responsibilities. And so it’s easier to get audio and video of those people to go after them.”
Attackers can still strike even when they have less information about a banking customer. In one instance, Balasubramaniyan said, attackers used a synthetic voice with an accent to pass as a customer, a Latin American woman.
“The call center agent didn’t know the difference,” he said. “When they’re actually cloning your voice, that’s what we call a deepfake, as opposed to synthetic speech, where they’re just getting close to your demographic.”
Either approach can be convincing to the right audience and result in the loss of thousands of dollars. That’s why Pindrop is trying out a new way to boost confidence in its own deepfake-detection technology: a warranty program.
Under this offering, also announced May 22, call centers using Pindrop’s services will be reimbursed for losses up to $1 million if the company fails to flag a fraudulent voice.
For Balasubramaniyan, the initiative is doubly beneficial: It creates a sort of bug-bounty program to help Pindrop’s engineers investigate calls that evade its detection. And it has the potential to set a new gold standard for security companies willing to back up their advertised capabilities.
“My hope is that [it] creates a set of responsible startups or responsible companies in the space that actually put their money where their mouth is,” he said.