“While they have been around for years, today’s versions are more realistic than ever, where even trained eyes and ears may fail to identify them. Both harnessing the power of artificial intelligence and defending against it hinges on the ability to connect the conceptual to the tangible. If the security industry fails to demystify AI and its potential malicious use cases, 2024 will be a field day for threat actors targeting the election space.”
Slovakia’s general election in September might serve as an object lesson in how deepfake technology can mar elections. In the run-up to that country’s highly contested parliamentary elections, the far-right Republika party circulated deepfake videos with altered voices of Progressive Slovakia leader Michal Simecka announcing plans to raise the price of beer and, more seriously, discussing how his party planned to rig the election. Although it’s uncertain how much sway these deepfakes held in the ultimate election outcome, which saw the pro-Russian, Republika-aligned Smer party finish first, the election demonstrated the power of deepfakes.
Politically oriented deepfakes have already appeared on the US political scene. Earlier this year, an altered TV interview with Democratic US Senator Elizabeth Warren was circulated on social media outlets. In September, Google announced it would require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered, prompting lawmakers to pressure Meta and X, formerly Twitter, to follow suit.
Deepfakes are ‘pretty scary stuff’
Fresh from attending AWS’s 2023 re:Invent conference, Tony Pietrocola, president of AgileBlue, says the conference was heavily weighted toward artificial intelligence, particularly its implications for election interference.
“When you think about what AI can do, you saw a lot more about not just misinformation, but also more fraud, deception, and deepfakes,” he tells CSO.
“It’s pretty scary stuff because it looks like the person, whether it’s a congressman, a senator, a presidential candidate, whoever it might be, and they’re saying something,” he says. “Here’s the crazy part: somebody sees it, and it gets a bazillion hits. That’s what people see and remember; they don’t go back ever to see that, oh, this was a fake.”
Pietrocola thinks that the combination of massive amounts of data stolen in hacks and breaches and improved AI technology could make deepfakes a “perfect storm” of misinformation as we head into next year’s elections. “So, it is the perfect storm, but it’s not just the AI that makes it look, sound, and act real. It’s the social engineering data that [threat actors have] either stolen, or we’ve voluntarily given, that they’re using to create a digital profile that is, to me, the double whammy. Okay, they know everything about us, and now it looks and acts like us.”