“If nothing else, generative AI does a great job at translating content, so countries that haven’t experienced many phishing attempts so far may soon see more,” McGladrey adds.
Others warn that more AI-enabled threats are on the horizon, saying they expect hackers to use deepfakes to mimic individuals — such as high-profile executives and civic leaders, whose voices and images are widely and publicly available to train AI models on.
“It’s definitely something we’re keeping an eye on, but already the possibilities are pretty clear. The technology is getting better and better, making it harder to discern what’s real,” says Ryan Bell, threat intelligence manager at cyber insurance provider Corvus, citing the use of deepfake images of Ukrainian President Volodymyr Zelensky to pass along disinformation as evidence of the technology’s use for nefarious purposes.
Moreover, the Finnish report offered a dire assessment of what’s ahead: “In the near future, fast-paced AI advances will enhance and create a larger range of attack techniques through automation, stealth, social engineering, or information gathering. Therefore, we predict that AI-enabled attacks will become more widespread among less skilled attackers in the next five years. As conventional cyberattacks will become obsolete, AI technologies, skills and tools will become more available and affordable, incentivizing attackers to make use of AI-enabled cyberattacks.”
Hijacking enterprise AI
On a related note, some security experts say hackers could use an organization’s own chatbots against them.
As is the case with more conventional attack scenarios, attackers could try to hack into the chatbot systems to steal any data within those systems or to use them to access other systems that hold greater value to the bad actors.
That, of course, is not particularly novel. What is, though, is the potential for hackers to repurpose compromised chatbots and then use them as conduits to spread malware or perhaps interact with others — customers, employees, or other systems — in nefarious ways, says Matt Landers, a security engineer with security firm OccamSec.
Similar warnings came recently from Voyager18, the cyber risk research team at security software company Vulcan Cyber. The researchers published a June 2023 advisory detailing how hackers could use generative AI, including ChatGPT, to spread malicious packages into developers’ environments.
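The risk the advisory describes — attackers registering real packages under names an AI assistant invents — suggests a simple defensive habit: vet dependency names before installing them. Below is a minimal illustrative sketch, not taken from the advisory itself; the allowlist contents and package names are hypothetical.

```python
# Hypothetical guard against "hallucinated" dependency names: check a
# requirements file against an internally vetted allowlist before install.
APPROVED = {"requests", "numpy", "flask"}  # illustrative internal allowlist


def vet_requirements(lines):
    """Split requirements lines into (approved, flagged) package names."""
    approved, flagged = [], []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if not name or name.startswith("#"):
            continue  # skip blanks and comments
        (approved if name in APPROVED else flagged).append(name)
    return approved, flagged


ok, suspicious = vet_requirements(
    ["requests==2.31.0", "totally-real-aws-sdk==0.1"]
)
# → ok == ["requests"], suspicious == ["totally-real-aws-sdk"]
```

In practice such a check would sit in a CI pipeline or a pre-install hook, so a fabricated name is flagged before it ever reaches a developer’s machine.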
Wuchner says the new threats posed by AI don’t end there. Organizations could find errors, vulnerabilities, and malicious code entering the enterprise as more workers — particularly those outside IT — use gen AI to write code and quickly deploy it.
“All the studies show how easy it is to create scripts with AI, but trusting these technologies is bringing things into the organization that no one ever thought about,” Wuchner adds.
Quantum computing
The United States passed the Quantum Computing Cybersecurity Preparedness Act in December 2022, codifying into law a measure aimed at securing federal government systems and data against the quantum-enabled cyberattacks that many expect will happen as quantum computing matures.
Several months later, in June 2023, the European Policy Centre urged similar action, calling on European officials to prepare for the advent of quantum cyberattacks — an anticipated event dubbed Q-Day.
According to experts, quantum computing could advance enough in the next five to 10 years to break today’s cryptographic algorithms — a capability that would leave all digital information protected by current encryption protocols vulnerable to cyberattacks.
“We know quantum computing will hit us in three to 10 years, but no one really knows what the full impact will be yet,” Ruchie says. Worse still, he says bad actors could use quantum computing or quantum computing paired with AI to “spin out new threats.”
Data and SEO poisoning
Another threat that has emerged is data poisoning, says Rony Thakur, collegiate associate professor at the University of Maryland Global Campus’ School of Cybersecurity and IT.
With data poisoning, attackers tamper with or corrupt the data used to train machine-learning and deep-learning models, using a variety of techniques. Sometimes also called model poisoning, the attack aims to degrade the accuracy of the AI’s decision-making and outputs.
As Thakur summarizes: “You can manipulate algorithms by poisoning the data.”
He notes that both insider and external bad actors are capable of data poisoning. Moreover, he says many organizations lack the skills to detect such a sophisticated attack. Although organizations have yet to see or report such attacks at any scale, researchers have explored and demonstrated that hackers could, in fact, be capable of such attacks.
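The simplest form of the attack, label flipping, can be demonstrated in a few lines of pure Python. This toy sketch uses a nearest-centroid classifier on synthetic one-dimensional data; every name and number here is illustrative, not drawn from any cited research.

```python
import random


def make_data(n, rng):
    # Two well-separated classes: class 0 near 0, class 1 near 10.
    data = [(rng.uniform(0, 2), 0) for _ in range(n)]
    data += [(rng.uniform(8, 10), 1) for _ in range(n)]
    rng.shuffle(data)
    return data


def centroids(train):
    # Mean feature value per class label.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in (0, 1)}


def accuracy(train, test):
    cent = centroids(train)
    correct = sum(
        1 for x, y in test
        if min(cent, key=lambda c: abs(x - cent[c])) == y
    )
    return correct / len(test)


def poison(train, fraction, rng):
    # Label-flipping attack: silently corrupt a fraction of training labels.
    poisoned = list(train)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        x, y = poisoned[i]
        poisoned[i] = (x, 1 - y)
    return poisoned


rng = random.Random(0)
train, test = make_data(50, rng), make_data(20, rng)
clean_acc = accuracy(train, test)
poisoned_acc = accuracy(poison(train, 0.7, rng), test)
```

On clean data the classifier is perfect; with most labels flipped, the class centroids drift toward each other’s clusters and accuracy collapses — the model’s code is untouched, only its training data was manipulated.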
Others cite an additional “poisoning” threat: search engine optimization (SEO) poisoning, which most commonly involves the manipulation of search engine rankings to redirect users to malicious websites that will install malware on their devices. Info-Tech Research Group called out the SEO poisoning threat in its June 2023 Threat Landscape Briefing, calling it a growing threat.
Preparing for what’s next
A majority of CISOs are anticipating a changing threat landscape: 58% of security leaders expect a different set of cyber risks over the coming five years, according to a poll conducted by executive search firm Heidrick & Struggles for its 2023 Global Chief Information Security Officer (CISO) Survey.
CISOs named AI and machine learning as the top theme among the most significant cyber risks, with 46% saying as much. They also listed geopolitical threats, attacks, cloud, quantum, and supply chain among the other top cyber risk themes.
Authors of the Heidrick & Struggles survey noted that respondents offered some thoughts on the topic. For example, one wrote that there will be “a continued arms race for automation.” Another wrote, “As attackers increase [the] attack cycle, respondents must move faster.” A third shared that “Cyber threats [will be] at machine speed, whereas defenses will be at human speed.”
The authors added, “Others expressed similar concerns, that skills will not scale from old to new. Still others had more existential fears, citing the ‘dramatic erosion in our ability to discern truth from fiction.'”
Security leaders say the best way to prepare for evolving threats, and for any new ones that might emerge, is to follow established best practices while layering in new technologies and strategies to strengthen defenses and build proactive elements into enterprise security.
“It’s taking the fundamentals and applying new techniques where you can to advance [your security posture] and create a defense in depth so you can get to that next level, so you can get to a point where you could detect anything novel,” says Norman Kromberg, CISO of security software company NetSPI. “That approach could give you enough capability to identify that unknown thing.”