From boardroom conversations to industry events, “artificial intelligence” is the buzz phrase that’s reshaping how we collectively view the future of security. The perspectives are diverse, to say the least. Some insist that AI is a long overdue silver bullet, while others believe it will gradually destroy digital society as we know it.
When it comes to emerging technologies, these hype cycles—and the bold claims that accompany them—often don’t fully align with reality. While threat actors are absolutely using AI to augment and streamline their efforts, the sensational scenarios we often hear about are still largely theoretical.
Defenders need a clear assessment of how AI is shifting the cybercrime ecosystem today, along with insight into how it's likely to evolve, not more fear, uncertainty, and doubt. By separating fact from fiction when it comes to AI and cybercrime, security teams everywhere will be better equipped to adjust their defense strategies, anticipate how attackers may use AI in the future, and effectively protect critical assets.
The democratization of AI offers attackers new capabilities
As with any new technology, it's easy to assume that cybercriminals are using AI to create brand-new attack vectors. Yet instead of using AI to reinvent threats entirely, attackers are primarily using this tool to turbocharge their existing operations. Threat actors are relying on AI to drive the efficiency, accuracy, and scale of techniques such as social engineering and malware deployment. For example, cybercriminals are using generative AI tools like FraudGPT and voice-cloning platforms like ElevenLabs to craft phishing and vishing attacks that mimic a company executive's tone and style, making it increasingly difficult for a recipient to recognize the threat.
Adversaries are using these AI tools for localized language support as well, making it easy to craft convincing communications that can be reused virtually anywhere in the world.
The democratization of AI is driving these shifts in attacker capabilities, and even novice threat actors can now execute successful (and lucrative) attacks. Much of what once required substantial coding expertise or careful logistics planning is now easily attained through AI. Threat actors are relying on AI as an “easy button,” using the technology to automate labor-intensive tasks like scaling their reconnaissance efforts, developing highly personalized and contextually relevant social engineering communications, and optimizing existing malicious code to dodge detection.
AI is also impacting what's available on the dark web, with AI-as-a-Service models flourishing in the cybercriminal underground. Much like the ransomware-as-a-service models that have become common over the past decade, adversaries can buy AI-enhanced services that offer reconnaissance tools, deepfake generation, social engineering kits tailored to specific industries or languages, and more. The result is that cybercrime is becoming cheaper, faster, more targeted, and harder to detect.
The evolution of AI: Preparing defenders for tomorrow’s threats
As security professionals chart their defensive strategies, we must consider how AI will reshape cybercrime in the coming years. We also need to anticipate the fundamental pivots attackers will make, and what this evolution means for our entire industry. AI will inevitably transform vulnerability discovery, enable the creation of novel attack vectors, and drive the use of autonomous agent swarms. In particular, AI-accelerated discovery of zero-day vulnerabilities poses a serious concern for defenders.
Beyond using AI to mine for fresh vulnerabilities, cybercriminals will use AI to develop new attack vectors. While this isn't occurring at scale today, it's a concept that will likely become reality. For example, attackers could exploit vulnerabilities within AI systems themselves or carry out sophisticated data poisoning attacks targeting the machine learning models organizations use, as the sketch below illustrates.
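To make the data poisoning concept concrete, here is a minimal, entirely synthetic sketch in Python. It assumes an attacker who can inject mislabeled samples into a model's training pipeline (for instance, through a crowd-sourced threat feed); the dataset, features, and class layout are illustrative assumptions, not a real attack or any specific product's model.

```python
# Toy illustration of data poisoning by label-flipped injection: an attacker
# who can feed mislabeled samples into a training pipeline teaches a detector
# to wave malicious activity through. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    """Two Gaussian clusters: class 0 = benign, class 1 = malicious."""
    X = np.vstack([rng.normal(0, 1, (n, 5)), rng.normal(3, 1, (n, 5))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(400)
X_test, y_test = make_data(200)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"clean model accuracy:    {clean_model.score(X_test, y_test):.2%}")

# The attacker injects 900 samples that look malicious but carry benign
# labels, outnumbering the 400 genuine malicious training samples in that
# region of feature space.
X_poison = rng.normal(3, 1, (900, 5))
y_poison = np.zeros(900, dtype=int)
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, y_poison])

poisoned_model = LogisticRegression(max_iter=1000).fit(X_bad, y_bad)
print(f"poisoned model accuracy: {poisoned_model.score(X_test, y_test):.2%}")
# The poisoned model now classifies most genuinely malicious samples as benign.
```

Even this crude injection flips the model's verdict across the malicious region of feature space, which is why the provenance and validation of training data will matter as much as model quality.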
Finally, while autonomous agent swarms conducting entire cyberattacks don't seem plausible today, it's crucial that the cybersecurity community monitor the ways in which threat actors are incrementally harnessing automation to turbocharge their attacks.
Building a cyber resilient future
Countering more advanced AI-driven threats requires that we collectively evolve our defenses, and the good news is that many security practitioners are already starting to adapt. Teams are using frameworks like MITRE ATT&CK to map attack chains and are deploying AI for predictive modeling and anomaly detection, as sketched below. Additionally, defenders need to invest in AI-powered threat hunting and hyper-automated incident response, and potentially rethink their security architectures.
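As a minimal sketch of what AI-driven anomaly detection can look like in practice, the following Python example trains an isolation forest on simulated “normal” login behavior and flags outlier sessions. The features, values, and contamination setting are illustrative assumptions rather than a reference implementation.

```python
# Minimal anomaly detection sketch: fit an isolation forest on numeric
# features extracted from authentication logs, then flag outlier sessions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behavior: business-hours logins, modest transfer sizes.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour (roughly 9-17)
    rng.normal(50, 15, 500),  # MB transferred per session
    rng.poisson(0.2, 500),    # failed attempts before success
])

# A few suspicious sessions: off-hours bulk transfer, brute-force pattern.
suspicious = np.array([
    [3.0, 900.0, 0],   # 3 a.m. login moving ~900 MB
    [2.5, 40.0, 12],   # repeated failures before success
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
for session, label in zip(suspicious, model.predict(suspicious)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"session {session} -> {verdict}")
```

The value here isn't the specific model; it's that a learned baseline of normal behavior lets defenders surface unusual sessions automatically instead of scaling human review.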
Let’s not forget that AI gives cybercriminals a new level of agility that is difficult for security practitioners to match. To shrink this divide, security leaders need to consider how bureaucracy or siloed responsibilities may be hindering their defense strategies and adjust accordingly. Malicious actors are already using AI to accelerate the attack lifecycle, and we need to be able to defend against their efforts without always having to scale human involvement.
Beyond making strategic and tactical adjustments to our defenses, public-private partnerships are equally critical to our collective success. These efforts must also inform policy, driving the proactive development of new frameworks and standardized norms about AI use and misuse that are accepted and adhered to around the world.
AI will continue to impact every aspect of cybersecurity. No single organization, regardless of resources or expertise, can successfully navigate this shift alone. Success will depend not just on technology, but on cooperation, flexibility, and our ability to adapt to a changing reality.