The irony of all this is that generative AI motivates sophisticated threat actors to double down on the established benefits of traditional techniques for exploitation, intrusion, and disruption. After all, those established approaches come with known risk-payoff dynamics, making them the only way that serious offensive cyber actors can avoid taking on the additional uncertainty tied to LLM usage.
CISOs: ignore the alarmism and live in the real world!
Amidst so much alarmist chatter about the potential threat of generative AI, it is critical that CISOs ditch the hype and take a realistic view of how the new technology interacts with known conditions of the attacker-defender relationship. Generative AI is unlikely to deliver an offensive cybersecurity revolution; it is far more likely to drive a gradual evolution of tools that lets both attackers and defenders alter the minor details of their practice.
Naturally, CISOs need to recognize that this dynamic applies to the defender almost as much as it does to the attacker. Routine automation helps the defender more than it does the attacker; after all, the defender knows exactly what the battlespace (i.e., the networks, personnel, etc.) will look like in any hypothetical future intrusion event. But attempts to use LLMs for active defense or other tasks that require adaptive, creative inputs are likely to suffer from the same unpredictability as the attacker's AI-augmented compromise activities.