The world will likely soon witness malware campaigns fully augmented and shaped by artificial intelligence (AI). Citing an arms race logic, cybersecurity luminary Mikko Hyppönen said in a recent CSO article that the use of AI to enhance all aspects of disruptive cyber operations is virtually inevitable. As attackers have begun to use large language models (LLMs), deepfakes, and machine learning tools to craft sophisticated attacks at speed, cyber defenders have also turned to AI to keep up. Faced with quickening reaction times and automated obstacles to interference, the obvious response for would-be attackers is to double down on AI.
What does this near-term transformation of AI-centered cyber campaigns mean for national security and cybersecurity planners? Hyppönen highlighted human-side challenges of spiraling AI usage that stem from the black box problem. As malicious cyber and information operations (IO) become more potent, defenders face a challenge that attackers don’t: letting a deep learning model loose as a defensive guardian will often produce actions that are difficult to explain. That opacity complicates client coordination, defensive analytics, and more, and it makes the threat of bigger, smarter, faster AI-augmented influence campaigns feel all the more ominous.
Such techno-logistical developments stemming from AI-driven and -triggered influence activities are valid concerns. That said, novel information activities along these lines will also likely augur novel socio-psychological, strategic, and reputational risks for Western industry and public-sector planners. This is particularly true with regard to malign influence activities. After all, while it’s tempting to think about the AI-ification of IO purely in terms of heightened potency (i.e., the future will see “bigger, smarter, faster” versions of the interference we’re already so familiar with), history suggests that insecurity will also be driven by how society reacts to a development this unprecedented. Fortunately, research into the psychology and strategy of novel technological insecurities offers insights into what we might expect.
The human impact of AI: Caring less and accepting less security
Ethnographic research into malign influence activities, artificial intelligence systems, and cyber threats provides a good baseline for what to expect from the augmentation of IO with machine-learning techniques. In particular, the past four years have seen scientists walk back a foundational assumption about how individuals respond to novel threats. For nearly three decades, pundits, experts, and policymakers alike have described forthcoming digital threats as having unique disruptive potential for democratic societies, a view often called the “cyber doom” hypothesis. The idea is that the general public recurrently encounters unprecedented security scenarios (e.g., the downing of electrical grids in Ukraine in 2015) and then panics. On this account, every augmentation of technological insecurity opens space for dread, anxiety, and irrational response far beyond what we might see with more conventional threats.
Recent scholarship tells us that the general public does respond this way to truly novel threats like AI-augmented IO, but only for a short time. Familiarity with digital technology in either a personal or professional setting – now extremely commonplace – allows people to rationalize disruptive threats after just a small amount of exposure. This means AI-augmented influence activities are unlikely to turn society on its head simply by dint of their sudden appearance.
However, it would be disingenuous to suggest that the average citizen and consumer in advanced economies is well placed to discount the potential for disruption that the AI-ification of influence activities might bring. Research suggests a troubling set of psychological reactions to AI based on both exposure to AI systems and trust in information technologies. While those with limited exposure to AI trust it less (in line with cyber doom research findings), it takes an enormous amount of familiarity and knowledge to think objectively about how the technology works and is being used. In something resembling the Dunning-Kruger effect, the vast majority of people between these extremes are prone to an automation bias that manifests as overconfidence in the potency of AI for all manner of activities.