CISOs may be intimately familiar with the dozens of forms of authentication for privileged areas of their environments, but a very different problem is arising in areas where authentication has traditionally been neither needed nor desired.
Domains such as sales call centers or public-facing sites are fast becoming key battlefields over personhood, where AI bots and humans commingle and CISOs struggle to reliably and quickly differentiate one from the other.
“Bad bots have become more sophisticated, with attackers analyzing defenses and sharing workarounds in marketplaces and message boards. They’ve also become more accessible, with bot services available to anyone who can pay for them,” Forrester researchers wrote in the firm’s recent Forrester Wave: Bot Management Software, Q3 2024. “Bots may be central to a malicious application attack or attempted fraud, such as a credential-stuffing attack, or they may play a supporting role in a larger application attack, performing scraping or web recon to help target follow-on activities.”
Forrester estimates that 30% of today’s Internet traffic comes from bad bots.
The bot problem goes beyond the cost issue of fake network traffic, however. For example, bot DDoS attacks can be launched against a sales call center, clogging lines with fake customers in an attempt to frustrate real customers into calling competitors instead. Or bots could be used to swarm text-based customer service applications, generating the surreal scenario of your service bots being tied up in circuitous conversations with an attacker’s bots.
Credentialing personhood
What makes these AI-powered bots so dangerous is that they can be scaled almost infinitely for a relatively low cost. That means an attacker can easily overwhelm even the world’s largest call centers, which often do not want to add the friction involved with authentication methods.
“This is a huge issue. These deepfake attacks are automated so there is no way for a human interface call center to scale up as quickly or as effectively as a server array,” says Jay Meier, SVP of North American operations at identity firm FaceTec. “This is the new DDoS attack and it will be able to easily shut down the call center.”
Meier’s use of the term deepfake is worth noting. Today’s deepfakes are typically thought of as precise imitations of a specific person, such as the CFO of a targeted enterprise, but bot attacks like these will instead imitate a generic composite person who likely doesn’t exist.
One recently publicized attempt to counter such bot attacks comes from a group of major vendors, including OpenAI and Microsoft, working with researchers from MIT, Harvard, and the University of California, Berkeley. The resulting paper outlined a system that would leverage government offices to create “personhood credentials,” addressing the fact that older bot-blocking mechanisms such as CAPTCHA have been rendered useless now that generative AI can select images containing, say, traffic signals as well as or better than humans can.
A personhood credential (PHC), the researchers argued, “empowers its holder to demonstrate to providers of digital services that they are a person without revealing anything further. Building on related concepts like proof-of-personhood and anonymous credentials, these credentials can be stored digitally on holders’ devices and verified through zero-knowledge proofs.”
In this way, the system would reveal nothing of the individual’s specific identity. But, the researchers point out, a PHC system would have to meet two fundamental requirements. First, credential limits would need to be imposed. “The issuer of a PHC gives at most one credential to an eligible person,” according to the researchers. Second, “service-specific” pseudonymity would need to be employed such that “the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.”
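To make the pseudonymity requirement concrete, here is a minimal Python sketch of the general idea: a secret held only on the holder’s device derives a different, stable pseudonym for each service, so activity cannot be linked across services. This is purely an illustration with hypothetical names; the paper’s actual proposal relies on anonymous credentials and zero-knowledge proofs rather than a simple keyed hash.

```python
# Toy illustration only (not the paper's construction): service-specific
# pseudonyms derived from a secret held on the credential holder's device.
# A real PHC system would use anonymous credentials and zero-knowledge proofs.
import hashlib
import hmac
import secrets

user_secret = secrets.token_bytes(32)  # stays on the holder's device

def service_pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a stable pseudonym for one service; without the user's secret,
    pseudonyms for different services are computationally unlinkable."""
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

print(service_pseudonym(user_secret, "callcenter.example.com"))
print(service_pseudonym(user_secret, "shop.example.net"))  # different, unlinkable value
```

The same user presents a different identifier to each service, yet each service sees one consistent pseudonym per person, which is what allows per-person limits without revealing identity.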
One author of the report, Tobin South, a senior security researcher and PhD candidate at MIT, argued that such a system is critical because “there are no tools today that can stop thousands of authentic-sounding inquiries.”
Government offices, or perhaps even retail stores, could issue personhood credentials because, as South points out, bots are growing in sophistication and “the only thing we are confident of is that they can’t physically show up somewhere.”
The challenges of personhood credentials
Although intriguing, the personhood plan has fundamental issues. First, credentials can be easily faked by gen AI systems. Second, customers may balk at spending the considerable time and effort needed to gather documents and wait in line at a government office to prove they are human, just to visit a public website or call a sales line.
Some argue that the mass creation of these credentials, “humanity cookies” in effect, would create another pivotal cybersecurity weak spot.
“What if I get control of the devices that have the humanity cookie on it?” FaceTec’s Meier asks. “The Chinese might then have a billion humanity cookies at one person’s control.”
Brian Levine, a managing director for cybersecurity at Ernst & Young, believes that, while such a system might be helpful in the short run, it likely won’t effectively protect enterprises for long.
“It’s the same cat-and-mouse game” that cybersecurity vendors have always played with attackers, Levine says. “As soon as you create software to identify a bot, the bot will change its details to trick that software.”
Is all hope lost?
Sandy Cariella, a Forrester principal analyst and lead author of the Forrester bot report, says a critical element of any bot defense program is to not delay good bots, such as legitimate search engine spiders, in the quest to block bad ones.
“The crux of any bot management system has to be that it never introduces friction for good bots and certainly not for legitimate customers. You need to pay very close attention to customer friction,” Cariella says. “If you piss off your human customers, you will not last.”
Some of the better bot defense programs today use deep learning to sniff out deceptive bot behavior. Although some question whether such programs can stop attacks — such as bot DDoS attacks — quickly enough, Cariella believes the better apps are playing a larger game. They may not halt the first wave of a bot attack, but they are generally effective at identifying attacking bots’ characteristics and stopping subsequent waves, which often happen within minutes of the first attack, she says.
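As a rough illustration of behavior-based detection (not any vendor’s actual product, and using a simple isolation forest rather than deep learning for brevity), the sketch below flags sessions whose request rate and timing look nothing like baseline human traffic. The feature names and thresholds are hypothetical.

```python
# Illustrative sketch only: flag likely bot sessions from simple behavioral
# features with an unsupervised anomaly detector. Real bot-management
# platforms use far richer signals (browser fingerprints, telemetry,
# reputation) and more sophisticated models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [requests_per_minute, avg_seconds_between_actions]
human_sessions = np.column_stack([
    rng.normal(4, 1.5, 500).clip(0.5),   # humans: a few requests per minute
    rng.normal(12, 4, 500).clip(1.0),    # humans: seconds of "think time"
])
model = IsolationForest(contamination=0.01, random_state=0).fit(human_sessions)

new_sessions = np.array([
    [3.5, 10.0],    # plausible human pacing
    [180.0, 0.3],   # machine-speed session
])
print(model.predict(new_sessions))  # 1 = looks human, -1 = anomalous (likely bot)
```

A model like this will miss a novel first wave, but once attacking sessions are characterized, the same features can be used to block the follow-on waves Cariella describes.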
“They are designed to stop the entire attack, not just the first foray. [The enterprise] is going to be able to continue doing business,” Cariella says.
CISOs must also collaborate with C-suite colleagues for a bot strategy to work, she adds.
“If you take it seriously but you are not consulting with fraud, marketing, ecommerce, and others, you do not have a unified strategy,” she says. “Therefore, you may not be solving the entire problem. You have to have the conversation across all of those stakeholders.”
Still, Cariella believes that bot defenses must be accelerated. “The speed of adaptation and new rules and new attacks with bots is a lot faster than your traditional application attacks,” she says.
Steve Zalewski, longtime CISO for Levi Strauss until 2021, when he became a cybersecurity consultant, is also concerned about how quickly bad bots can adapt to countermeasures.
Asked how well software can defend against the latest bot attacks, Zalewski replied: “Quite simply, they can’t today. The IAM infrastructure of today is just not prepared for this level of sophistication in authentication attacks hitting the help desks.”
Zalewski encourages CISOs to emphasize objectives when carefully thinking through their bot defense strategy.
“What is the bidirectional trust relationship that we want? Is it a live person on the other side of the call, versus, Is it a live person that I trust?” he asks.
Many generative AI–created bots are simply not designed to sound realistically human, Zalewski points out, referring to banking customer service bots as an example. These bots are not supposed to fool anyone into thinking they are human. But attack bots are designed to do just that.
And that’s another key point. People who are used to interacting with customer service bots may be quick to dismiss the threat because they think bots, with their perfectly articulate canned language, are easy to identify.
“But with the malicious bot attacker,” Zalewski says, “they deploy an awful lot of effort.”
Because a lot is riding on tricking you into thinking you are interacting with a human.