Three-quarters of global businesses are currently implementing or considering bans on ChatGPT and other generative AI applications in the workplace, with risks to data security, privacy, and corporate reputation driving decisions to act. That’s according to new research from BlackBerry, which found that 61% of companies deploying or considering generative AI bans view them as long-term or permanent. BlackBerry’s findings draw from a survey of 2,000 IT decision-makers across North America (USA and Canada), Europe (UK, France, Germany, and the Netherlands), Japan, and Australia.
The data comes a week after the publication of the OWASP Top 10 for LLMs, which details the key security and safety challenges associated with large language models (LLMs) — the technology on which many generative AI chatbots are built. It also comes as organizations face up to the reality of needing specific generative AI security policies amid the skyrocketing growth and adoption of the technology within businesses. One key question on many people’s minds is the extent to which generative AI is ushering in a new era of shadow IT.
Security concerns driving generative AI bans
The majority of the IT decision-makers surveyed recognize the opportunity for generative AI applications in the workplace to increase efficiency (55%) and innovation (52%) and to enhance creativity (51%). Even so, 83% voiced concerns that unsecured generative AI apps pose a cybersecurity threat to their corporate IT environment, driving the inclination toward complete bans, according to BlackBerry. What’s more, while 81% of respondents favor using generative AI tools for cybersecurity defense to avoid being caught flat-footed by cybercriminals, 80% believe organizations are within their rights to control the applications that employees use for business purposes.
Organizations should take a cautious yet dynamic approach to generative AI applications in the workplace, said Shishir Singh, cybersecurity CTO at BlackBerry. “Banning generative AI applications in the workplace can mean a wealth of potential business benefits are quashed. As platforms mature and regulations take effect, flexibility could be introduced into organizational policies. The key will be in having the right tools in place for visibility, monitoring and management of applications used in the workplace.”
CISOs must develop generative AI policies that tackle risk without stifling innovation
Appropriate, business-aligned security policies controlling the use of generative AI should be high on the CISO’s agenda right now. The challenge for CISOs is to develop cybersecurity policies that not only embrace and support business adoption of the technology but also effectively address risk without stifling innovation. CISOs who think they can put this off for a year or two to see how generative AI develops, hoping to retrofit an appropriate security policy once the technology is pervasive, should carefully consider what happened with shadow IT: businesses were slow off the mark, from a security policy perspective, to deal with personal technology when it began being used for corporate activities.