Business leaders appear to have lost control over the deployment, oversight, and purpose of generative AI within their organizations, new research from Kaspersky suggests. That's despite just 28% of organizations expressly permitting the use of generative AI, and even fewer (10%) having a formal generative AI use policy in place, according to separate findings from ISACA.
It’s perhaps no surprise, then, that a recent survey by Add People discovered that one in three UK workers are using generative AI tools without their boss’s knowledge.
Executives admit “deep concern” about the security risks of generative AI takeover
Almost all (95%) of the 1,863 UK and EU C-level executives surveyed by Kaspersky believe generative AI is regularly used by employees, with over half (53%) stating that it is now driving certain business departments. The extent of the takeover is such that a majority of executives (59%) express deep concern about potential security risks that could jeopardize sensitive company information and result in the total loss of control of core business functions.
However, just 22% of respondents have discussed establishing rules and regulations to monitor the use of generative AI, despite 91% stating they need more understanding of how internal data is being used by employees to protect against critical security risks or data leaks, Kaspersky found.
Organizations lack sufficient generative AI policies, risk management
ISACA’s generative AI survey of 2,300 global digital trust professionals found that while the use of generative AI is ramping up, most organizations do not have sufficient policies or effective risk management in place. The survey indicated that over 40% of employees are using generative AI regardless, a figure that is likely much higher given that a further 35% of respondents weren’t sure.
Employees are using generative AI in several ways, including to create written content (65%), increase productivity (44%), automate repetitive tasks (32%), provide customer service (29%), and improve decision-making (27%), according to ISACA.