AI adoption is accelerating rapidly, and security is racing to keep up with the changes it introduces.
While AI can transform employee productivity and workplace efficiency, it also amplifies existing data security challenges (which have often been deferred or neglected) and introduces some new ones.
Generative AI applications aren’t like traditional ‘deterministic’ applications that do the exact same thing every time you run them. Asking an image generation model to repeatedly “draw a picture of a kitten in a security guard uniform” is unlikely to produce the exact same picture twice (though the results will all be similar).
This dynamism creates new value for businesses. However, it also introduces new types of security risk and makes existing static security controls less effective against this new generation of AI applications.
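To make this concrete, here is a minimal sketch of that non-deterministic behavior using a text model. It assumes the `openai` Python package, an API key in the environment, and an illustrative model name; none of these specifics come from this article:

```python
# Minimal sketch: the same prompt run twice can return different text,
# unlike a deterministic function. Assumes the `openai` Python package
# and OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "Describe a kitten in a security guard uniform in one sentence."

responses = [
    client.chat.completions.create(
        model="gpt-4o-mini",            # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,                # non-zero temperature enables sampling
    ).choices[0].message.content
    for _ in range(2)
]

# The two responses are usually similar in meaning but rarely identical.
print(responses[0])
print(responses[1])
print("Identical outputs:", responses[0] == responses[1])
```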
This article will explore how organizations can leverage the symbiotic relationship between Zero Trust and AI to mitigate evolving security risks while still responsibly reaping the benefits of AI-powered innovation.
Generative AI-driven shifts
As more organizations work with Generative AI and test its boundaries, we’ve uncovered these key learnings:
- AI amplifies existing data governance challenges and increases the value of data: Generative AI raises the priority of data security and governance needs, which have often been deferred or neglected in favor of other priorities like endpoint, identity, network, and security operations tooling. In particular, organizations often find that they have not properly classified, identified, or tagged their data. This makes it hard to deploy Generative AI solutions because there is no reliable way to avoid accidentally training Generative AI systems on sensitive or confidential data.
At the same time, Generative AI also increases the value of data because of its ability to generate valuable insights from complex data sets. While this is great for organizations seeking to operationalize and monetize their data, it also increases the risk of cyber attackers targeting data for exploitation.
- Designing, implementing, and securing AI is a shared responsibility model: Much like the cloud, Generative AI operates under a shared responsibility model between AI providers and AI users. Depending on the application’s deployment model, either the organization, the AI provider, or even the organization’s customers may be responsible for securing the AI platform, the application, and its usage.
- You must build guardrails for Generative AI models: Generative AI models by themselves often have few built-in controls, so you must carefully consider what data these models are trained on and can access. You must also carefully plan application controls to drive secure and reliable outcomes. For example, Microsoft Copilot implements application controls that respect your organization’s identity model and permissions, inherit your sensitivity labels, apply your retention policies, support auditing of interactions, and follow your administrative settings (see the sketch after this list).
- Generative AI has amazing potential, but capabilities and security controls are still in early days: We should be optimistic about Generative AI’s potential but also realistic about what the technology can do today. With today’s Generative AI chat models, users can leverage natural language interfaces to accelerate productivity and accomplish many advanced tasks without needing specific skills or training. This doesn’t mean that AI can do everything a human expert can do, or that it will do those tasks perfectly.
In Microsoft’s experience with launching and scaling Security Copilot across customer environments, we’ve found that Generative AI excels at specific Security Operations (SecOps/SOC) tasks like guiding incident responders, writing up incident status/reports, analyzing incident impacts, automating tasks, and reverse engineering attacker scripts.
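As a concrete illustration of the guardrail point above, the sketch below shows one way an application layer might filter retrieved documents by a caller’s permissions and sensitivity labels before they are used to ground a prompt. The data model, label names, and functions are hypothetical; they are not drawn from Copilot or any other specific product:

```python
# Hypothetical guardrail sketch: filter documents by the caller's permissions
# and by sensitivity label before they are passed to a generative model.
# The data model and labels below are illustrative, not from a real product.
from dataclasses import dataclass, field

ALLOWED_LABELS = {"Public", "General"}   # labels the assistant may ground on

@dataclass
class Document:
    doc_id: str
    content: str
    sensitivity_label: str                       # e.g. "Public", "Confidential"
    allowed_users: set[str] = field(default_factory=set)

def filter_for_grounding(user_id: str, documents: list[Document]) -> list[Document]:
    """Return only documents the user can read AND that carry an allowed label."""
    return [
        d for d in documents
        if user_id in d.allowed_users and d.sensitivity_label in ALLOWED_LABELS
    ]

# Example usage
docs = [
    Document("hr-001", "Salary bands for FY25", "Confidential", {"alice"}),
    Document("kb-042", "How to reset your VPN token", "General", {"alice", "bob"}),
]

grounding_set = filter_for_grounding("alice", docs)
# Only kb-042 passes: alice can read hr-001, but its label is not allowed.
prompt_context = "\n\n".join(d.content for d in grounding_set)
print(prompt_context)
```

The key design choice is that the filter runs in the application layer, outside the model itself, so the model only ever sees content the caller is already entitled to read.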
Ultimately, these learnings underscore how AI introduces both powerful opportunities and challenges that have to be managed. It’s critical to adopt a thoughtful approach to security strategy and controls to ensure organizations can safely leverage the transformative power of AI.
How Zero Trust addresses AI challenges
Once organizations realize that a network security perimeter cannot protect their assets against today’s attackers, Zero Trust provides a principle-driven approach that guides them through the complex security challenges that follow. Zero Trust standards and guidance have been published by NIST, The Open Group, Microsoft, and others to guide organizations on this journey.
This approach works due to the symbiotic relationship between Zero Trust and AI. Zero Trust secures AI applications and their underlying data using an asset-centric and data-centric approach. Meanwhile, AI accelerates Zero Trust security modernization by enhancing security automation, offering deep insights, providing on-demand expertise, speeding up human learning, and more.
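To show what an asset-centric and data-centric approach can look like in practice, here is a hypothetical sketch of an explicit-verification check an AI application might run before reading a data source. The signals and rules are illustrative assumptions, not a prescribed or product-specific policy:

```python
# Hypothetical Zero Trust-style access decision for an AI application.
# Every request is verified explicitly against identity, device, and data
# sensitivity signals; the specific signals and rules here are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated_with_mfa: bool
    device_compliant: bool
    data_sensitivity: str           # e.g. "Public", "General", "Confidential"
    user_cleared_for: set[str]      # sensitivity levels this user may access

def evaluate_ai_data_access(req: AccessRequest) -> bool:
    """Allow the AI app to read the data only if every check passes (least privilege)."""
    if not req.user_authenticated_with_mfa:
        return False                            # verify the identity explicitly
    if not req.device_compliant:
        return False                            # verify the device posture
    if req.data_sensitivity not in req.user_cleared_for:
        return False                            # enforce data-centric scoping
    return True

request = AccessRequest(
    user_authenticated_with_mfa=True,
    device_compliant=True,
    data_sensitivity="Confidential",
    user_cleared_for={"Public", "General"},
)
print(evaluate_ai_data_access(request))  # False: data sensitivity exceeds clearance
```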
This relationship between AI and Zero Trust is not just about enhancing security; it’s about enabling innovation and agility in a rapidly evolving digital landscape. Security leaders and teams must provide calm, critical thinking to balance the exuberance of AI projects. However, it’s equally critical to collaboratively find a way to safely say ‘yes’ to these business initiatives.
To learn more about how you can create an agile security approach that dynamically adapts to changing threats and protects people, devices, apps, and data wherever they’re located, visit Microsoft’s Zero Trust page.