Naturally, what needs to be protected depends on the line of business. Whereas Planview is concerned with protecting IP, Wall Street English is mindful of cultural sensitivities. They adjust their course content to avoid offending students, and their AI tools need to do the same. “Just as we ringfence our online classes with trained teachers to guarantee nothing inappropriate is said, we must ensure that AI avoids expressing unintended opinions or inappropriate content,” says Hortal. “We employ techniques, such as input sanitization, contextual tracking, and content filtering, to mitigate risks and vulnerabilities. All of these things are part of our AI governance.”
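The guardrails Hortal describes, sanitizing user input and filtering model output, can be sketched in a few lines of code. The sketch below is purely illustrative and assumes a simple regex blocklist; Wall Street English's actual implementation is not described in detail, and a production system would typically use a maintained moderation model or service rather than static patterns.

```python
import re

# Illustrative blocklist only (an assumption for this sketch); real systems
# maintain policy lists or call a dedicated content-moderation service.
BLOCKED_PATTERNS = [r"(?i)\bpolitics\b", r"(?i)\breligion\b"]


def sanitize_input(text: str) -> str:
    """Input sanitization: strip control characters and obvious
    prompt-injection phrasing before the text reaches the model."""
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    text = re.sub(r"(?i)ignore (all )?previous instructions", "[removed]", text)
    return text.strip()


def violates_policy(text: str) -> bool:
    """Content filtering: flag text that matches any blocked pattern."""
    return any(re.search(p, text) for p in BLOCKED_PATTERNS)


def guarded_reply(user_input: str, model_fn) -> str:
    """Wrap a model call (model_fn is any callable taking a prompt string)
    with input-side and output-side checks."""
    clean = sanitize_input(user_input)
    if violates_policy(clean):
        return "I can't discuss that topic, but I'm happy to help with your lesson."
    reply = model_fn(clean)
    if violates_policy(reply):
        # Filter the model's own output as well, in case it strays off-policy.
        return "Let's get back to the lesson."
    return reply
```

The same check runs on both the user's input and the model's reply, which is the point of output-side filtering: the model itself can produce off-policy content even from an innocuous prompt.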
Whatever you’re protecting, the rules shouldn’t stop at the boundaries of your own organization. The same protections should be guaranteed when work is outsourced. “Some of the most sophisticated companies in the world have an amazing AI governance structure internally,” says Matt Kunkel, CEO of LogicGate, a software company that provides a holistic governance, risk, and compliance (GRC) platform. “But then they ship all their data over to third parties who use that data with their large language models. If your third parties aren’t in agreement with your AI usage policies, then at that point, you lose control of AI governance.”
Start now
The most common advice from IT leaders who have already implemented AI governance is to start now. It can take months from the time IT leadership starts working on AI governance to the time the rules are communicated across the organization. Case in point: it took Planview about six months from when they began thinking through their policy to when they made it available to the whole company in their learning management system.