As organizations of all sizes and sectors race to develop, deploy, or buy AI and LLM-based products and services, what should they be thinking about from a regulatory perspective? And if you’re a software developer, what do you need to know?
The regulatory approaches of the EU and US have, between them, firmed up some of the more confusing areas. In the US, we’ve seen a new requirement that all federal agencies appoint a chief AI officer and submit annual reports identifying all AI systems in use, the risks associated with them, and how they plan to mitigate those risks. This echoes the EU’s requirements for risk assessment, testing, and oversight before deployment in high-risk cases.
Both have adopted a risk-based approach, with the EU specifically identifying the importance of “Security by design and by default” for “High-risk AI systems.” In the US, CISA states that “Software must be secure by design, and Artificial Intelligence is no exception.”