Reliability and predictability
The way we interact with computers today is predictable. For instance, when we build software systems, an engineer sits and writes code, telling the computer exactly what to do, step by step. With an agentic AI process, we do not provide step-by-step instructions. Rather, we lead with the outcome we want to achieve, and the agent determines how to reach this goal. The software agent has a degree of autonomy, which means there can be some randomness in the outputs.
We saw a similar issue with ChatGPT and other LLM-based generative AI systems when they first debuted. But in the last two years, we’ve seen considerable improvements in the consistency of generative AI outputs, thanks to fine-tuning, human feedback loops, and consistent efforts to train and refine these models. We’ll need to put a similar level of effort into minimizing the randomness of agentic AI systems to make them more predictable and reliable.
Data privacy and security
Some companies are hesitant to use agentic AI due to privacy and security concerns, which are similar to those with generative AI but can be even more concerning. For example, when a user engages with a large language model, every bit of information given to the model becomes embedded in that model. There’s no way to go back and ask it to “forget” that information. Some types of security attack, such as prompt injection, take advantage of this by trying to get the model to leak proprietary information. Because software agents have access to many different systems with a high level of autonomy, there is an increased risk that it could expose private data from more sources.