The mediation of this effect by the oversimplification mentality, unfortunately, suggests that more is required. Specifically, discussion of the foundational functionality of AI systems must be tied to as diverse a range of outcomes as possible, in order to convey the dynamism of the technology.
AI education and training must emphasize the variability of outcomes based on social, political, commercial, and security decision inputs. Cybersecurity employees must be guided as much as possible toward understanding path-dependent effects arising from variables such as differences in the data used for training, bias in the interfaces used to consume and annotate incoming information, and more.
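The path dependence described above can be made concrete with a deliberately minimal sketch. The data, team names, and keyword-matching "classifier" below are all hypothetical illustrations, not any real system: two teams curate different (equally plausible) training sets, and the resulting models disagree on the very same alert.

```python
# Toy illustration of path-dependent outcomes (hypothetical data and
# classifier): the same alert is scored differently depending on which
# training sample a model was built from.

def train(examples):
    """Collect every token that appears in alerts labeled malicious."""
    bad_tokens = set()
    for text, label in examples:
        if label == "malicious":
            bad_tokens.update(text.lower().split())
    return bad_tokens

def classify(model, alert):
    """Flag the alert if any of its tokens appeared in malicious training data."""
    return "malicious" if model & set(alert.lower().split()) else "benign"

# Two hypothetical teams curate different training sets.
team_a = [("powershell encoded payload", "malicious"),
          ("scheduled backup completed", "benign")]
team_b = [("ssh brute force attempt", "malicious"),
          ("powershell update script", "benign")]

model_a = train(team_a)
model_b = train(team_b)

alert = "powershell script executed"
print(classify(model_a, alert))  # model A flags the alert: malicious
print(classify(model_b, alert))  # model B does not: benign
```

Even with identical code, the two models diverge solely because of upstream data-curation decisions, which is precisely the kind of variability that training programs should surface for practitioners.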
A particular opportunity for advancing this objective lies in establishing penetration-testing requirements that engage a cross-section of the workforces adopting new AI tools. In other words, new platforms or systems must be tested by representative cross-samples of the security populations that might use them, which in turn requires adopters or developers to offer accessibility-testing options to users at every skill level, including the least experienced.