“The trouble with too many people is they believe the realm of truth always lies within their vision,” goes a line often attributed to Abraham Lincoln. The problem is that not all of our belief systems are grounded in truth. Unsurprisingly, those untruths find their way into the artificial intelligence (AI) solutions we create.
We’re all familiar with social, cultural, and gender bias, and Amazon has become the poster child for the last of these. Not long ago, the company abandoned its AI-driven recruiting tool for failing to rank candidates for technical positions in a gender-neutral way. In other words, because Amazon had historically hired male developers, male candidates rose to the top while women were overlooked.
When AI works as it should, it can be transformative, delivering unparalleled efficiency and objectivity. But beyond the big “B” biases, which are well documented and addressable, lies a subtler yet equally concerning issue: sycophancy bias. Often overlooked, it has found its way into AI systems, including large language models (LLMs), compromising the integrity and fairness of their results.