AI is no better than the data it’s trained on. Biased selection and human preferences can propagate into a model and skew the results it produces.
In the US, authorities are now enforcing existing laws against discrimination caused by prejudicial AI, and the Consumer Financial Protection Bureau is currently investigating housing discrimination stemming from biased lending and home-valuation algorithms.
“There is no exception in our nation’s civil rights laws for new technologies and artificial intelligence that engage in unlawful discrimination,” its director, Rohit Chopra, said recently on CNBC.
And many CIOs and other senior managers are aware of the problem, according to an international survey commissioned by Swedish software supplier Progress. In the survey, 56% of Swedish managers said there definitely or probably is discriminatory data in their operations today, while 62% consider it likely that such data will become a bigger problem for their business as AI and ML become more widely used.
Elisabeth Stjernstoft, CIO at Swedish energy giant Ellevio, agrees there’s a risk of using biased data that isn’t representative of the customer group or population being studied.
“It can, of course, affect AI’s ability to make accurate predictions,” she says. “We have to look at the data on which the model is trained, but also at how the algorithms are designed and how features are selected. The bottom line is the risk is there, so we need to monitor the models and correct them if necessary.”
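The kind of monitoring Stjernstoft describes can start with something as simple as comparing a model’s outcome rates across demographic groups. A minimal sketch in Python (the loan-approval data and group labels are hypothetical, and the 0.8 threshold is the informal “four-fifths rule” heuristic, not a legal standard):

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive predictions (e.g., loan approvals) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical predictions (1 = approved) and group membership
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)   # {"A": 0.6, "B": 0.4}
ratio = disparate_impact(rates)          # 0.4 / 0.6 ≈ 0.67
# The four-fifths heuristic flags ratios below 0.8 for closer review
flagged = ratio < 0.8
```

In practice, teams run checks like this continuously against production predictions, so that drift toward a skewed outcome distribution is caught and the model corrected, as Stjernstoft suggests.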