Twitter is looking into possible racial bias in its neural network, which causes the photo-preview algorithm to show white faces more frequently than Black faces.
Cryptography and infrastructure engineer Tony Arcieri over the weekend conducted what he called a "horrible" experiment highlighting the social network's apparent preference for white faces. When Arcieri tweeted photos of former President Barack Obama and Republican Senate Majority Leader Mitch McConnell, Twitter almost exclusively cropped the images to show McConnell. Only when the colors were inverted (making skin color a moot point) did Obama appear.
“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing,” spokesperson Liz Kelley wrote in response to Arcieri. “But it’s clear that we’ve got more analysis to do.”
Other users tested the hypothesis by reversing the order of the photos and names and changing the background color, to no avail; producer Kim Sherrell found that giving Obama a higher-contrast smile did the trick. Scientist Matt Blaze, meanwhile, noticed that different apps and platforms (web vs. mobile vs. TweetDeck) yielded different results.
“Twitter is just one example of racism manifesting in machine learning algorithms,” Arcieri tweeted, citing articles about racist technology in healthcare and policing.
Fellow social media firm Instagram recently turned its attention inward, looking more closely at how equitably its products and policies treat users. In a June blog post, CEO Adam Mosseri underlined the irony that "we're a platform that stands for elevating Black voices, but at the same time Black people are often harassed, afraid of being 'shadowbanned,' and disagree with many content takedowns."