It’s simple: human-designed machines will have human biases.
So this new study from Georgia Tech should come as no surprise. Researchers found* that recognition systems detected dark-skinned pedestrians about 5 percent less often than light-skinned pedestrians. The disparity persisted even when factors like time of day and objects partially blocking pedestrians from the camera's view were controlled for.
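To make that comparison concrete, here is a minimal sketch of how a per-group detection gap might be computed. The counts are invented for illustration; this is not the study's data or evaluation pipeline.

```python
# Hypothetical illustration: compute the detection-rate gap between
# skin-tone groups from per-pedestrian detection outcomes.
# The numbers below are made up for demonstration purposes.

results = {
    # group: (pedestrians detected, total labeled pedestrians)
    "light-skinned": (930, 1000),
    "dark-skinned": (880, 1000),
}

rates = {group: hit / total for group, (hit, total) in results.items()}
for group, rate in rates.items():
    print(f"{group}: detected {rate:.1%} of labeled pedestrians")

gap = rates["light-skinned"] - rates["dark-skinned"]
print(f"Gap: {gap:.1%}")  # ~5 percentage points in this toy example
```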
Researchers Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern suggest one cause of the gap: the example images used to teach recognition software what humans look like contain light-skinned people about 3.5 times more frequently than dark-skinned people.
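One way a team might check a training set for this kind of imbalance is simply to count how often each group appears in the annotations. The sketch below uses stand-in labels, not the researchers' dataset:

```python
from collections import Counter

# Hypothetical training annotations: one skin-tone label per pedestrian
# example. This list is a placeholder, not the study's actual data.
annotations = ["light-skinned"] * 3500 + ["dark-skinned"] * 1000

counts = Counter(annotations)
ratio = counts["light-skinned"] / counts["dark-skinned"]
print(f"Representation ratio (light:dark): {ratio:.1f}x")  # 3.5x here
```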
The finding that dark-skinned people are detected less reliably than light-skinned people is particularly worrisome for autonomous vehicles, which use this kind of recognition software to avoid hitting pedestrians.
To define dark-skinned and light-skinned people, the researchers used the Fitzpatrick scale, which classifies skin tones by how likely they are to burn when exposed to UV rays.
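For readers curious what such a grouping might look like in code, here is a minimal sketch. The helper function is ours, and the split assumes the common convention of treating Fitzpatrick types I–III as lighter and types IV–VI as darker:

```python
# Hypothetical helper: map a Fitzpatrick skin type (1-6) to a binary
# skin-tone group, assuming types I-III count as light-skinned and
# types IV-VI as dark-skinned.

def skin_tone_group(fitzpatrick_type: int) -> str:
    if not 1 <= fitzpatrick_type <= 6:
        raise ValueError("Fitzpatrick types range from 1 (I) to 6 (VI)")
    return "light-skinned" if fitzpatrick_type <= 3 else "dark-skinned"

print(skin_tone_group(2))  # light-skinned
print(skin_tone_group(5))  # dark-skinned
```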
One of the best ways to eliminate racism in this kind of technology is to make sure that programming teams are racially diverse, reports Sigal Samuel for Vox.
*This study has yet to be peer-reviewed.
Photo of a busy Toronto street by Ricky Thakar on Flickr’s Creative Commons