- Facial recognition tools trained on light-skinned subjects can struggle to recognise ‘other race’ faces.
- A study found algorithms developed in East Asia were more accurate at identifying East Asian faces than Caucasian faces, while the reverse was true for Western algorithms.
- Feeding already biased data into an algorithm can further reinforce inequality.
By Stuart Ridley
1. Facial recognition tools tend to be trained on light-skinned subjects
How accurate are facial recognition tools? A study supported by the US Director of National Intelligence found algorithms built by lighter-skinned developers, and trained on light-skinned test subjects, are less accurate at scanning the faces of people with darker skin tones. In other research, two commercially released facial recognition programs had an error rate of more than 34% for dark-skinned women, labelling them male. Joy Buolamwini, an MIT Media Lab researcher and woman of colour, concluded: “A lack of diversity in these training sets leads to limited systems that can struggle with faces like mine.”
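The problem Buolamwini describes is easy to miss if you only look at overall accuracy. A toy calculation (the numbers below are invented for illustration, not taken from the studies) shows how computing error rates separately for each group exposes the gap:

```python
# Toy illustration (hypothetical data): overall accuracy can look fine
# while hiding large error-rate gaps between demographic groups.
from collections import defaultdict

# Each record: (group, true_label, predicted_label) -- invented for illustration
predictions = [
    ("lighter", "female", "female"), ("lighter", "male", "male"),
    ("lighter", "female", "female"), ("lighter", "male", "male"),
    ("darker", "female", "male"),    ("darker", "female", "female"),
    ("darker", "female", "male"),    ("darker", "male", "male"),
]

def error_rates_by_group(records):
    """Return the fraction of misclassified faces, computed per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(error_rates_by_group(predictions))
# {'lighter': 0.0, 'darker': 0.5} -- a 25% overall error rate hides a 50% one
```

An audit that reports only the overall figure (25% here) would miss that every error falls on one group, which is why per-group breakdowns are the standard way to surface this kind of bias.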
2. Like their human masters, machines struggle with ‘other race’ faces
Babies quickly learn to distinguish the familiar faces of their family, but their limited social circles make it harder for them to tell people of other races apart. Likewise, the country of origin of a facial recognition algorithm influences its ability to tell people from ‘other’ groups apart. Algorithms developed in East Asian countries more accurately identify East Asian faces than Caucasian faces, and the reverse is true for Western algorithms, according to a study by the National Institute of Standards and Technology.
3. Predictive policing: if you look for trouble, you’ll probably find it
Police officers have always patrolled beats where they expect crime is likely to occur. They might also heavily police people who fit a particular profile. So it’s no surprise human biases influence ‘predictive policing’ software taught to map crime hot spots by analysing historical crime data.
“It’s a vicious cycle,” John Chasnoff, program director of the American Civil Liberties Union of Missouri, told The Marshall Project, a non-profit news site. “The police say, ‘We’ve gotta send more guys to North County’ because there have been more arrests there, and then you end up with even more arrests, compounding the racial problem.”
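The cycle Chasnoff describes can be sketched in a few lines of code. This is a deliberately simplified model with invented numbers: patrols are allocated in proportion to historically recorded arrests, and new arrests scale with patrol presence rather than with actual crime:

```python
# Deliberately simplified sketch of the feedback loop (invented numbers).
# Patrols follow historically recorded arrests; recorded arrests follow
# patrol presence -- not the true crime rate, which is equal everywhere.

def simulate(recorded, true_crime_rate=0.5, total_patrols=100, rounds=10):
    """recorded: district -> cumulative recorded arrests (the 'historical data')."""
    for _ in range(rounds):
        total = sum(recorded.values())
        patrols = {d: total_patrols * n / total for d, n in recorded.items()}
        for d in recorded:
            # Both districts have the SAME true crime rate; more officers in
            # a district simply means more offences get observed and recorded.
            recorded[d] += patrols[d] * true_crime_rate
    return recorded

out = simulate({"north": 60.0, "south": 40.0})  # biased starting data
share = out["north"] / sum(out.values())
print(round(share, 2))  # stays at 0.6: the biased record sustains itself
```

Even though crime is identical in both districts, the district that started with 60% of recorded arrests keeps receiving 60% of the patrols and generating 60% of the new arrests, so the data never corrects itself toward reality.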
4. AIs not only reflect biases but further entrench them
What happens if you put biased data into an algorithm? “Algorithms often draw on historical data, which may reflect biases that are not immediately apparent,” noted an October 2018 Algorithm Assessment Report by the New Zealand government.
“There is a risk that algorithms that use biased data could further reinforce inequality.”
In 2016, journalists at ProPublica found that software used to inform criminal sentencing in Broward County, Florida, mirrored racial biases in the risk scores it assigned to predict whether defendants would reoffend or progress to more violent crime.
“The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate as white defendants. White defendants were mislabelled as low risk more often than black defendants,” ProPublica reported.
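ProPublica’s headline finding is a comparison of false positive rates: among defendants who did not go on to reoffend, what fraction were nonetheless flagged as high risk? A toy computation (invented numbers, not ProPublica’s data) shows the idea:

```python
# Toy false-positive-rate comparison (invented numbers, not ProPublica's data).
# A "false positive" here: flagged high risk but did not actually reoffend.

def false_positive_rate(records, group):
    """records: list of (group, flagged_high_risk, reoffended) tuples."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", False, False), ("black", True, True),
    ("white", True, False), ("white", False, False), ("white", False, False),
    ("white", False, False), ("white", True, True),
]

print(false_positive_rate(records, "black"))  # 0.5  (2 of 4 non-reoffenders flagged)
print(false_positive_rate(records, "white"))  # 0.25 (1 of 4 non-reoffenders flagged)
```

In this made-up sample, non-reoffending defendants in one group are wrongly flagged at twice the rate of the other, which is the shape of the disparity ProPublica reported.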
Northpointe, which developed the software, told ProPublica the basics of its future-crime formula included factors such as education levels and whether a defendant had a job.