It looks like racism and bias are turning into biased code, making a neutral thing like software a weapon of white supremacy.
New research out of MIT’s Media Lab underscores what other experts have reported, or at least suspected, before: facial recognition technology is subject to biases that depend on the data sets it is given and the conditions under which its algorithms are created.
Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces of politicians, selected based on their countries’ rankings for gender parity (in other words, having a significant number of women in public office). Buolamwini then tested the accuracy of three facial recognition systems: those made by Microsoft, IBM, and Megvii of China. The results, originally reported in The New York Times, showed that the accuracy of gender identification depended on a person’s skin color.
Gender was misidentified in less than one percent of lighter-skinned males; in up to seven percent of lighter-skinned females; in up to 12 percent of darker-skinned males; and in up to 35 percent of darker-skinned females.
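To make the arithmetic behind those figures concrete, here is a minimal sketch of how per-group misclassification rates like these could be tallied. The records, field names, and entries below are hypothetical illustrations, not Buolamwini’s actual data or code.

```python
from collections import defaultdict

# Hypothetical benchmark records: each face is annotated with its true gender
# and a binary skin-tone group, plus the gender label a classifier returned.
# (These field names and entries are illustrative, not from the actual study.)
records = [
    {"skin": "lighter", "gender": "male",   "predicted": "male"},
    {"skin": "lighter", "gender": "female", "predicted": "female"},
    {"skin": "darker",  "gender": "female", "predicted": "male"},
    {"skin": "darker",  "gender": "male",   "predicted": "male"},
    # ... one record per face in the benchmark ...
]

def misclassification_rates(records):
    """Return the share of gender misidentifications for each (skin, gender) group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = (r["skin"], r["gender"])
        totals[group] += 1
        if r["predicted"] != r["gender"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for (skin, gender), rate in sorted(misclassification_rates(records).items()):
    print(f"{skin}-skinned {gender}s: {rate:.0%} misidentified")
```

Run over a full benchmark, a breakdown like this is what surfaces the gap between a sub-one-percent error rate for one group and a 35 percent rate for another.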