Many Facial-Recognition Systems Are Biased, Says U.S. Study

The majority of commercial facial-recognition systems exhibit bias, according to a study from a federal agency released on Thursday, raising new questions about a technology increasingly used by police departments and federal agencies to identify suspected criminals.

The systems falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces, the National Institute of Standards and Technology reported on Thursday. Among a database of photos used by law enforcement agencies in the United States, the highest error rates came in identifying Native Americans, the study found.

The technology also had more difficulty identifying women than men, and more difficulty identifying elderly people than middle-aged people.

“One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the F.B.I., Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”

The federal report confirms earlier studies from M.I.T., which found that facial-recognition systems from some large tech companies had much lower accuracy rates in identifying female and darker-skinned faces than white male faces.

“While some biometric researchers and vendors have attempted to claim algorithmic bias is not an issue or has been overcome, this study provides a comprehensive rebuttal,” Joy Buolamwini, a researcher at the M.I.T. Media Lab who led one of the facial-recognition studies, said in an email. “We must safeguard the public interest and halt the proliferation of face surveillance.”

The National Institute of Standards and Technology tested 189 facial-recognition algorithms from 99 developers, representing the majority of commercial developers. They included systems from Microsoft, biometric technology companies like Cognitec, and Megvii, an artificial intelligence company in China.

The agency did not test systems from Amazon, Apple, Facebook and Google because they did not submit their algorithms for the federal study.
