Police Use of Facial Recognition Is Accepted by British Court

LONDON — In one of the first lawsuits to address the use of live facial recognition technology by governments, a British court ruled on Wednesday that police use of the systems is lawful and does not violate privacy or human rights.

The case has been closely watched by law enforcement agencies, privacy groups and government officials because there is little legal precedent concerning the use of cameras in public spaces that scan people’s faces in real time and attempt to identify them from photo databases of criminal suspects. While the technology has advanced quickly, with many companies building systems that can be used by police departments, laws and regulations have been slower to develop.

The High Court dismissed the case brought by Ed Bridges, a resident of Cardiff, Wales, who said his rights were violated by the use of facial recognition by the South Wales Police. Mr. Bridges claimed that he had been recorded without permission on at least two occasions — once while shopping and again while attending a political rally.

The case centers on the use of systems that scan human faces in real time. That is different from technology used by the authorities to find matches from past images, driver’s license photographs or videos. Companies including Apple, Facebook and Google use the technology to identify people in pictures.

In Britain, the technology has been used by the South Wales Police and the Metropolitan Police Service in London. In the United States, at least five large police departments — including those in Chicago, Dallas and Los Angeles — have claimed to have run real-time facial recognition, purchased technology that can do so, or expressed an interest in buying it, according to a Georgetown University study from 2016.

The South Wales Police often use live facial recognition at large events, such as the national air show and rugby matches, according to police records. The cameras scan faces in a crowd, comparing the images with a police database of wanted individuals. When the system finds a match, it sends an alert to officers in a command center, who then contact other officers to stop the person.

The court said police had “sufficient legal controls” in place to prevent improper use of the technology, including the deletion of data unless it concerned a person identified from the watch list.

In praising the decision, Alun Michael, the South Wales police and crime commissioner, said the department had turned to new technologies to help make up for budget cuts.

“Preventing crime and supporting safe, confident, resilient communities is the first responsibility of the police, but this has become increasingly difficult” in the face of budget cuts, Mr. Michael said in a statement. “That has made it essential to use innovation and embrace technology like facial recognition if we are to have any hope of maintaining police numbers in our local communities.”

Mr. Bridges, whose case was supported by the privacy group Liberty, vowed to appeal the decision. Liberty is seeking an outright ban on the use of the technology.

“This sinister technology undermines our privacy, and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance,” Mr. Bridges said in a statement.
