For better or worse, U.S. police used facial recognition 1 million times

According to a recent BBC exclusive, American police have used facial recognition software such as Clearview AI's nearly one million times.
Christopher McFadden
Male officer works on a computer with surveillance CCTV video


Facial recognition company Clearview AI has conducted nearly one million searches for the US police and has amassed a database of 30 billion images taken from social media platforms, according to founder and CEO Hoan Ton-That in an interview with the BBC. Clearview allows law enforcement customers to upload a photo of a face to find matches in its database, which it then links to where those images appear online.

It is considered one of the world's most powerful and accurate facial recognition companies. Although the American Civil Liberties Union took Clearview to court in Illinois for breaking privacy laws, the company can still sell its services to police forces. Critics argue that the software puts everyone in a "perpetual police lineup," a claim that police in Miami reject. Clearview is banned in several US cities, including San Francisco, Portland, and Seattle.

The BBC says the company has been fined multiple times in Australia and Europe for breaking privacy laws. There have also been cases of mistaken identity, in which police relying on facial recognition arrested or charged the wrong person. Civil rights advocates want independent experts to review the algorithm, and want police forces that use Clearview to disclose when they do so. While Clearview points to research showing a near 100 percent accuracy rate, those figures are often based on mugshots.

But it is important to note that the accuracy of Clearview depends on the quality of the image fed into it. Kaitlin Jackson, a defense attorney in New York who works to stop the police from using facial recognition, says that the idea that Clearview is "incredibly accurate" is "wishful thinking."

That said, there are times when Clearview has also helped people prove their innocence. In one case, a defendant, Mr. Conlyn, was charged with vehicular homicide after a crash in which the driver died; a passerby had pulled him from the wreckage but left without making a statement. Police suspected Mr. Conlyn of driving, but he claimed to have been the passenger. His lawyers used Clearview to identify the passerby from police body cam footage, and the AI system found him in three to five seconds.

The witness subsequently stated that Mr. Conlyn had been the passenger, and charges were dropped. While some believe Clearview has proven effective in cases like this, others argue that the price is too high for civil liberties and rights.

However, Mr. Ton-That does not want to testify about Clearview's accuracy in court, stating, "We don't want to be testifying about the algorithm's accuracy... because the investigators are using other methods to verify it." He also believes prosecutors and defense attorneys should have equal access to the technology.

"Clearview is a private company that makes face prints of people based on their photos online without their consent," Matthew Guariglia of the Electronic Frontier Foundation told the BBC. "It's a huge problem for civil liberties and civil rights, and it needs to be banned," he argues.
