When the Capitol Hill riot unfolded in early January 2021, many who entered the area posted images and videos of themselves on social media.
Like clockwork, hackers leveraged a security flaw in Parler's API to download nearly all of the social media platform's public content before it went offline. But, much to the hackers' surprise, a lot of that content included geolocation metadata that placed the right-wing posters at the Capitol Hill event just days earlier.
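The geolocation data in question is ordinary EXIF metadata that smartphone cameras embed in photos by default, and which Parler reportedly failed to strip from uploaded originals (mainstream platforms like Twitter and Facebook remove it on upload). As a rough illustration of how easily such coordinates can be read, here is a minimal Python sketch using the Pillow imaging library; the filename is a hypothetical stand-in, not a real file from the archive.

```python
# A minimal sketch of reading GPS coordinates from a photo's EXIF metadata
# with Pillow. "capitol_post.jpg" is a hypothetical example filename.
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

def extract_gps(path):
    """Return (latitude, longitude) in decimal degrees, or None."""
    exif = Image.open(path)._getexif() or {}  # Pillow's legacy EXIF helper
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if not gps_raw:
        return None
    # Map numeric GPS tag IDs to readable names like "GPSLatitude".
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms, ref):
        # Convert (degrees, minutes, seconds) rationals to decimal degrees,
        # negating for southern latitudes and western longitudes.
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(extract_gps("capitol_post.jpg"))  # e.g. (38.8899, -77.0091)
```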
Later, a website called Faces of the Riot published a massive array of more than 6,000 images of faces from the scene, extracted from the archived footage with face-detection software.
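Faces of the Riot has not published its actual pipeline, but pulling face crops out of video frames is a standard task. The sketch below is a hypothetical stand-in using OpenCV's bundled Haar-cascade detector; the input filename and output naming are assumptions for illustration only.

```python
# A rough sketch of extracting face crops from a video using OpenCV's
# bundled Haar-cascade detector. "clip.mp4" is a hypothetical input file;
# this is not Faces of the Riot's actual pipeline.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("clip.mp4")
count = 0
while True:
    ok, frame = video.read()
    if not ok:
        break  # end of video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect candidate face regions in the current frame and save each crop.
    for (x, y, w, h) in detector.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5):
        cv2.imwrite(f"face_{count}.jpg", frame[y:y + h, x:x + w])
        count += 1
video.release()
print(f"Extracted {count} face crops")
```

Note that detection of this kind only locates faces; identifying whose faces they are is a separate recognition step, which is precisely the line the site's creators say they did not cross.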
Regardless of the political dimension, new attitudes toward and applications of face recognition software are shedding light on the potential dangers of pitting security against privacy, and on where the technology may take us as it evolves toward a future that is increasingly hard to predict.
A world where anyone can use facial recognition
The person who ran the Faces of the Riot website told Wired that he and a co-creator were working to scrub "non-rioter" faces from the database, including police and press who were on location. The website also carries a disclaimer at the top warning visitors not to engage in vigilante investigations and encouraging them to report people they recognize to the FBI (with a link included).
"If you go on the website and you see someone you know, you might learn something about a relative," said the site creator to Wired. "Or you might be like, oh, I know this person, and then further that information to the authorities."
However, this puts ordinary citizens in a position to police and report one another to federal or local authorities, attaching potentially incriminating location details, all without the consent of the people behind the faces. It's not hard to imagine situations beyond a Capitol riot where non-official use of facial recognition tech challenges traditional ideas of privacy around location, identity, and other digitized forms of personal information. That isn't always a bad thing.
In the city of Portland, a data scientist and protester named Christopher Howell is involved in developing facial recognition systems to use on Portland police officers who aren't identified to the public, according to a report from MIT Technology Review.
This is significant because it puts facial recognition software, a powerful technology conventionally under the exclusive purview of government agencies and private firms, in the hands of citizens, and in a context where police officers themselves often face allegations of criminal behavior.
Canadian government investigates police use of facial recognition for mass surveillance
While citizens use facial recognition to police the police for alleged criminal behavior during protests, governments are already taking action against police departments over their use of the technology. Canada's privacy commissioners recently declared that Clearview AI's facial recognition amounts to mass surveillance and urged the company to delete Canadians' faces from its database.
Clearview AI scrapes photos from social media and other public sites for use by law enforcement, according to Privacy Commissioner of Canada Daniel Therrien, who called the practice "illegal" and said it engenders a system that "inflicts broad-based harm on all members of society, who find themselves continually in a police lineup."
The commissioners also released a report following a year-long, multi-agency investigation into Clearview's practices, which found that the company had collected highly sensitive biometric data without permission and "used and disclosed Canadians' personal information for inappropriate purposes."
Sweden declares police use of facial recognition 'unlawful'
In Sweden, the national data protection authority, IMY, recently fined the police authority more than $300,000 for unlawful use of Clearview's software, which violated the country's Criminal Data Act.
"IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI," read a press release, according to a Tech Crunch report. "The Police has failed to implement sufficient organizational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act."
"When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require," added the Swedish data protection authority, in the press release.
Minneapolis bans police use of facial recognition software, including Clearview AI
Also this month, the city of Minneapolis voted to ban its police department from using facial recognition software, joining a growing number of cities enacting local restrictions on this controversial surveillance technology.
That means the Minneapolis Police Department can no longer use facial recognition technology, including software from Clearview AI, which has cultivated relationships with federal law enforcement agencies, private companies, and several police departments in the U.S., among them Minneapolis's own.
Privacy advocates have raised concerns about AI-powered face recognition systems, which not only disproportionately target disadvantaged communities but also serve as a means of continually identifying entire populations, whether or not those people consent.
Face recognition tracks at-risk people amid COVID-19
During the COVID-19 crisis, companies around the world have deployed face recognition software on an unprecedented scale to identify at-risk people in busy centers of the U.S., China, the U.K., Russia, and elsewhere. Face recognition software has even evolved to identify people wearing medical masks.
An April 2020 survey of 1,255 U.S. adults found that 89% support personal privacy rights, with 65% in strong support. That contrasts sharply with the 52% of adults who, before the coronavirus crisis, believed personal privacy outweighed the added "security" of face recognition software.
The notion of visibility layers is relevant to these new and increasingly paradoxical applications of face recognition technology. For a long time, one-way surveillance of the public by governments and companies was the norm, but now "everyday people" are using advanced AI technology to serve their own interests.
Face recognition transforms notions of privacy and accountability in paradoxical ways. As the COVID-19 crisis recedes over the next few years, the dangers facial recognition poses to privacy and human rights will likely multiply. But hopefully, states of emergency will cease to be a constant feature of daily life, allowing a more nuanced rollout of AI-assisted face recognition software.