San Francisco is set to become the first city in the U.S. to ban police officers and other government officials from using facial recognition technology. Concerns about police using facial recognition are well-founded. Absent strong restrictions, police use of facial recognition poses a significant threat to our privacy and could hamper First Amendment-protected protests and other legal activities. Amid such concerns, it makes sense to keep the technology away from law enforcement until adequate policies have been implemented.
While San Francisco officials ponder a ban, we should consider whether there are policies that could allow police to use facial recognition without putting our civil liberties at risk, or whether the potential for abuse is so great that it warrants a ban.
“Facial recognition” is a term that applies to a wide range of systems used to confirm identity via automated image analysis. While these systems have been much-discussed recently, facial recognition has been around for decades. Much of the recent focus on facial recognition is a function of its improved accuracy and proliferation.
Facial Recognition: a Booming Industry
All over the world private businesses, law enforcement agencies, and national governments are using facial recognition systems. At its best, facial recognition can help improve security at banks and schools, help the blind, and make payments easier. But at its worst, it’s an ideal tool for ubiquitous and persistent surveillance.
In China, authorities use facial recognition to conduct surveillance and shame jaywalkers. This technology is a crucial part of one of the most extensive, intrusive, and oppressive surveillance apparatuses in history, which the Chinese state uses to target the Uyghur Muslim minority in the western Xinjiang province. While there are many differences between the U.S. and China, we should keep in mind that when it comes to the degree of surveillance, the differences between China and the U.S. are legal and regulatory rather than technological.
American citizens and residents may enjoy more civil liberties protections than people living in China, but we should nonetheless be concerned about domestic law enforcement use of surveillance technology. After all, law enforcement agencies are already using facial recognition technology, and manufacturers have expressed interest in improving the technology in ways that could put civil liberties at risk.
According to Grand View Research, we should expect law enforcement to spend more on facial recognition: the government "facial biometrics" market was valued at $136.9 million in 2018 and is expected to reach $375 million by 2025.
The scale of law enforcement’s current use of facial recognition is larger than many realize. According to Georgetown’s Center on Privacy and Technology, half of American adults are already in a law enforcement facial recognition network, and at least 26 states allow law enforcement to conduct facial recognition searches against driver’s license and other ID photo databases.
The growing and widespread use of facial recognition is of particular concern given improvements in recognition technology and the private sector’s interest in making surveillance technology more invasive.
In 2017, the law enforcement equipment manufacturer Axon released its technology report. The report includes the following quote from Captain Daniel Zehnder, former manager of Las Vegas Police Department’s body camera program:
[T]he fact that I could potentially walk down the street with a camera in real time, scanning faces, doing facial recognition while it’s recording, sending that data to the cloud for real-time analysis, have that data come back and somebody tell me, “That guy in the red hat, red shoes you just passed, he’s wanted for burglary.” That type of real-time, big data analysis application would be huge.
In 2016, the Department of Homeland Security (DHS) issued a solicitation asking private companies to build small, portable drones with facial recognition capability for Border Patrol. DHS is also keen on using facial recognition at airports.
Civil Liberties and False Positives
Facial recognition systems vary in accuracy. Last year, news that Amazon’s facial recognition tool had misidentified 28 members of Congress made headlines. Eleven of the misidentified members are African-American, prompting further commentary on longstanding concerns about racial bias in facial recognition. Amazon responded that the test, performed by the American Civil Liberties Union, used a confidence threshold of 80 percent rather than the Amazon-recommended 95 percent.
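To make the threshold dispute concrete, here is a minimal sketch (in Python, with hypothetical names and match scores) of how a confidence threshold filters a facial recognition system's candidate matches. The names and scores are invented for illustration; real systems return a confidence score for each candidate match, and the operator chooses the cutoff.

```python
# Minimal sketch: how a confidence threshold changes facial recognition
# results. All candidate names and scores below are hypothetical.

def matches_above_threshold(candidates, threshold):
    """Return only candidate matches at or above the confidence threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

# Hypothetical scores from comparing one probe photo against a database.
candidates = [("Person A", 0.97), ("Person B", 0.88), ("Person C", 0.81)]

# At an 80 percent threshold, weaker matches (potential false positives) pass.
print(matches_above_threshold(candidates, 0.80))

# At the stricter 95 percent threshold, only the strongest match remains.
print(matches_above_threshold(candidates, 0.95))
```

Lowering the threshold surfaces more candidates at the cost of more false positives, which is why the choice of cutoff, not just the underlying algorithm, shaped the ACLU test's results.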
False positives are a worry, especially considering that police across the country could one day (without restrictions like those in Baltimore) be outfitted with body cameras capable of real-time facial recognition.
Law-abiding citizens and residents being needlessly hassled wouldn’t just waste time; it would also harm police-community relationships. But even in a world where real-time facial recognition body camera technology is 100 percent accurate, there would be serious concerns. Would protesters be likely to gather if they knew that police with facial recognition body cameras would be observing the protest? What about those attending religious gatherings, gun shows, strip clubs, or abortion clinics?
Concerns about surveillance, racial bias, and speech are clearly on the minds of San Franciscan officials. These concerns look set to result in a ban on law enforcement using facial recognition in San Francisco. Such a ban may well be justified given the state of facial recognition technology and the potential for abuse. However, we should ask whether there are any policies that would allow police to use facial recognition without putting civil liberties at risk.
My own in-progress list of necessary conditions for law enforcement facial recognition deployment is as follows:
- A prohibition on real-time capability: Facial recognition technology should be used as an investigative tool rather than a tool for real-time identification.
- Database restrictions: Law enforcement facial recognition databases should only include data related to those with outstanding warrants for violent crimes. Law enforcement should only be able to add data related to someone to the database if they have probable cause that person has committed a violent crime. Relatives or guardians of missing persons (kidnapped children, those with dementia, potential victims of accidents or terrorist attacks) should be able to contribute relevant data to these databases and request their prompt removal.
- Open source/data requirement: The source code for the facial recognition system as well as the datasets used to build the system should be available to anyone.
- Public hearing requirement: Law enforcement should not be permitted to use facial recognition technology without first having informed the local community and allowed ample time for public comment.
- Threshold requirement: Deployment of facial recognition should be delayed until law enforcement can demonstrate at least a 95 percent identity confidence threshold across a wide range of demographic groups (gender, race, age, etc.).*
The Optimal Policy?
Such requirements would make law enforcement facial recognition very rare or perhaps even nonexistent, bullets I’m willing to bite. But these requirements aren’t a ban. Given the current state of affairs, it seems appropriate for San Francisco to implement a facial recognition ban. Such a ban would reassure many San Francisco residents, but we should still consider whether bans are the optimal facial recognition policy.
*I’m working on a paper on law enforcement facial recognition, which will outline these facial recognition policies in more detail. The final version will almost certainly include edits to the policies outlined above. At the very least they reflect my views at the moment.
Matthew Feeney is a policy analyst at the Cato Institute.
This article was originally published on FEE.org.