The Bay Area’s Spy Camera Ban Is Only the Beginning

San Francisco just became the first city to ban use of facial recognition technology by government entities. Oakland may be next.

Update: On Tuesday, May 14, San Francisco’s Board of Supervisors voted to pass the facial recognition ban.

OAKLAND, Calif.—On the first Thursday of every month, about a dozen people meet in a dim, sepia-toned room in Oakland’s City Hall. This is the city’s Privacy Advisory Commission—a group of volunteers from each Oakland district, representatives from the Oakland Police Department and the Mayor’s office, and the city’s chief privacy officer, Joe DeVries. They gather to discuss a host of issues related to the city’s growing use of surveillance technology: how data is used, stored, and shared. Just who, in this tech-saturated city, is tracking whom?

A predecessor to the Privacy Advisory Commission first convened five years ago, after news of a planned surveillance hub surfaced. The federally funded “Domain Awareness Center” was initially intended just for the Oakland port, but city officials proposed expanding it in 2013. The stated purpose of the $11 million expanded project was to fight crime and better respond to potential emergencies. (Oakland had been struggling with an increase in violent crime since 2005.) The proposal sought to blanket the city with cameras, gunshot detectors, and automated license plate readers so that the actions, movements, and connections of suspects could be tracked—and unwanted incidents, preempted. “It’s all about efficiency and automation into the response when it comes to public safety and emergency response,” the city’s then-chief information officer, Ahsan Baig, told GovTech.

But privacy activists raised the alarm about mass surveillance, particularly highlighting the risk that the technology would discriminate against communities of color in poor neighborhoods. Their concerns were later vindicated when emails revealed that the real purpose of the initiative was spying on protesters; the project’s scope was subsequently rolled back.

In the wake of the DAC debate, Oakland’s City Council assembled an ad hoc committee to set strict data-retention policies and best practices for the center to follow. That committee suggested that what the city really needed was something more permanent: The technology was bound to evolve, and its uses to multiply.


Oakland’s commission isn’t the only formal municipal privacy board in the country—Seattle has one, too—but it might be the most aggressive. Under the leadership of local privacy advocate Brian Hofer, it has helped pass legislation regulating surveillance in the area, including a transparency ordinance governing the conditions under which the Oakland Police Department cooperates with the FBI, and guidelines for “technology impact reports” on each new piece of smart-city surveillance gear the city rolls out.

May’s meeting had a full agenda, and the mood was particularly charged. The commissioners discussed the final draft of citywide Privacy Principles, developed by Berkeley Law’s Samuelson Law, Technology & Public Policy Clinic. If approved by the City Council next month, the document is meant to act as a lodestar for the city as it tries to maximize both transparency and privacy—interests that Erik Stallman, the associate director of the Samuelson Clinic, said at the meeting he believes are not “irreconcilable.”

But the commission was also there to discuss another potential privacy milestone: Thanks to its work, both Oakland and San Francisco are steps away from becoming the first U.S. cities to ban facial recognition, an increasingly common and powerful technology that allows computers to identify faces in photos and video footage and match them against law enforcement databases.

Under a Surveillance and Community Safety Ordinance passed last year, new kinds of surveillance technology must now go through a public approval process before being deployed in Oakland. But an amendment proposed by Hofer and commission member Heather Patterson would also expressly prohibit city departments—and any public entities that have data-sharing agreements with the city—from using facial recognition technology, or the information captured through it. (The amendment doesn’t ban private use of the technology, however, so private landlords could still deploy it in a residential or commercial building.) If the city’s Public Safety Committee approves the amendment this month, the City Council will vote on it in early June.

San Francisco’s ban mirrors Oakland’s: It’s bundled into the city’s proposed Stop Secret Surveillance Ordinance, introduced by Supervisor Aaron Peskin in January, and it’s up for a deciding vote before the city’s Board of Supervisors this week.

Right now, neither city’s police department is using facial recognition technology. San Francisco stopped testing it in 2017. But preemptively prohibiting the technology fits into a broader effort to establish San Francisco and Oakland as “digital sanctuaries”—places where government data collection and sharing is underpinned by transparency, accountability, and equity. Facial recognition is particularly perilous, privacy advocates say, because it can do something unprecedented: turn a person’s mere visual presence into potentially incriminating information, landing them, without their knowledge, in a database used by law enforcement.

“Facial recognition is the perfect tool and system for oppression of rights and liberties that Oaklanders cherish,” said Matt Cagle, a technology and civil liberties attorney at the ACLU of Northern California, during public comment in support of the amendment at the May meeting. “The harms are disproportionately experienced by people of color, immigrants, and other communities that are historically targeted by the government for oppression.”

Because public places where crowds gather under cameras—to celebrate or protest or simply exist—are such fertile territory for harvesting this information, the fight over facial recognition is a uniquely urban one. And the Bay Area, says Hofer, has a unique responsibility to lead the effort to regulate it.

“We’re where all this technology is being developed,” he said. “I think it’s important that—in the home of technology—we establish these rules and guidelines.”

***

In 2001, the city of Tampa secretly trained cameras equipped with facial scanning software on a stadium full of Super Bowl spectators. The system compared the faces of fans against a mugshot database, revealing that 19 people in the crowd of 100,000 had outstanding warrants for misdemeanors.

Automated, computer-assisted technology for identifying and verifying criminal suspects had been in development since the 1960s. But the 2001 incident was the first time many people realized that a computer’s ability to put names to human faces was no longer in the realm of science fiction. And plenty were not happy about it: When reports of the incident became public, the ACLU penned a letter to Tampa city leaders, demanding a full explanation for the “Snooper Bowl.”

This would prove to be the beginning of an ongoing series of conflicts between civil liberties advocates and law enforcement. As facial recognition technology improved, authorities touted its many public safety applications: the ability to detect passport and other types of identity fraud, identify active shooters and uncooperative suspects, aid forensic investigations, and help find missing children.

In June 2018, for example, Anne Arundel County police used facial recognition to quickly identify the man who killed five journalists at the Capital Gazette newspaper in Annapolis, Maryland. With the shooter in custody but uncooperative, police compared his photo with millions of driver’s license photos and mug shots in the Maryland Image Repository System. “The Facial Recognition System performed as designed,” Stephen T. Moyer, Maryland’s public safety secretary, said in a statement to the Baltimore Sun. “It has been and continues to be a valuable tool for fighting crime in our state.”

Facial recognition also has applications beyond the realm of crimefighting, proponents say. It can help the blind identify the expressions of people they interact with, and help historians identify soldiers who died in the Civil War.

But privacy advocates warn that this is a tool that can be used to target vulnerable populations. Authorities in China, for example, use it to profile and control the Uighur Muslim minority. In the U.S., researchers found that the government has already been testing facial recognition technology using photos from child pornography, visa applications, and pre-conviction mug shots without consent or notification. Police in Baltimore faced criticism from civil liberties groups after using facial recognition to identify and monitor protesters following the death of Freddie Gray in 2015.

In 2016, the Center on Privacy & Technology (CPT) at Georgetown Law published a report outlining the extent to which facial recognition had proliferated nationwide. The report found that almost 30 states allow law enforcement to scan driver’s license and ID databases. “Roughly one in two American adults has their photos searched this way,” the authors wrote. Overall, around 117 million adults are affected by this technology—they’re part of “a virtual, perpetual line-up,” the report says, where the identifying eyewitness is an algorithm.

Using biometric data—measurements or maps of the unique features of the human body—to find people isn’t new, of course. The first national fingerprint database in the U.S. was assembled by FBI director J. Edgar Hoover in the 1920s; in the 1990s, the Bureau’s Combined DNA Index System, or CODIS, allowed law enforcement to use DNA profiles to help identify and locate offenders. But the scope and reach of facial recognition go further than these databases, which track only a relatively small segment of the population. You can’t secretly fingerprint or collect DNA from vast groups of people from afar, as you can with facial recognition. As surveillance cameras capture people’s faces, law enforcement can construct an intimate, real-time portrait of their movements around the city.

“This technology lets law enforcement do something they literally have never been able to do before,” said Alvaro Bedoya, the founding director of Georgetown Law CPT and co-author of the 2016 report. “People need to understand that this is not normal.”

It’s not just that this technology has a greater reach. Research has shown that facial recognition software is prone to error when identifying women and ethnic minorities. In 2018, researchers at MIT tested three types of AI-based facial recognition technology and found an error rate of 0.8 percent when identifying white men, but of up to 34.7 percent when identifying black women.

“You’ve already seen issues with law enforcement and their use of force,” said Sameena Usman, the government relations coordinator for the Council on American-Islamic Relations, during public comment at the privacy commission’s May meeting. “To have this kind of facial recognition software that is inaccurate could lead to these types of wrongful deaths.”

Still, police departments nationwide are eagerly adopting the technology, which is being marketed by companies such as Amazon. That company’s Rekognition software is being piloted by law enforcement in Florida and Oregon, despite questions about its accuracy. (The ACLU used a version of Rekognition to scan photos of the 535 members of Congress against 25,000 public mugshots, and got 28 false hits.) Amazon has also pitched Rekognition to ICE.

Local leaders, especially in cities struggling with crime problems, are often taken with the idea that facial recognition-powered surveillance will be a boon for public safety. “In Oakland, we do have a good amount of crime,” Noel Gallo, a City Council member, told the San Francisco Chronicle. “It’s just a form of public safety since I don’t have the police staffing necessary to protect our children and families. If you do the crime, you should also be clearly identified.”

Though the commission approved the facial recognition amendment unanimously, the tension between safety and privacy sparked debate at the May meeting. Oakland Police Department representative Bruce Stoffmacher had been tasked with analyzing the department’s use of remote, live-streaming cameras and suggesting the circumstances in which they could be deployed under Oakland’s broader surveillance ordinance. One draft of the exemptions suggested officers should only be able to use the devices for undercover operations, or at public events expected to draw 10,000 or more people and national attention.

But Stoffmacher also laid out situations in which the police would want to use these devices in smaller gatherings, such as Golden State Warriors game parades. “It’s still considered a large enough event where OPD is going to try to have officers observing. You could be live-streaming that either to emergency operations center or to the police headquarters building,” he said. “I don’t think it’s supportive of the nature of policing to be able to say this is an exact number.”

Others argued that any cameras deployed by police at large and small events would have a chilling effect on protected free speech and assembly. “There’s nothing benign about a police officer holding a camera at a public gathering,” said Henry Gage III, an Oakland-based attorney and a member of the Coalition for Police Accountability, during public comment at the meeting. “That’s essentially political violence by another name.”

Part of the tension could be eased with clearly articulated use cases, Hofer said: If the police have intelligence that a protest or gathering will include a group with a history of violence—such as the white supremacist Proud Boys—perhaps the technology would be useful.

“The police have a duty to ensure public safety, and they need to be able to operate in some manner in order to achieve that,” he said. “And so some of this is going to need a really fine line.”

***

In a sense, this technology is already out of the box, and a facial recognition ban such as Oakland’s or San Francisco’s will not be able to stuff it back in. Images of people in public can still be shot, collected, and stored; potentially, they can just be sent elsewhere to be analyzed.

“Facial recognition does not require specific cameras to enable it—it’s helped by them, but does not require them,” said Mike Katz-Lacabe, a member of the local accountability nonprofit Oakland Privacy. “As long as you have the cameras, you can take that video and send it somewhere else to analyze it.”

This scenario frequently plays out in other realms of smart surveillance.

Last year, Oakland Privacy found that Bay Area Rapid Transit was documenting license plate information from California drivers and storing the records in a database that was also accessible to Immigration and Customs Enforcement. That gave ICE access to 57,000 plate records, allowing the agency to track the movements of potentially undocumented immigrants and anyone who came in contact with them.

The worry is not just that ICE has been deporting immigrants without criminal records at record levels. It is that the Department of Homeland Security (DHS) is also keeping tabs on family members, advocates, and supporters. The Intercept recently found that DHS has been monitoring civilians who are protesting the administration’s practice of separating migrant families.

In that context, BART’s transportation data collection could have had more insidious implications—just as live-streaming a protest might. “It may be that a local police department doesn’t really have anything malignant in mind, but if they collect all of the pictures and make it available, other agencies can get ahold of that information and use it for any purpose they want,” said Tracy Rosenberg, an organizer with Oakland Privacy and the director of the Oakland technology organization Media Alliance. “And those purposes aren’t always consistent with what they thought.”

These convoluted surveillance supply chains complicate enforcement of facial recognition bans like Oakland’s, says Steve Trush, a research fellow at UC Berkeley’s Center for Long-Term Cybersecurity and a board member of Secure Justice, the nonprofit Hofer chairs. Even if Oakland’s and San Francisco’s city departments adhere to their bans, they’re surrounded by counties and states that don’t. “I can envision some of that friction there could be resolved if enough cities in the county agree to ban facial recognition,” Trush said. “Then maybe you could see maybe it’s time for the county to consider banning it, and maybe the state agrees to an outright ban.”

What’s more, facial recognition technology isn’t just in the hands of official bodies like governments and law enforcement: It’s on your neighbor’s front door. “Smart” video doorbell devices such as those made by Ring and Nest have created a whole army of citizen-recorders, who can potentially ping law enforcement with surveillance of passersby they find suspicious. As these devices become recognition-enabled, it will be difficult for Oakland, or any city, to ensure that the technology wasn’t used somewhere in the chain of data collection behind the intelligence it receives.

With more regulations and pressure from consumers, Trush says, perhaps technology companies will choose to develop their products without facial recognition technology, or create alternatives. Already, campaigns launched from within tech companies like Microsoft and Amazon have raised ethical concerns surrounding projects and contracts; in response, those companies have walked back certain initiatives.

Ultimately, achieving a critical mass of local and state regulations on this technology is Hofer’s dream, too. “I do expect this to spread,” he said. “We cracked the door open and are letting others join in.”

But he and other advocates also acknowledge the challenges involved in containing the spread of smart technology that threatens privacy. That’s why the commission is also pushing to pass Oakland’s guiding Privacy Principles, which can inform future policy-making.

“What we wanted was, rather than technology-specific legislation, a process that could handle whatever came down the pike,” said Rosenberg. “Because whatever was upsetting in 2015 is going to be different than what’s upsetting in 2019 or 2023. You don’t want to create a nine-headed hydra situation where you put in regulation that’s technology-specific, and then technology shifts and it’s like nothing ever happened.”