Facial Recognition Has Already Reached Its Breaking Point
As facial recognition technologies have evolved from fledgling projects into powerful software platforms, researchers and civil liberties advocates have been issuing warnings about the potential for privacy erosion. Those mounting fears came to a head Wednesday in Congress.
Alarms over facial recognition had already gained urgency in recent years, as studies have shown that the systems still produce relatively high rates of false positives, and consistently contain racial and gender biases. Yet the technology has proliferated unchecked in the US, spreading among law enforcement agencies at every level of government, as well as among private employers and schools. At a hearing before the House Committee on Oversight and Reform, the lack of regulation garnered bipartisan concern.
"Fifty million cameras [used for surveillance in the US]. A violation of people's First Amendment, Fourth Amendment liberties, due process liberties. All kinds of mistakes. Those mistakes disproportionately affect African Americans," marveled Representative Jim Jordan, Republican of Ohio. "No elected officials gave the OK for the states or for the federal government, the FBI, to use this. There should probably be some kind of restrictions. It seems to me it's time for a time-out."
Lily Hay Newman covers information security, digital privacy, and hacking for WIRED.
The hearing's panel of experts—an assortment of legal scholars, privacy advocates, algorithmic bias researchers, and a career law enforcement officer—largely echoed that assessment. Most directly called for a moratorium on government use of facial recognition systems until Congress can pass legislation that adequately restricts and regulates the technology and establishes transparency standards. Such a radical suggestion might have seemed absurd on the floor of Congress even a year ago. But one such ban has already passed in San Francisco, and cities like Somerville, Massachusetts, as well as Oakland, California, seem poised to follow suit.
"The Fourth Amendment will not save us from the privacy threat posed by facial recognition," said Andrew Ferguson, a professor at the University of the District of Columbia David A. Clarke School of Law, in his testimony. "Only legislation can respond to the real-time threats of real-time technology. Legislation must future-proof privacy protections with an eye toward the growing scope, scale, and sophistication of these systems of surveillance."
A series of recent incidents and revelations have shown just how widely the technology has been adopted, and how problematic its shortcomings could become without oversight and increased transparency into who uses the technology and how those systems work. A report last week from Georgetown Law researchers, for example, showed that both Chicago and Detroit have purchased real-time facial recognition monitoring systems—though each city says that it has not used the platforms. An additional Georgetown report offered evidence of facial recognition misuse and manipulation by the New York Police Department. Officers reportedly fed sketches into facial recognition systems, or photos of celebrities they thought resembled a suspect—Woody Harrelson, in one example—and tried to identify people from those unrelated images.
Separately, in April a facial recognition system incorrectly flagged Brown University student Amara Majeed as a suspect in Sri Lanka's Easter church bombings. And on Wednesday, the Colorado Springs Independent reported that between February 2012 and September 2013, researchers at the University of Colorado at Colorado Springs took photos of students and other passersby without their consent, for a facial recognition training database as part of a government-funded project. Similarly, NBC News reported at the beginning of May that the photo storage and sharing app Ever quietly started using photos from millions of its users to train a facial recognition system without their active consent.
"We and others in the field have predicted for a long time that there would be misidentifications. We predicted there would be abuse. We predicted there would be state surveillance, not just after-the-fact forensic face identification," says Alvaro Bedoya, the founding director of Georgetown Law's Center for Privacy & Technology. "And all those things are coming true. Anyone who says this technology is nascent has not done their homework."
At Wednesday's House hearing, witnesses similarly emphasized that facial recognition technology isn't just a static database, but is increasingly used in sweeping, real-time, nonspecific dragnets—a use of the technology sometimes called "face surveillance." And given the major shortcomings of facial recognition, especially in accurately identifying people of color, women, and gender nonconforming people, the witnesses argued that the technology should not currently be eligible for use by law enforcement. Joy Buolamwini, a Massachusetts Institute of Technology researcher and founder of the Algorithmic Justice League, says she calls the data sets used to train most facial recognition systems "pale male" sets, because the majority of the photos used are of white men.
"Just this week a man sued Uber after having his driver's account deactivated due to [alleged] facial recognition failures," Buolamwini told the Committee on Oversight and Reform on Wednesday. "Tenants in Brooklyn are protesting the installation of an unnecessary face-recognition entry system. New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools. Our faces may well be the final frontier of privacy."
Representatives across the political spectrum said on Wednesday that the committee is ready to develop bipartisan legislation limiting and establishing oversight for facial recognition's use by law enforcement and other US entities. But tangible results at the federal level have been scarce for years. And advocacy in the private sphere has faced major hurdles as well. On Wednesday, for example, Amazon shareholders rejected two proposals that would have reined in use of the company's controversial Rekognition facial identification software and allowed for research into privacy and civil rights safeguards.
Still, with facial recognition's ubiquity becoming increasingly apparent, privacy advocates see 2019 as a potential turning point.
"I think it's too late to stop the proliferation of facial recognition tech. Both government and corporate actors are using it in new ways every day," says Tiffany Li, a privacy attorney at Yale Law School's Information Society Project. "Hopefully we reach a critical point where we start really working on those problems in earnest. Perhaps that moment is now."