In April, as part of a series on technology and privacy, The New York Times ran a shocking experiment: with just a few days of effort and $60 spent on Amazon’s commercially available face-recognition service, they were able to identify several pedestrians as they walked through a Midtown park, using park cameras and publicly available photos of New York residents. In May, Congress held the first of several planned hearings on face recognition. There was so much bipartisan agreement about the need for regulation that surveillance law specialist Jake Laperruque quipped that the “only conflict seemed to be who could express the most outrage about the technology.” And this month, San Francisco will begin banning its municipal departments from using face recognition technology, becoming the first U.S. city to do so. San Francisco City Supervisor Aaron Peskin, highlighting the key reasons the Board of Supervisors voted for the ban, said the technology was “fundamentally invasive” and that “none of us want to live in a police state.”
As the outcry suggests, face recognition technology frightens many people, and one might suppose there is something wrong with it, either ethically or legally.
A closer look, though, reveals something more complex. The first thing to note is that the technology doesn’t enable anything fundamentally new. We have no objection to police officers going on stakeouts or working undercover to catch criminal suspects. As a society, we have also accepted the widespread use of surveillance cameras. (If we were ever uncomfortable with them, we never reacted as strongly as we have to face recognition.) And of course, we’re also OK with law enforcement staff scanning hours of surveillance camera footage to track suspects from location to location. All that face recognition technology enables is more efficient scanning and tracking. It offers a quantitative change, not a qualitative one.
To be sure, it’s a dramatic change in quantity. One officer would need at least a few hours to review a day’s worth of surveillance footage while focusing on only a handful of suspects. An automated system could conceivably match every face that appears across hundreds of cameras against a database of millions of driver’s license photos, all in real time. Such feats have helped identify a sexual assault suspect in Pennsylvania and a suspected criminal at a music concert attended by 60,000 people in China.
But assuming the right people were caught, these are positive uses of face recognition technology. Why, then, does it still make us uneasy? Activism against face recognition tends to come from the civil rights community. In a superb summary of the issues, Georgetown’s Center on Privacy & Technology highlights problems with indiscriminate surveillance (as opposed to the tracking of specific suspects in criminal investigations), gender and racial bias, possible misuse of the technology, and the potential to chill free speech. They warn of “a world where, once you set foot outside, the government can track your every move.”
Even here, though, it’s not clear that face recognition crosses a line that hasn’t long been crossed. The mobile phones we carry track our location, and law enforcement accesses those records regularly (though often only with a warrant). Bias isn’t unique to face recognition, either: scholars have observed digital technology amplifying inequality in education, public services, and foreign aid. And throughout history, protesters have worn masks to avoid identification, because speaking out in public is inherently risky, with or without face recognition technology.
So again, what makes face recognition technology more frightening than what already exists? I suspect there are at least three psychological reasons.
For most people, the idea that their whereabouts might be tracked by an unknown third party, even if it’s just a machine in a data center, is creepy. We don’t mind police officers walking their beats, but we’d certainly mind if one started following one of us around… even if we were certain we had done nothing wrong. In fact, most of us wouldn’t even want close friends or family members tracking us everywhere. So, it’s the following around that’s the problem, not simply having our location known. To be followed is to be stalked, and stalking is known to take an emotional toll.
Another factor is that those of us who live in urban or suburban contexts enjoy our anonymity. While some people romanticize small, close-knit communities, the prevailing global trend is a movement toward cities. Anonymity grants a certain kind of freedom: freedom from nosy neighbors, freedom from judgment. Face recognition revokes that freedom. It’s telling that the public debate so far is mostly about restricting government agencies’ use of face recognition, not private companies’. Possibly this is because the state has a role in holding us to account, while the private sector cares only about its business interests. The former judges and accuses; the latter just wants to sell us more stuff.
Finally, there might be something distinctive about face recognition being a visual technology. The sense of being watched can change our behavior, and being stared at is more discomforting than being eavesdropped on. Theorists critique the “male gaze,” but not obnoxious male listening. Possibly for related reasons, we may be more sensitive to visual invasions of privacy than to auditory ones. Our phone conversations are transparent to telephone operators, and to some extent to law enforcement, but as a society, our objections to those intrusions have been muted.
Personally, I believe that all surveillance technologies should be tightly regulated. Face recognition technology calls attention to a broader class of surveillance tools, so maybe its high creepiness factor is just the alarm bell we need to start a much-needed public dialogue.