Putting brakes on facial recognition & surveillance

How should we think about this super-convenient yet dystopian tech?


America is experiencing cognitive dissonance over facial recognition. The convenience of using our face for the basics (check-ins, border crossings, unlocking our phones) is fantastic. But, in the land of the free, suddenly everyone has realized that it might mean we’re actually living in just another surveillance society. Maybe we’re more sensitive now, having failed for so long to notice that surveillance capitalism was driving our every digital move. Maybe it’s the Hong Kong protests, where facial recognition has played a key role in the police response and the lengths people go to in order to avoid identification are plain to see. Maybe it’s that, when it comes down to it, privacy is not dead after all.

Facial recognition technology is deeply concerning and we should be putting the brakes on for a bit. While digital surveillance has grown exponentially, facial recognition hasn’t yet spread quite as far, but it’s close. This doesn’t necessarily mean that an outright ban is the right answer, although there’s certainly a case to be made for one. The brakes could be applied by companies themselves, by local governments, or by individual choice.

As with any AI, the validity of its use is highly context-specific. Facial recognition has spread with very few people realizing that it is a unique technology. Your face is you: readable, identifiable and everywhere, in a way that your fingerprint, your gait or other forms of identification just aren’t. You have even contributed to making your own biometric identity easy for anyone, anywhere, to use.

Last week, the NYT broke a story about Clearview AI, a startup that has amassed billions of photos scraped from social media and uses them in an application marketed to law enforcement. The scraping violates the social networks’ terms of service. Twitter responded with a cease-and-desist letter, but the damage is done: the images have already been scraped. Perhaps the silver lining is that everyone who has ever put up a selfie on Facebook now understands how porous social media really is. Realizing that a social media selfie could end up in a police mugshot is what’s known as “context collapse,” and it leaves one with a very specific sensation of digital dirtiness.

Amazon’s Ring doorbell doesn’t have facial recognition. Yet. Through Ring, Amazon has created a public/private surveillance net across the US with little or no democratic input. Ring also shows us something crazy about ourselves: that we can assimilate the violent and terrifying with the cute, funny and frivolous. Ring has become a content platform in itself, a TV channel and a new breed of America’s Funniest Home Videos. Who doesn’t want to watch a cat stalk a raccoon or a bear steal chocolate?

How are we meant to think about Ring? It helps prevent delivery theft. It has helped solve crimes. People feel more secure. But it also breaks public/private privacy in a very particular way. In our homes, we have a right to privacy. In public, we have to expect that we don’t. But this hasn’t mattered much, because privacy in public has rested on obscurity: we don’t expect to be watched. AI breaks down obscurity because now our every move can be watched, by a machine. Ring goes one step further because permanent digital surveillance can now be literally everywhere, including next door to you. And it is just so, well, ordinary.

But here’s another interesting thing that makes Ring like no other surveillance system. On the inside of the door, you are the user of “luxury surveillance,” while on the outside of the door, you are the victim of “imposed surveillance.” In an essay for the architectural publication Urban Omnibus, Chris Gilliard makes this distinction stark by comparing the experiences of the wearer of an Apple Watch to those of someone forced to wear a court-ordered ankle monitor.

In the digital age, however, the stratification between surveiller and surveilled — and between those who have and don’t have agency over how they are surveilled — is felt beyond the scale of wearable devices. - Chris Gilliard

This is how we need to think of Ring: whenever we leave our houses, we move from surveiller to surveilled. We have to act as if someone is watching and as if we have no control over what they see or how they decide to interpret it. Was that a hand wave or a threatening gesture? Was that an ironic, playful slap or an indicator of domestic violence? What are those teenagers up to? Should the police be called?

And all those scary and close-call videos intertwined with comedy and cuteness? The juxtaposition only serves to reinforce the marketing message: that we can only be safe if we are surveilled. The Ring ecosystem is also biased. It skews paranoid, and paranoia demands ever-better identification of potential threats, which means it’s only a matter of time until Ring adds facial recognition.

Concerns over facial recognition and surveillance have grown together. They feed off each other. Our expectations of privacy, and our mental model of the constraints that designers put on its use, matter. We aren’t worried about facial recognition on our phones. We love how facial recognition means we get more photos of our kids from summer camp, but only as long as they’re photos that counselors take during group activities. We think that perhaps it’s a good idea to have facial recognition in our kids’ schools, but only for identifying people who have been banned from the building. We appreciate how efficient and effective it can be in policing, but we assume we will never be caught as an unfortunate false positive (the back-of-the-envelope sketch below shows how optimistic that is). When facial recognition makes something frictionless, we favor convenience over privacy, just as we do with many other things.
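To see why the false-positive assumption is so optimistic, consider a rough base-rate calculation. This is a hypothetical sketch; every number in it is an illustrative assumption, not a measurement of any real system:

```python
# Back-of-the-envelope: why "I'll never be a false positive" is wishful thinking.
# All numbers below are illustrative assumptions, not figures from any vendor.

population_scanned = 1_000_000   # faces checked against a watchlist in a month
watchlist_size = 1_000           # people actually on the watchlist
false_positive_rate = 0.001      # 0.1% of innocent faces wrongly matched
true_positive_rate = 0.99        # 99% of watchlisted faces correctly matched

innocent = population_scanned - watchlist_size
false_matches = innocent * false_positive_rate       # innocent people flagged
true_matches = watchlist_size * true_positive_rate   # genuine hits

# Of all alerts generated, what fraction point at an innocent person?
share_innocent = false_matches / (false_matches + true_matches)

print(f"False matches: {false_matches:,.0f}")   # ~999
print(f"True matches:  {true_matches:,.0f}")    # ~990
print(f"Share of alerts that hit innocent people: {share_innocent:.0%}")  # ~50%
```

Even with a seemingly impressive 0.1% error rate, roughly half of all alerts point at innocent people, simply because almost everyone being scanned is innocent. The unfortunate false positive isn’t an edge case; it’s baked into the arithmetic of scanning at scale.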

I’ve written before about how facial recognition is not designed for trust. This technology is a fulcrum: the power imbalance between the surveiller and the person surveilled is extreme. Who gets to choose when and how to see? What inferences are being made, and who decides they are meaningful? What is the consequence of failure? Who bears that consequence? Who decides what to keep forever?

In a customer-facing application, one approach might be to start with the goal of making every user a “luxury surveillance” user rather than a subject of “imposed surveillance.” An Apple Watch gives the user not just control but a sense of intimacy. The device feels like an extension of your own body, a tiny external mind. It’s a luxury to check in on yourself. A customer-facing app that uses facial recognition could be designed with similar intent, going beyond consent and control to create a connection with self that reinforces the customer, not the company. Unfortunately, any system built on surveillance capitalism doesn’t put the individual’s values first.

It’s hard to see how we can get the horse back in the barn without a revolution in how we see privacy. Current protections amount to the proverbial knife at a gunfight. The more we think about the deeper power dynamic, and about who wins if we fail to act, the more we will wonder who we are really protecting by not acting.


Also this week:

  • The California Sunday Magazine has published a series of articles on facial recognition, including a handy set of decision trees for figuring out how to avoid it. For easy reference, we put them on the Sonder Scheme blog.
  • An op-ed in the NYT from Shoshana Zuboff, author of The Age of Surveillance Capitalism, focused specifically on privacy and its relationship with human autonomy. A must-read for sure.
  • The Closing Gaps Ideation Game, created by the Partnership on AI, is an interactive game designed to facilitate a global conversation around the complex process of translating ethical technology principles into organizational practice.
  • A new look for Google’s desktop search product “blurs the line between organic search results and the ads that sit above them.” Some early evidence suggests the changes have led more people to click on ads, says The Verge.
  • The WEF has put together a toolkit to help boards ask the right questions about AI.
  • A fun piece on the WALL-E effect from The Next Web.
