With Rekognition, Amazon could squander trust

Facial recognition is facing a backlash and Amazon is an easy target.

Facial recognition technology has valuable uses, but it’s facing a backlash. Surveillance and privacy are headline issues, and public awareness of facial recognition - its presence and its downsides - has risen sharply.

Amazon’s facial recognition product, Rekognition, is part of the reason - it’s making headlines as it gets adopted by commercial and government organizations. The technology is cheap and easy to use. In one law enforcement application, Washington County spent about $700 to upload its first big haul of photos and now pays about $7 a month for all of its searches. Literally anyone can use it - even your kid’s summer camp.
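
To get a sense of how low the barrier is, here’s a minimal sketch of what a face-matching pipeline looks like with the AWS SDK for Python (boto3). The collection, bucket and file names are hypothetical - this illustrates the public API, not Washington County’s actual setup.

```python
# Minimal sketch of a Rekognition face-matching pipeline (boto3).
# All names below (collection, bucket, files) are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

# One-time setup: a collection to hold indexed faces.
rekognition.create_collection(CollectionId="example-collection")

# "Upload" step: index a face from an image stored in S3.
rekognition.index_faces(
    CollectionId="example-collection",
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo-001.jpg"}},
    ExternalImageId="person-001",
)

# Search step: match a new image against everything indexed so far.
response = rekognition.search_faces_by_image(
    CollectionId="example-collection",
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "new-photo.jpg"}},
    FaceMatchThreshold=80,
)
for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```

That’s the whole loop - a handful of API calls plus pay-per-search pricing is what makes the $7-a-month figure plausible.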

But facial recognition is no ordinary technology, which means that Rekognition is no ordinary product. For vendors of facial recognition technology, it matters how customers use it. Amazon’s laser-focused, customer-first strategy may have a blind spot here - there’s an ethical skin around this product that Amazon isn’t getting right, and the flow-on effect is to sensitize people to facial recognition as a privacy-invasive technology.

Rekognition can be misused or overused (or perceived to be), and Amazon hasn’t done much to prevent this. The company’s product strategy is “all care, no responsibility,” which isn’t good enough. Rekognition needs more than a safety warning and an instruction manual. While Amazon provides guidelines for accuracy and deployment, it hasn’t set any constraints that might signal that the industry can self-regulate and assure responsible use. (Both Microsoft and Google have staked out far stronger positions on responsible use, with Google not selling the technology at all.) Amazon has now called for the government to regulate. As you’d expect, it’s writing the draft rules.

Amazon has a lot to lose. It has long stood out as the tech company that people trust the most, and it wants to use this position to disrupt markets with even more valuable and personally sensitive data - finance and health, for example. But could perceptions of Rekognition’s widespread use tip the balance and precipitate a full-on privacy revolt?

Facial recognition is a unique technology with unique risks. It enables mass surveillance, and this alters people’s behavior. Data are gathered in a passive, unintentional and unavoidable way. When we walk around in public we expect to be caught on security cameras, but we don’t expect this footage to be monitored in real time, for us to be personally identifiable, for our face to be available forever, or for our physical presence to be searchable at scale. Our experience of the world includes a significant amount of forgetting - we are excellent at identifying faces, but our brains have evolved so that we forget the vast majority of people we ever encounter. Forgetting creates a degree of obscurity that is a vital component of our privacy. Facial recognition completely disrupts this - it lurches us into a world where our face can be found wherever we happen to be. Or, more correctly, wherever the machine thinks we are.

It’s even worse when people know the surveillance is there - when you know you’re being watched but not by whom. This changes behavior significantly. If you are identified as a criminal when you are not, the burden of proving your innocence shifts to you. It also shifts the incentives for law enforcement - now, when you walk down the street with your face covered, an officer has a built-in incentive to treat you as a criminal avoiding surveillance. Never mind your right to cover your face because it’s cold or simply because you want to. A telling example comes from a trial of facial recognition surveillance by London police in May 2019. In this video from the BBC, a man who did not want his face captured pulled up his sweater to partially cover his face. He was stopped by police. This annoyed him. “I told them to f*** off, basically,” he said to the film crew. The police fined him for disorderly behavior. They said he had no right to cover his face.

Amazon may have unwittingly escalated awareness, and with it public concern, through its Ring doorbell law enforcement partnerships. The Washington Post has revealed these relationships, which Ring had tried to keep quiet.

To be clear, Ring doesn’t have facial recognition in its product, but it did apply for a patent for technology that can alert when a person designated as “suspicious” is caught on camera. If and when this feature goes live, Amazon will have built a privately run surveillance system outside any democratic process - a private infrastructure disguised as a public square. And the real problem is that Amazon is marketing it as just another consumer convenience product. If people conflate Ring’s capture of faces with facial recognition capabilities (which isn’t impossible), then Amazon could have a real image problem.

On top of all this, Amazon recently announced that Rekognition can now recognize emotions, including “fear.” This is a time bomb. Earlier this year, a major scientific study made it clear - there is no scientific justification for using facial expressions to infer emotional state. It is simply not ethical to market this capability in a product that many non-expert users can deploy effortlessly at scale.
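
For context on how frictionless this is, here’s a hedged sketch of what requesting those emotion labels looks like in boto3 - again, the bucket and file name are hypothetical:

```python
# Sketch: request face attributes from Rekognition, including emotions.
# Bucket and image names are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "portrait.jpg"}},
    Attributes=["ALL"],  # "ALL" returns the Emotions attribute, among others
)

for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        # Emotion types include FEAR alongside HAPPY, SAD, ANGRY and others.
        print(emotion["Type"], round(emotion["Confidence"], 1))
```

One parameter and the service returns a confidence score for “fear” on every face it finds - which is exactly why the lack of scientific grounding matters.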

Amazon remains the most trusted tech company. But we are in unique times - the intersection of a big tech backlash, polarized societies, rising inequality, trust-sparse public spaces and high uncertainty. As Tony Blair said on a recent podcast, “when people are feeling optimistic, they look for opportunity. When they are feeling pessimistic, they look to blame.” When it comes to privacy, people are looking for someone to blame.

Amazon’s facial recognition products could be an easy scapegoat, one that cements a role for regulators and puts Amazon’s trust leadership at risk. Amazon needs consumer trust right now - it is critical for entry into highly data-sensitive markets and provides a buffer against heavy antitrust regulation. The era of blind trust in tech is over.


Also this week (lots):

  • If you’re interested in the back-and-forth between AI research and neuroscience, Google released some fascinating (but early) work. By using information encoded in the timing of signals (something brains do that helps make them so energy-efficient and compact), the team was able to create a spiking neural network that mimicked human decision-making. Here.
  • Described by the author as “not very silver lining-y,” Antisocial by Andrew Marantz is an extraordinary work. He expertly tells the story of how the incredible AI built by social media platforms has been the perfect tool for online extremists. If you don’t have time to read the book, try this hour-long interview with Kara Swisher.
  • Great article by Alison Gopnik in the Wall Street Journal (subscription) about AI research that’s driven by understanding how babies learn. Gopnik’s work on early childhood learning, and how it’s being applied to AI, is a real bright spot and highlights how much there is to appreciate about humans.
  • NYT op-ed (metered paywall) on the call to ban facial recognition. Woodrow Hartzog is one of the most insightful and no-nonsense thinkers on this technology. I’d highly recommend his book, Privacy’s Blueprint, for a deeper dive into privacy and digital technology.
  • On-topic keynote from Lorien Pratt on how AI needs to move beyond model construction to encompass design, strategy and governance. She does a nice job of explaining the future direction of AI, and it’s very much aligned with how we think at Sonder Scheme.
