Facebook’s deepfake ban is a techno-solution distraction


This week Facebook announced a new policy banning deepfake videos. Facebook’s vice president of global policy management, Monika Bickert, said videos that have been edited “in ways that aren’t apparent to an average person and would likely mislead someone” and were created by artificial intelligence or machine learning algorithms would be removed under the new policy.

OK, this is good, but it’s nowhere near good enough, a sentiment echoed by many. It’s another example of Facebook’s business strategy being driven by its AI strategy, rather than any genuine change in the company’s policy for reducing the level of information pollution on the platform.

Deepfakes are definitely a concern but, as this paper from Data & Society points out, they are only part of the story. “Cheap fakes” use conventional techniques such as speeding, slowing, cutting, re-staging or re-contextualizing footage, and are far more accessible to the average person (a sketch of just how little one requires follows the lists below).

Deepfakes and cheap fakes exist on a spectrum, from techniques that are highly complex and demand real technical expertise to those that require almost none. From most to least technical, according to Data & Society:

Deepfakes:

  • Virtual performances: Recurrent Neural Networks, Hidden Markov Models, Long Short-Term Memory Models, Generative Adversarial Networks
  • Voice synthesis: Video Dialogue Replacement Models
  • Face swapping and lip syncing: FakeApp, After Effects

Cheap fakes:

  • Face swapping using rotoscoping: Adobe After Effects, Adobe Premiere Pro
  • Speeding and slowing: Sony Vegas Pro
  • Face altering and swapping, speed adjustment: free, in-app tools such as Snapchat
  • Lookalikes, relabeling and recontextualizing: relabeling of video and in-camera effects
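
To make the accessibility point concrete, here is a minimal sketch of the kind of speed manipulation at the bottom of that spectrum, using the open-source moviepy library (1.x API). The file name and the 75% speed factor are illustrative assumptions, not details from the Data & Society paper:

```python
# A minimal "cheap fake": slowing footage, the same basic manipulation
# behind well-known doctored political clips. Assumes moviepy 1.x is
# installed (pip install moviepy) and that a local file named
# speech.mp4 exists -- both are illustrative assumptions.
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx

clip = VideoFileClip("speech.mp4")

# Play video and audio at 75% speed; the lower pitch and slurred
# cadence are the entire "effect" -- no machine learning involved.
slowed = clip.fx(vfx.speedx, 0.75)

slowed.write_videofile("speech_slowed.mp4")
```

A few lines, no models, no training data - which is exactly why a policy keyed to AI-generated content misses most of what actually circulates.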

It should be obvious how narrow and techno-centric this new policy will be in practice - it only captures the top line of this list. Malicious AI-generated content is dangerous, but it is not inherently more dangerous than less sophisticated doctored media: cheap fakes can cause just as much havoc. One could argue that cheap fakes can be even more engaging - grabbing attention because they sit so clearly in the uncanny valley, or because they are distinct, unusual, curious or amusing; they can play on confirmation bias or incite a sense of urgency to act. And it’s engagement that matters to Facebook - amplifying engaging content is what the AI does.
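
To see why that coupling matters, here is a toy sketch of engagement-driven ranking. Everything in it - the posts, the predicted scores, the weights - is invented for illustration; it is emphatically not Facebook’s actual system, just the logic of optimizing for engagement alone:

```python
# Toy illustration: engagement-driven ranking is content-agnostic.
# The posts, scores and weights below are invented for this example;
# real feed-ranking systems are vastly more complex.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # model's engagement estimates (0-1)
    predicted_shares: float
    is_accurate: bool         # known to us, invisible to the ranker

def engagement_score(post: Post) -> float:
    # The objective rewards predicted engagement only; nothing in it
    # refers to accuracy, so an attention-grabbing doctored clip
    # outranks a truthful post by construction.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_shares

feed = [
    Post("Slowed 'drunk' speech clip", 0.9, 0.8, is_accurate=False),
    Post("Full, unedited speech", 0.3, 0.1, is_accurate=True),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```

Nothing in the objective refers to accuracy, so the cheap fake ranks first by construction - filtering out one class of inputs (AI-generated fakes) leaves that objective untouched.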

Bickert also testified this week before the Subcommittee on Consumer Protection and Commerce on manipulation and deception in the digital age. When questioned, she acknowledged that Facebook is often slow to act: slow to find malicious content, slow to get information to fact checkers, slow to remove it. The real problem is that people have no chance to react as content on an open platform is amplified at immense speed and scale. As Dr Joan Donovan pointed out in her testimony: “the platform’s openness is now a vulnerability.”

Banning deepfakes is good - and certainly an interesting technical challenge - but we shouldn’t kid ourselves that it signals any real change in how Facebook approaches information safety. What’s really needed is for Facebook to decouple the amplification of content from the content itself. Ultimately, that is the only way to reduce the risks that come with the market in deceptive information and the attention economy.


Elsewhere this week:

  • Interesting research from Microsoft on how AI ethics checklists are best used and most commonly misused, covered in a Sonder Scheme article. An AI ethics checklist can act as a “value lever,” making it acceptable to reflect on risks, raise red flags, add extra work and escalate decisions. It should not be used in a simple yes/no fashion that turns nuanced ethics into mere compliance.
  • Weird stuff at CES. Samsung unveiled its artificial humans, calling its Neons “a new form of life.” Oh, the hubris. This video of a CNET journalist interacting with one is worth a view. I couldn’t decide whether the marketing people actually believe the schtick or just felt they had to stick with it. Neons are nothing compared to Soul Machines’ digital humans; the company wrote this article in response to the fuss, calling Neons “digital puppets.”
  • Speaking of Soul Machines… the NZ company just raised $40m.
  • Super interesting read on Medium about YouTube’s ecosystem of far-right media. It’s not all about the recommendation algorithm - it’s a complex mix of social dynamics, celebrity and multiple algorithms.
  • Fascinating research from Google Health on how expert medical practitioners work with AI. It’s a bit of a “geek out” paper but it’s one of the most interesting pieces of research I’ve seen on how to think about human-machine collaboration in medicine. As AI assists medical decision-making, the interplay between model updates and changing clinical practice will become far more complex.
