Facebook is an AI ethics fail


[Image: abstract illustration of Lady Justice falling]

If you’re at all enmeshed in the tech press, the big issue for the last couple of weeks has been Facebook’s inability to get out of the huge hole it’s dug for itself around misuse of its platform, political ad targeting and lies. It’s fascinating to watch this through the lens of AI governance and ethics because AI ethics has something to say about it all.

Transparency is necessary but not sufficient

Facebook says that its tools for searching political ads are the solution: because anyone can search for any political ad, anywhere in the world, the trifecta of problems - preserving free speech, keeping the platform neutral and ensuring accuracy - is solved. But all this does is shift the responsibility for analysis, interpretation and fact-checking away from Facebook and onto everyone else. Here, transparency is a decoy, not even a partial solution. The ethical AI response would be for Facebook to interrogate the information itself - to discover and report bias, inaccuracy and misuse directly. Transparency is one important part of ethical AI, but it’s worthless if it’s not part of an ecosystem of explainability, responsibility, accountability and trust.
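
What would it actually mean for Facebook to interrogate the information rather than outsource the job to searchers? As a minimal sketch, here is the kind of first-pass audit an ethics team might run over ad-delivery data - computing a disparate-impact ratio and flagging skewed delivery for human review. The records, field names, numbers and threshold are all invented for illustration; this is not Facebook’s ad archive API.

```python
# Hypothetical first-pass bias audit over ad-delivery data. The
# records, field names and numbers are invented for illustration;
# a real audit would pull from the platform's ad archive and
# delivery logs.

# Impressions of one political ad, broken out by a protected attribute.
delivery = {
    "group_a": {"impressions": 90_000, "eligible_audience": 200_000},
    "group_b": {"impressions": 30_000, "eligible_audience": 200_000},
}

def delivery_rate(stats):
    """Share of the eligible audience that actually saw the ad."""
    return stats["impressions"] / stats["eligible_audience"]

rates = {group: delivery_rate(stats) for group, stats in delivery.items()}
worst, best = min(rates.values()), max(rates.values())

# The "80% rule" from US employment law is a common, crude screen:
# flag the ad if the most-excluded group sees it at less than 80%
# of the rate of the most-included group.
ratio = worst / best
print(f"delivery rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for human review: delivery skew exceeds threshold")
```

The point of the sketch is that the analysis is cheap; what a searchable archive outsources is not computation but responsibility.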

The people affected should be central

How algorithms are trained, and how alert people are to potential bias and unintended harm, are critical parts of AI development. Data scientists currently shoulder this burden, and it will be a huge shift to move some of this front-line work to product managers, business leaders and others who can involve the people most affected by AI. This idea is central to many AI ethics programs because it recognizes that knowledge is created on the front lines and that, done right, this is where AI and humans can work in a dynamic, mutually reinforcing cycle of improvement.
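
As a concrete sketch of that cycle, consider a pipeline that routes the model’s low-confidence calls to human moderators and folds their labels back into the training set. Everything below - the classifier, the confidence threshold, the stub moderator - is a hypothetical illustration, not Facebook’s actual pipeline.

```python
import random

# A minimal human-in-the-loop moderation cycle: the model handles
# confident cases, humans handle ambiguous ones, and human labels
# flow back into the training set. All names and logic here are
# hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.9

class ToyClassifier:
    """Stand-in for a real content model."""
    def predict(self, post):
        # A real model would score the content; here we fake both
        # the label and the confidence.
        label = "remove" if "abuse" in post else "keep"
        return label, random.random()

    def retrain(self, labeled_examples):
        print(f"retraining on {len(labeled_examples)} human-labeled examples")

def ask_moderator(post):
    # Stub for the human decision - the hard, front-line judgment
    # this essay argues leaders should see up close.
    return "remove" if "abuse" in post else "keep"

def moderate(posts, model, training_data):
    review_queue = []
    for post in posts:
        label, confidence = model.predict(post)
        if confidence >= CONFIDENCE_THRESHOLD:
            print(f"machine decision: {label!r} for {post!r}")
        else:
            review_queue.append(post)  # too ambiguous for the machine

    for post in review_queue:
        label = ask_moderator(post)
        print(f"human decision:   {label!r} for {post!r}")
        training_data.append((post, label))  # front-line knowledge

    model.retrain(training_data)  # humans improve the machine

moderate(["hello world", "targeted abuse", "borderline post"],
         ToyClassifier(), training_data=[])
```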

Facebook’s content moderation is an important front-line activity. At a recent congressional hearing, Representative Katie Porter of California asked Zuckerberg whether he would "be willing to commit to spending one hour per day for the next year watching these videos and acting as a content monitor." The Facebook CEO replied by suggesting that he was "not sure that it would best serve our community for me to spend that much time" reviewing questionable content.

Zuckerberg’s answer betrayed two things: he doesn’t think his time is well spent understanding how moderators handle the most painful material his users are exposed to, and he told everyone that he doesn’t value understanding how his machine employees - his algorithms - are trained.

There’s no way of knowing exactly how the content moderation process feeds the Facebook algorithm, but we do know that Facebook wants its AI to be the main mechanism for dealing with quality issues, so it’s bizarre that Zuckerberg sees no value in the hands-on experience. Sure, an hour a day probably wouldn’t reveal anything of statistical significance, but it might make him a bit more connected to what the worst of free speech means for the victims of digital harm.

Ethics is about confronting the consequences of humans acting on humans. Ethical AI is about confronting the consequences of machines acting on humans. It’s a process full of ambiguities, and it’s complex because intelligence is complex. The crazy thing is that Zuckerberg, who isn’t afraid to spend hours with engineers, could offer a unique and valuable perspective if he decided it was worth his time.

AI can’t be neutral

The argument on both sides is stuck because it assumes the platform is neutral. Facebook’s AI doesn’t guarantee free speech at all. It guarantees that whoever pays to place an ad has a high probability of sending it to someone whose preferences can be nudged in a particular direction.

AI amplifies a digital advantage, which means that ethics in the world of AI needs to be that much more effective. If an untruth gains a small advantage, AI will make it real because it makes it big. That isn’t the same as giving everyone a voice: it can give one person a million voices before anyone knows the message is damaging. This algorithmic advantage pits emotionally activating information against the slow processes of human judgment and fact-checking. Our systems are not well equipped to deal with this and are unlikely to change anytime soon.
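
A toy simulation makes the compounding concrete. Assume a feed that allocates impressions in proportion to last round’s clicks (a crude stand-in for engagement optimization) and give an untruth a slightly higher click-through rate than its correction. The numbers are invented; the point is how a small edge snowballs.

```python
# Toy model of engagement-optimized ranking. Each round, the feed
# allocates impressions in proportion to last round's clicks, so a
# small edge in click-through rate compounds into dominance. All
# numbers are invented for illustration.

TOTAL_IMPRESSIONS = 1_000_000
ROUNDS = 30

# An emotionally activating untruth with a slightly higher
# click-through rate than a sober correction.
click_rate = {"untruth": 0.055, "correction": 0.050}

clicks = {name: 1.0 for name in click_rate}  # equal footing at the start
for round_num in range(1, ROUNDS + 1):
    total_clicks = sum(clicks.values())
    for name, ctr in click_rate.items():
        share = clicks[name] / total_clicks      # the feed's allocation rule
        clicks[name] = TOTAL_IMPRESSIONS * share * ctr
    if round_num % 5 == 0:
        untruth_share = clicks["untruth"] / sum(clicks.values())
        print(f"round {round_num:2d}: untruth share of attention = {untruth_share:.1%}")
```

In this toy world, a 10% edge in click-through rate hands the untruth roughly 95% of the attention within thirty rounds - one voice becoming a million before the fact-checkers finish their first pass.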

Human interaction has been shaped by millions of years of evolution. Our skills are honed for small groups and face-to-face contact. We need a new ethics - one that recognizes new forms of harm when platforms manage millions of human connections by algorithm, optimized for commercial gain.

The ethics of AI are the ethics of human-machine connection at scale, across infrastructures designed to optimize, at speeds beyond any human reaction time. Facebook’s absolutism and immovability conjure dystopian images of humans trapped beneath a robot overlord. AI ethics is a new and dynamic field, but it had better move fast because algorithms in the hands of the unethical move faster.


Elsewhere this week

  • Article from Defense One about the Pentagon’s AI ethics statements, saying they are “actually pretty good.” A lot of emphasis on human judgment.
  • Thought-provoking essay from Stanford’s Human-Centered AI group on human-in-the-loop AI development.
  • An oldie (2016) but a goodie - “Superintelligence: The Idea That Eats Smart People.”
