Another paradox: this time, explainability

Autonomy relies more on the relative power of the designer than it does on the quality of the explanation.


There seem to be a lot of paradoxes in AI. Perhaps we shouldn’t be surprised. I expect there will be many more. Why?

Paradoxical observations are a natural feature of AI. People build AI with the aim of it being just like us or not being like us at all. At these two extremes we see more of ourselves. In many ways it doesn’t matter whether it’s a perfect reflection of ourselves or a hall-of-mirrors distortion. What AI does is progressively reveal human traits in ways that are natural contradictions.

I've now started to look for them because I think they are both intriguing and revealing. Resolving a paradox can provide insights into AI design. In this week's Artificiality I take a look at a new paradox identified by top researchers Solon Barocas (Microsoft Research and Cornell University), Andrew D. Selbst (UCLA), and Manish Raghavan (Cornell University), who recently presented their work on the hidden assumptions behind explanations in algorithmic systems and what those assumptions mean for explainability in AI.

Explainable AI is a foundational concept - one that everyone agrees underpins trust in AI. In US law, citizens have a right to an explanation when an algorithm is used to make credit decisions, for example. But beyond certain legal rights, a user-centric explanation and justification is simply good design.

The research reveals the autonomy paradox. It is this: in order to respect someone's autonomy, the designer of the AI must make certain assumptions about what information will be valuable to users. There are many reasons why the designer cannot know everything that could be important - the realities of users' lives are usually not fully imaginable, or necessarily even finite. There may not be a clear link between the recommendation made by the AI and an action in the real world. And there's no way for a designer to know how outcomes vary from person to person.

All this means that the choices that designers need to make about what to disclose and how to explain an AI decision can have unintended consequences for users, consequences which could have been avoided if the users had disclosed different information about themselves.

This research reveals a basic power imbalance that is not easily remedied: given the informational position of the designer, there is simply no way to fully maintain commitment to a user's autonomy.

The answer that many reach for (and indeed, perhaps one of the reasons why it's taken until recently for this paradox to be revealed) is to collect more data. Maybe all of it! But this isn't a solution. The very act of collecting more data disrupts a person's autonomy because privacy is fundamental to autonomy. Hence the paradox.

This invites the question: is there any way that giving up more information can be autonomy-enhancing? The answer depends on the power structure and its underlying incentives. For example, we give up highly personal information to professionals all the time in ways that increase our decision-making ability. Many of these people are legally required to act in our best interest - lawyers, for example. So here's the insight - resolving the autonomy paradox relies more on the relative power of, and constraints on, the decision maker than it does on the quality of the explanation.

This work highlights the unique difficulties with AI explanations. The designer (and, by implication, the decision maker) has no choice but to make decisions about what to explain and how to factor in various assumptions about the real world. The right to (or desire for) an explanation from AI sets up an unintended power that accrues to the designer - whenever there is ambiguity in an individual's preferences, the designer has the power to resolve that ambiguity however they choose. The authors note:

"This leaves the decision maker with significant room to maneuver, the choice of when and where to further investigate, and more degrees of freedom to make choices that promote their own welfare than we might realize."

Solon Barocas, Andrew D. Selbst, and Manish Raghavan

This concern exists with any complex algorithmic system, but machine learning makes it worse, in part because of the nature of explanation itself. Broadly speaking, there are two types of explanation - principal reason and counterfactual.

Principal reason explanations tell users what dominated a decision. They come from the law (as opposed to computer science) and lack precision because they do not make use of decision boundaries. The intent is education; they are more justification than guidance.

Counterfactual explanations are intended to be more practical and actionable. They provide an "if you change this, then that" type of explanation. Some go so far as to sound like promises - do this and you will get that.
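
To make the contrast concrete, here is a minimal sketch in Python against an invented linear credit-scoring model - the feature names, weights, and threshold are all hypothetical, not taken from the paper. The principal-reason style simply ranks which features pulled the score down; the counterfactual style computes the smallest change to a single feature that would flip the decision.

```python
# Hypothetical toy model: nothing here comes from the paper; the features,
# weights, and approval threshold are invented for illustration.
WEIGHTS = {"income": 0.5, "debt_ratio": -1.0, "years_of_credit": 0.2}
THRESHOLD = 1.0  # score >= THRESHOLD means "approve"

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def principal_reasons(applicant: dict, top_k: int = 2) -> list:
    """Principal-reason style: name the features that contributed most
    negatively, with no reference to the decision boundary."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_k]

def counterfactual(applicant: dict, feature: str) -> float:
    """Counterfactual style: the value this one feature would need to take,
    holding everything else fixed, for the decision to flip."""
    gap = THRESHOLD - score(applicant)
    return applicant[feature] + gap / WEIGHTS[feature]

applicant = {"income": 3.0, "debt_ratio": 0.8, "years_of_credit": 1.0}
print(score(applicant))                         # 0.9 -> denied (below 1.0)
print(principal_reasons(applicant))             # ['debt_ratio', 'years_of_credit']
print(counterfactual(applicant, "debt_ratio"))  # 0.7 -> "reduce your debt ratio to 0.7"
```

Even in this toy, the designer's hand is everywhere: someone chose which feature the counterfactual varies, how many "principal reasons" to report, and what counts as a plausible change.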

In practice, AI explanations can never map cleanly to actions. There are simply too many interdependent variables at play. Add to this that new data shifts decision boundaries, so the model's output can drift away from the explanation architecture itself. Only the designer (read: the data scientist) has enough understanding of the domain and causal features to make the necessary decisions about what matters and how to explain the most relevant relationships.

This leaves us with the role of UX design. Here it gets even more interesting. There are two possible approaches to dealing with the autonomy paradox - build a more interactive UX for users (the decision subjects) and/or a more interactive UX for designers (the decision makers).

When designing a UX for users, designers can build more interactive tools that let users explore the effect of changing certain features. This gives users a greater sense of freedom and helps them maintain autonomy because they can play around; it is their own knowledge of their own constraints and choices that matters.
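
As a rough illustration of that first approach, here is a tiny command-line sketch (again using an invented scoring model, nothing from the research) in which the user, not the designer, chooses which feature to nudge and immediately sees the effect on the decision.

```python
# Hypothetical "what-if" loop: the user explores the model's behaviour by
# changing one feature at a time. Model weights and threshold are invented.
WEIGHTS = {"income": 0.5, "debt_ratio": -1.0, "years_of_credit": 0.2}
THRESHOLD = 1.0

def decide(applicant: dict) -> str:
    score = sum(WEIGHTS[f] * v for f, v in applicant.items())
    verdict = "approved" if score >= THRESHOLD else "denied"
    return f"score {score:.2f} -> {verdict}"

applicant = {"income": 3.0, "debt_ratio": 0.8, "years_of_credit": 1.0}
print(decide(applicant))

while True:
    feature = input("Feature to change (blank to stop): ").strip()
    if feature not in applicant:
        break
    applicant[feature] = float(input(f"New value for {feature}: "))
    print(decide(applicant))
```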

In designing a UX for themselves, designers can go about finding out even more information about users: what do they like and dislike, and what other preferences do they have?

Both of these approaches run into the next constraint: revealing enough of the model that someone could reconstruct it. Too much transparency can raise IP, proprietary, or trade secret issues. It also incentivizes gaming and, because AI can be more correlation than causation, a successful gaming strategy may not map to a successful real-world outcome. So while a user may cheat the model, they may only be cheating themselves.

Ugh, is there an end to this? Research into the UX of explanation is ongoing and we can expect much more to come. From a regulatory perspective, the idea of an "information fiduciary" role is an interesting and viable path to consider. Even if it doesn't become law, it's got legs as human-centric design.


Also this week:

  • I love The Verge; they are doing some great journalism in this space. This week, they published a long read on how AI hasn't taken workers' jobs but it has taken the boss's. Back in 2016, as Intelligentsia Research, we did a lot of work on this idea. Modern algorithmic management is a profound shift - because machines adapt to workers' behavior, there is significantly less ability to game the algorithm. If you're an employee, sorting out problems and dealing with algorithmic unfairness can be a nightmare because - let's face it - part of the objective of these systems is to disempower employees, so there is often no opportunity for feedback or control. When it's done well it can be a good experience - less bias and 100% availability, for example. But, as this article details, it's often deployed in pursuit of hyper-efficiency, which reduces humans to machines. Which, in turn, makes humans more replaceable once machines can do those last remaining tasks. My take: if you want to replace human labor with machines, have the machines manage humans first, because it exposes exactly where the machine/human break occurs and what to automate next.
  • Clearview AI was hacked and the company's client list stolen. The client list includes the Justice Department, ICE, Macy's, Walmart, and the NBA.
  • Facebook has hired WEF’s former head of technology policy to head up a group on “responsible innovation.” The company acknowledges that there needs to be much more thought given to working with product teams early on in the process. Per Axios: “In the wake of its many scandals and amid growing regulatory scrutiny, Facebook is looking to make sure it addresses ethical issues earlier in the design and engineering processes.”
  • Google is cracking down on tracking.
  • Pew Research - tech is going to further strain an already strained democracy.
  • Facebook’s “download your information” feature doesn’t tell you everything. This research from Privacy International gives an interesting, if typo-filled, update.
