There’s no universal human reaction to AI

Most decisions and most deciders are hybrids. Some machine, some human. The trick is to imagine all the ways that humans figure out ways around, over, and through the machine when what they really want is to make the decision themselves even if it means sacrificing accuracy.

AI-based decision tools and data-driven decision-making are designed to reduce the variability of human decision-making. People assume that data offers an objective view of reality and that an AI decision is rational. Decisions get easier with an objective, rational view of reality because the answer is apparent and incontestable. In reality, more data isn't necessarily more meaningful: data is what someone has chosen to pay attention to, and what's deemed rational depends entirely on the parameters people care about.

How would you expect individuals to react to a decision recommended by an AI? Would it depend on the context in which the AI made the recommendation? Or the level of confidence the AI expressed in the decision? Or the expertise of the person receiving the recommendation? The biggest factor in how people respond to AI-based decision-making is their own decision-making style.

Even when given identical AI inputs, people make entirely different choices. How people use input from AI depends on how they process information, how they regulate their emotions and behavior, and how urgent the decision is. Counter-intuitively, executives who are most rational and data-driven in their decision-making style can be the most likely to reject the algorithm, probably because they also place a high value on their own agency and autonomy. Conversely, executives who don't like to make decisions and tend to procrastinate are the most likely to delegate to AI, perhaps because it allows them to shift responsibility to the machine.

Humans do not have a single, universal response to AI, which means the accuracy of an AI prediction is only half the story. What matters most is to ask: what is the purpose of AI in this decision? AI can reduce variability in human decision-making, but it's important to understand whether this is a decision where variability is desired and, if so, how much and why. Where autonomy is valued, AI will simply invite subversion. Using intuition feels good: it builds a sense of fluency in judgments and creates an emotional signal of judgment completion. Mastery over the nuance of a situation feels good.

More resources on this topic:

Article: MIT Sloan Review on the human factor in AI-based decision making. (Paywall)

Book: Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass Sunstein

e-Book: How Humans Judge Machines, MIT Press (also available in print).


A few more things worthy of your time:

Just as ecosystems need diversity, so does what we pay attention to. A thought-provoking yet practical essay on rewilding your attention.

God, Human, Animal, Machine by Meghan O’Gieblyn. I started this book and couldn’t put it down. Humorous, topical, on-point. A wonderful series of essays on metaphor, meaning and technology.

An in-depth listen on the science of consciousness. Anil Seth on the Brain Inspired podcast. Topics covered include consciousness as controlled hallucination, free will, psychedelics, and whether consciousness is related more to life, intelligence, information processing, or substrate.
