AI's strange effects on intuition

Everyone likes to follow their intuition because it’s the ultimate act of trusting oneself.

Intuition plays a huge role in decisions. Even with AI making more decisions and acting on behalf of humans, our desire to use our intuition is not going to go away. As designers, academics and users gain more experience with AI, we get new insights into its impact on intuition in human-machine systems.

In a video presentation this week hosted by the Berkman Klein Center for Internet & Society, Sandra Wachter, associate professor at the Oxford Internet Institute, described the important role intuition plays in EU discrimination law and how AI disrupts people’s ability to rely on it.

Judges use intuition and common sense to assess “contextual equality” and decide whether someone has been treated unfairly under the law. Wachter describes this agility of the EU legal system as a “feature, not a bug.” Courts generally “don’t like statistics” because statistics can easily mislead and tend to skew the “equality of weapons,” handing the advantage to those who are better resourced. Common sense is part of the deal, but when discrimination is caused by algorithms that process data in multi-dimensional space, common sense can fall apart. Experts need technical measurements to help them navigate new and emergent grey areas.

Finding and fixing discrimination has also relied predominantly on intuition. Human discrimination stems from negative attitudes and unintentional biases, which create a “feeling” of inequality. Equality is observable at a human scale: we can see that others are getting promoted, and we know that everyone pays the same price in the supermarket.

But machines discriminate differently than humans do. People do not know that they haven’t been served an ad for a job. A candidate being assessed in a video game has no hope of gleaning any causal information that might explain a correlation between click speed and predicted job performance. Data and AI design stratify populations differently than we traditionally, and intuitively, do.

AI is valued for its ability to process data at scale and find unintuitive correlations and patterns between people. AI is creating new groups that are treated unequally, but people in these groups have no protection because they do not fit into traditional buckets. There’s no legal protection for “slow clickers” as there is for age.
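
To make this concrete, here is a minimal, hypothetical sketch of how such groups can emerge. The behavioral features, numbers and cluster labels below are all invented for illustration; no real assessment system is implied.

```python
# Hypothetical sketch: clustering candidates on behavioral signals alone.
# All features and values here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic behavioral features for 500 candidates: mean click latency (ms)
# and time spent in the assessment (minutes). Neither is a protected
# attribute like age, gender or race.
features = np.column_stack([
    rng.normal(350, 80, 500),
    rng.normal(25, 10, 500),
])

# Cluster purely on behavior. The resulting groups ("slow clickers",
# "fast clickers", ...) have no legal standing, yet a downstream model
# could treat them very differently.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for g in range(3):
    members = features[groups == g]
    print(f"group {g}: {len(members)} candidates, "
          f"mean click latency {members[:, 0].mean():.0f} ms")
```

None of these clusters maps onto a category that our intuitions, or discrimination law, are tuned to detect.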

Machines are creating a world where discrimination may happen more but we sense it less. Our intuitions are not only honed to detect unequal treatment at a human scale, they are honed to traditional classifications such as gender, race and age: things we can perceive within our conscious awareness. Data and correlations about digital behaviors, unrevealed preferences or statistical affinities do not make our alarm bells ring in the same way.

We can’t design AI without considering its impact on human intuition within the human-machine system. This means more than designing for control and human-machine-human hand-offs. It means scaffolding the development of intuition and giving machines more nuanced, contextual definitions of fairness, grey areas and human bias.

In some respects, making humans more like machines is straightforward. We know how to do it because we can test it on ourselves, on our own intuitions. We can design interfaces that counteract automation bias and prompt useful skepticism and “system 2” thinking. For example, we can show users the confidence an AI lacks in a prediction alongside the confidence it has in it (h/t Josh Lovejoy), as sketched below. This can refine someone’s intuition and even impart a sense of an alternative world that might exist beyond the algorithm’s personalized output. We can design AI assistants for experts where it’s clear upfront that the AI is there to counteract confirmation bias, not to act as a perfectly objective, neutral advisor. Above all, designing for humans to be more like machines is easier because accountability remains so clearly with the human.
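
Here is a hedged sketch of that pattern. The labels, probabilities and `describe_prediction` helper are hypothetical; the point is simply to surface the runner-up options a model considered, not just its top answer.

```python
# Hypothetical sketch: present what the model is confident about alongside
# what it is not, to prompt skepticism and "system 2" thinking.
def describe_prediction(labels, probs, top_k=3):
    """Render a prediction with the confidence the model has in it, the
    confidence it lacks, and the alternatives it also considered."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    best_label, best_p = ranked[0]
    lines = [f"Best guess: {best_label} "
             f"({best_p:.0%} confident, {1 - best_p:.0%} not confident)"]
    lines += [f"  Also considered: {label} ({p:.0%})"
              for label, p in ranked[1:top_k]]
    return "\n".join(lines)

# Invented example output for a hypothetical screening model.
print(describe_prediction(
    labels=["approve", "review", "reject"],
    probs=[0.62, 0.28, 0.10],
))
```

Even this tiny change reframes a single answer as one point in a distribution, hinting at the alternatives that exist beyond the top-ranked output.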

Making machines more human is trickier. How can AI designers, in Wachter’s words, “embrace interpretive flexibility” and create more agile machines?

Perhaps first, should they?

There’s an ethical decision to make about how much human judgment and subjectivity we should even try to automate, especially when it comes to fairness. Much of the recent progress, in fact pretty much all of what we would consider “modern AI,” comes from machines learning to do things that humans evolved to do. Machines can see, converse and even mimic empathy. Part of the design puzzle is that we don’t know how we do these things ourselves, so we have fewer intuitions about how to design for them.

How far should we go in designing uniquely human skills into machines? It should be table stakes to design a human-machine system that plays to the strengths of both, but it’s harder than it looks because good systems are highly dynamic; a well-designed system should push the boundaries of both human and machine skills rather than leave either stuck in the zone of “so-so automation.” Humans are good at working inside the grey areas because this is where we work together to solve problems, advance knowledge and deal with unpredictable events. Examined in this light, we shouldn’t be trying to remove intuition, we should be trying to enhance it.

We can find ways to have AI reveal its knowledge to us. Innovations in explainability should aim to help people refine existing intuitions and form new ones. Ultimately, clever design may even help us see what has only been visible to the machine, much like viewing an old-fashioned negative. This should be a core goal of explainability in AI and user design.
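
As one hedged illustration of explainability as intuition-building, the sketch below uses permutation importance, one of many possible techniques, to show which features a model leans on. The dataset and model are stand-ins chosen only because they are built into scikit-learn.

```python
# Illustrative sketch: permutation importance as a simple "negative" that
# reveals what the machine sees. Dataset and model are stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much accuracy drops:
# a human-readable proxy for how much the model relies on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```

A readout like this gives a person something concrete to build an intuition around, and something concrete to challenge.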
