How Cognitive Biases and Reasoning Errors Fuel AI Hype

The #UnthinkingAIEnthusiasts are on fire these days! Here's a skeptic's guide to help you separate reality from hype.


Separating hype from reality in AI has become more daunting than ever. Harder than cleantech in the aughts (we should know, we were there). Harder than previous AI waves (we were there too). We've experienced tech hype cycles before, but the current wave feels like a different beast altogether.

To shield yourself from the relentless enthusiasm of the #UnthinkingAIEnthusiast, it helps to understand the cognitive biases and reasoning errors that fuel the hype.

Benchmarks ≠ Intelligence

Kahneman's work exposed the substitution heuristic, where people answer a simpler (but not complete) question instead of the more complex (but correct) one. In AI, we often substitute "Can AI replace humans?" with "Has AI beaten a human on this benchmark?"

Benchmarks, while useful, shouldn't be seen as the be-all and end-all of measuring humans against machines. Mapping artificial intelligence onto human intelligence by setting ever-harder benchmarks is like thinking you can reach the horizon by walking far enough. Properly evaluating AI's capabilities requires grappling with real-world problem solving and the complex, ill-defined contexts in which human intelligence operates.

Stories on Steroids

Availability bias leads us to overestimate the importance of readily available information. With the constant barrage of AI success stories and enthusiasm in the media, it's easy to overestimate AI's capabilities and potential impact, especially in the workplace. Early adopters' enthusiasm, relatively small sample sizes, and the heterogeneity of real-world work (which forces productivity studies to assess narrow tasks) may all exaggerate effect sizes.

Just as larger clinical trials tend to result in smaller effect sizes compared to initial, smaller studies, we can expect to see more realistic estimates of AI's impact on productivity as more comprehensive research is conducted.
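The shrinking-effect pattern is easy to see in a toy simulation. The sketch below uses made-up numbers (a true 10% gain, lots of individual noise, a hypothetical "impressive enough to publicize" threshold) purely to illustrate the statistics: small studies scatter widely around the true effect, and if only the eye-catching ones circulate, the reported average is inflated.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.10  # assume a modest 10% real productivity gain
NOISE_SD = 0.50     # individual variation dwarfs the effect

def observed_effect(n):
    """Mean measured gain across one study with n participants."""
    return statistics.mean(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n))

# Run many small and many large studies of the same true effect.
small = [observed_effect(20) for _ in range(2000)]
large = [observed_effect(500) for _ in range(2000)]

# Small studies scatter far more widely around the true 10%,
# so headline-grabbing outliers are common.
print(f"small-N spread (sd): {statistics.stdev(small):.3f}")
print(f"large-N spread (sd): {statistics.stdev(large):.3f}")

# If only 'impressive' results get publicized, the average reported
# effect from small studies lands well above the true value.
hyped = [e for e in small if e > 0.30]
print(f"average publicized small-study effect: {statistics.mean(hyped):.2f}")
```

The large-N studies cluster tightly near the true effect, which is exactly the "more comprehensive research gives more realistic estimates" dynamic.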

All Aboard the Hype Train

Anchoring bias causes us to fixate on early productivity gains, with some studies showing gains as high as 70%. This sets unrealistic expectations for future improvements. Meanwhile, the bandwagon effect leads people to believe that everyone is using AI, so they must jump on board to keep up.

These biases, along with confirmation bias and good old-fashioned FOMO, supercharge the AI-as-Copilot hype, oversimplifying the complex, non-linear and “loopy” interactions between machine employees and humans.

Exponential Thinking Hits a Brick Wall

The fallacy of extrapolation assumes that exponential growth can continue indefinitely, ignoring resource limitations. In AI, constraints on energy, data, and societal acceptance can't be handwaved away. Ignoring these constraints and assuming perpetual exponential growth is a form of extrapolation bias. It overlooks the potential for unintended consequences in complex systems, like the impact on creative industries and the overall diversity of creative output.

If AI disrupts creative professions, making them economically unsustainable, we could face a resource constraint on novelty—ultimately limiting the resources available for AI to learn from and build upon.
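A minimal numerical sketch makes the fallacy concrete. The capacity and growth-rate values below are assumptions for illustration only: a plain exponential projection is compared against logistic growth, where a finite resource (data, energy, goodwill) caps the curve.

```python
CAPACITY = 1000.0   # assumed hard resource ceiling (arbitrary units)
RATE = 0.5          # assumed early growth rate per period

def exponential(x0, steps):
    """Unconstrained compounding: the extrapolator's mental model."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + RATE))
    return xs

def logistic(x0, steps):
    """Same early rate, but growth slows as x approaches CAPACITY."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + RATE * x * (1 - x / CAPACITY))
    return xs

exp_curve = exponential(1.0, 30)
log_curve = logistic(1.0, 30)

# Early on the two curves are nearly indistinguishable -- the seductive phase.
print(f"step 5:  exp={exp_curve[5]:.1f}  logistic={log_curve[5]:.1f}")
# Later, the constraint bites and the two stories diverge wildly.
print(f"step 30: exp={exp_curve[30]:.1f}  logistic={log_curve[30]:.1f}")
```

The point is not the specific numbers but the shape: any early segment of a logistic curve looks exponential, so early data alone cannot tell you which world you are in.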

AI Is a Map, Not the Territory

Reifying data, algorithms, and agents as "workers" applies the wrong metaphor to human cognitive work. Human work is characterized by cultural problem solving, collective intelligence, and constant adaptation—not linear, machine-like predictability. AI, while a powerful abstraction, remains a map, not the territory. Human reasoning, quirks and all, can't be reduced to a set of algorithms. Nor should it be.

Don't be swayed by hype into believing that building intelligence is more fundamental than possessing it ourselves. Even if we like the map, life remains our territory. Awareness of our own cognitive biases—something machines inherently lack—is crucial to surviving AI hype.


Here I've included some charts recently published as part of Stanford's 2024 AI Index report. They matter because they can be interpreted differently depending on the story and the context. As always, charts can lie.

  1. Benchmarks aren't AGI: benchmarks are ill-defined and do not represent how we use our intelligence IRL.
  2. They also give a very different picture when they are made harder. (Source: AI Index Report, Stanford HAI, April 2024)
  3. Here's a typical data set on the use of GenAI to enhance worker productivity. The numbers are impressive, but how the improvements translate into the complex work systems of organizations is still anyone's guess. Organizations designed for emergence and complex change will have entirely different outcomes than organizations that aren't.
  4. We've all seen and lived through exponential growth during the Covid pandemic. But it's crucial to recognize that when resources are constrained, exponential growth can't continue indefinitely. The key is to identify the real constraints and consider what happens when they start to limit growth. Clearly, cost is a constraint, even though the hype would have you ignore it for now.
  5. Another constraint on exponential AI growth is human goodwill. As AI diffuses across the economy, it's anyone's guess how fear and pessimism will play out. Grand promises of an AI-powered transcendent life won't be enough; we'll need concrete plans and actions to ensure a stable transition for our societies.
