Key Points:
- Max Bennett, an AI entrepreneur, tells an engaging story of the brain's evolutionary journey towards intelligence and draws parallels for AI development. The core idea is that until AI systems replicate each step of our brain's evolution, they will fail to exhibit human-like intelligence.
- Bennett argues that to unlock AI's full potential, systems should mirror biological evolution, including key "breakthroughs" like learning from emotional valence, forming temporal predictive models, simulating possible scenarios, attributing mental states, and accumulating knowledge through language.
- Biological intelligence progressed through five key "breakthroughs": steering, reinforcing, simulating, mentalizing, and speaking. These built progressively more advanced functions like imagination and theory of mind.
- Open questions remain around precisely which higher cognitive capabilities biological intelligence possesses that advanced AI still lacks, and around how we can formally evaluate more complex facets like curiosity, imagination, and metacognition beyond human benchmarks.
- He proposes a speculative "6th Breakthrough": the decoupling of intelligence from biology.
- We question, however, whether the most revolutionary AI systems might manifest differently from human mimicry or biology altogether, embracing alternate pathways that reflect the technology's non-biological essence.
In A Brief History of Intelligence, Max Bennett sets out to tell the story of the brain's evolution towards human intelligence and the parallels AI must achieve to replicate or even exceed our cognitive abilities. Bennett is an entrepreneur who has cofounded and led multiple AI and technology companies. This book, as Bennett puts it, was the one he wanted to read himself: a deep dive into the complexity of our cognitive evolution.
Bennett's narrative is firmly rooted in neuroscience. He explores AI not just as a technology but as a potential equal or superior to human intellect. He extends his exploration to the future of human evolution, envisioning a fusion of biological and machine intelligence, going so far as to speculate on a new species of human.
Bennett's core argument in A Brief History of Intelligence is that the fullest potential of AI can only be realized if its development parallels biological evolution, a view also articulated by AI luminaries such as Geoffrey Hinton and Yann LeCun. Just as natural intelligence evolved through trial and error, adaptation to environments, and learning from experience, AI could benefit from a similar developmental journey, gradually acquiring complex skills and adaptive abilities.
He uses a parable to illustrate this point: an 1800s inventor, transported to the future and experiencing a 747 flight, would return to design flying machines based on visible elements like wings and engines (even seats!) but would miss the underlying principles of flight. Similarly, in AI, we often focus on designing intelligence without fully understanding the fundamental reasons and complexities of its biological counterpart.
Bennett's work is a compelling read. I’ve been fascinated with evolution since I was a kid and I both enjoyed his storytelling and appreciated his simplifications. Bennett has successfully highlighted, curated, compressed, and framed a fascinating and easy-to-grasp narrative of the evolutionary journey to intelligence.
Particularly impressive is his storytelling of five biological intelligence breakthroughs. I was initially skeptical, thinking this would be overly simplistic. But not so. His "breakthrough" structure makes sense:
Breakthrough #1: Steering. By categorizing stimuli into good and bad, and learning to turn towards the good and away from the bad (jargon: approach versus avoid), the first brains gave animals their affective (emotional) templates: pleasure, pain, satiation, and stress. (A toy sketch of this steering mechanism follows this list.)
Breakthrough #2: Reinforcing. By learning to repeat behaviors that previously led to positive valence (pleasure) and to inhibit behaviors that led to negative valence (displeasure), animals invented reinforcement learning, which gave rise to cognitive features such as omission learning, time perception, curiosity, fear, excitement, disappointment, and relief.
Breakthrough #3: Simulating. By mentally simulating stimuli and actions, animals were able to remember past events and consider alternatives in a counterfactual manner. In this way, imagination formed the basis of the one thing evolution cares about more than almost anything: movement in the physical world.
Breakthrough #4: Mentalizing. By evolving the capability to model one's own mind, primates could anticipate future needs, model others' mental states (theory of mind), and learn skills by observation.
Breakthrough #5: Speaking. By evolving language, we have been able to tether our inner simulations together and accumulate thoughts over generations.
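To make the earliest of these concrete, here is a minimal sketch of valence-based steering: an agent that senses whether a stimulus is improving and turns away when it worsens. This is my own toy illustration, not Bennett's; the stimulus gradient, step size, and turn rule are all assumptions.

```python
import math

# A toy "steering" agent: classify the change in a stimulus as good or bad,
# and turn away when things get worse. The gradient world and the parameters
# below are illustrative assumptions, not details from the book.

def food_concentration(x):
    """A 'good' stimulus that peaks at x = 10."""
    return math.exp(-0.1 * (x - 10.0) ** 2)

position = 0.0
heading = 1.0  # +1 = moving right, -1 = moving left
previous = food_concentration(position)

for _ in range(60):
    position += 0.5 * heading
    current = food_concentration(position)
    if current < previous:   # negative valence: the stimulus got worse...
        heading = -heading   # ...so turn around (avoid)
    previous = current       # positive valence: keep going (approach)

print(f"Agent ends near x = {position:.2f}; the stimulus peaks at x = 10.0")
```

Despite having no map and no memory beyond the last sensation, the agent reliably homes in on the peak, which is the essence of approach-versus-avoid steering.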
The "onion layer" approach in the context of evolution and neuroscience is a sound way to understand the development of intelligence. It starts with the simplest mechanisms, like the reflexes seen in early worms, which lay the groundwork for trial-and-error learning. As we move through the layers, adding elements like intention, imagination, and the ability to create explanations, we see how these basic functions evolve in mammals and primates, ultimately forming the foundation of advanced intelligence.
The most compelling parts of his storytelling are when there are real, practical transfers of techniques or algorithms between biological and artificial intelligence. The discovery of temporal difference learning and of dopamine as a signal of expectation, for example, is a fascinating story of back-and-forth discovery, insight, and co-application between the world of the designed and the world of the evolved.
He tells the story of how, in the 1980s, Richard Sutton of UMass Amherst tackled a problem in computer science from a biological angle. He wanted to take the strategies animals use to work out which action was best (the credit assignment problem) and apply them to AI, in this case simple game play. He had a hunch that animals solved this problem using expectation: in other words, that decisions were reinforced using predicted rewards rather than actual rewards. Bennett explains how Sutton decomposed reinforcement learning into two components. The critic predicts the likelihood of winning, while the actor chooses an action and gets rewarded whenever the critic thinks the actor's chance of winning has increased, rather than receiving a reward only at the end of the game. The actor learns based on the temporal difference in the predicted reward from one moment to the next.
A savvy reader realizes this logic is circular: the actor depends on the critic, which depends on the actor. But here's the crazy thing. Sutton found that by training the two simultaneously, “a magical bootstrapping occurs between them.” Over enough time and enough games, the system makes intelligent decisions. Fast forward a few years, and one of his students, Peter Dayan, found the connection between the AI and the brain: dopamine. Long story short, dopamine is the way biology tells us that things are going better than expected and, in so doing, repurposes a “fuzzy average” of good choices into “an ever fluctuating, precisely measured, and meticulously computed predicted-future-reward signal.”
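To see that bootstrapping in action, here is a minimal sketch of temporal-difference actor-critic learning on a toy corridor task. The environment, hyperparameters, and tabular setup are my own illustrative assumptions, not details from the book or from Sutton's original experiments.

```python
import numpy as np

N_STATES = 6          # corridor states 0..5; reaching state 5 wins (reward 1)
GAMMA = 0.95          # discount factor on future reward
ALPHA_CRITIC = 0.10   # learning rate for the critic's value estimates
ALPHA_ACTOR = 0.05    # learning rate for the actor's action preferences

rng = np.random.default_rng(0)
values = np.zeros(N_STATES)       # critic: predicted future reward per state
logits = np.zeros((N_STATES, 2))  # actor: preferences for [left, right]

def policy(state):
    """Softmax over the actor's action preferences."""
    p = np.exp(logits[state] - logits[state].max())
    return p / p.sum()

for episode in range(2000):
    state = 0
    while state != N_STATES - 1:
        probs = policy(state)
        action = rng.choice(2, p=probs)  # 0 = left, 1 = right
        next_state = max(state - 1, 0) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # TD error: did things just get better or worse than the critic
        # predicted? This is the quantity Bennett likens to dopamine.
        td_error = reward + GAMMA * values[next_state] - values[state]

        # The critic learns to predict; the actor is "rewarded" whenever the
        # critic's estimate of future reward rises (td_error > 0).
        values[state] += ALPHA_CRITIC * td_error
        grad_log_pi = np.eye(2)[action] - probs  # softmax policy gradient
        logits[state] += ALPHA_ACTOR * td_error * grad_log_pi

        state = next_state

print("Critic's state values: ", np.round(values, 2))
print("P(move right) by state:", np.round([policy(s)[1] for s in range(N_STATES)], 2))
```

Neither table is useful at the start, yet trained together they converge: the critic's values rise smoothly toward the goal, and the actor learns to move right in every state using only moment-to-moment changes in predicted reward.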
The cross-transfer of ideas between the artificial and biological realms often remains more metaphorical than practical. While examples like reinforcement learning show productive parallels, they are exceptions rather than the norm. Typically, we find inspiration from one field to another, rather than directly transferable concepts or algorithms. Such inspiration is valuable for framing ideas but doesn't directly accelerate scientific progress. For instance, using metaphors to suggest AI needs a "world model" for true intelligence (like AGI) helps conceptually, but it doesn't provide concrete methods for designing or evaluating such models in AI.
This is where Bennett doesn't quite stick the landing. Let's go back to the parable of the 747. If the philosophy is real, shouldn't we have more details about exactly what we don't have in AI that we need? Where are the key gaps to fill? Sure, we have some conceptual pointers (we need algorithms for curiosity, imagination, and metacognition, for example), but do we have novel tests or goals, informed by evolution but not dictated by it? Can we construct new benchmarks or tests for AI that align with his thesis? This gap might not be a flaw in Bennett's approach but a reflection of our preoccupation with mimicking human intelligence in AI, rather than asking how an artificial intelligence could be profoundly different.
This brings us to Breakthrough #6, the one yet to occur. Bennett says: “It seems increasingly likely that the sixth breakthrough will be the creation of artificial superintelligence; the emergence of our progeny in silicon.” Artfully, he manages to make me somewhat excited about the idea of this new species evolving, maybe because he differentiates his ideas from the more dangerous ideology of transhumanism. He stays firmly in the time scales of biological evolution even though, as he says, “Breakthrough #6 will be when biological intelligence unshackles itself from these biological limitations.”
How this happens is anyone's guess. Evolution gives us hints at possible futures: life defies entropy, processes information, and self-replicates. We know that evolution selects based on advantage, driven by the drive to survive rather than by any particular goal-seeking on evolution's part. And we know that although evolution raises the probability of increasing complexity, there is no guarantee that more complexity will arise. Nothing lasts forever, including Sapiens.
The question I'm left asking is: if AI is the next step in our evolution, why are we building it to be like us? New discoveries from complexity science have us thinking differently about the space of possible intelligences: "liquid" brains versus "solid" brains, for example, where intelligence can lack stable connections and static elements (as in ant colonies and immune systems), in contrast to our brains, where neurons exist in well-defined and persistent architectures. A Brief History of Intelligence is an excellent primer for understanding what (in the language of our Obsessions) we call Mirrors of Intelligence. The next iteration of the story of intelligence might be even more co-evolutionary than anyone can foresee today.
If you read A Brief History of Intelligence and want more depth and detail, I'd recommend these books:
- Predictive processing in our brains: The Experience Machine, Andy Clark
- Crossovers between AI and biological intelligence: Natural General Intelligence, Chris Summerfield
- AI history that includes some good basics: The Road to Conscious Machines, Michael Wooldridge
- Evolution of decision making and agency: Free Agents, Kevin Mitchell and The Evolution of Agency, Michael Tomasello
- Evolution of consciousness: The Deep History of Ourselves, Joseph LeDoux
- Groups and social dynamics, cooperation: The Social Brain, Tracey Camilleri, Samantha Rockey, and Robin Dunbar and The Social Instinct, Nichola Raihani
- The neuroscience of how we think as a collective: Joined-Up Thinking, Hannah Critchlow
- How biological intelligence is a better "learner" than AI: How We Learn, Stanislas Dehaene
- Neuroscience of human behavior: Behave, Robert Sapolsky
- Modern and efficient primer on neuroscience: Seven and a Half Lessons About the Brain, Lisa Feldman Barrett
- How our knowledge is social and more on logical glitches and illusions: The Knowledge Illusion, Steven Sloman and Philip Fernbach