David Deutsch, the reclusive physicist and philosopher, suggests that the key to human progress lies in our unique ability to create explanatory knowledge—theories that not only describe the world but open new avenues for discovery. In Deutsch's view, intelligence, whether human or artificial, is about participating in an endless frontier of explanation and innovation.
Deutsch's book, The Beginning of Infinity, has influenced our understanding of AI, particularly in conceptualizing AGI. His work provides a clear framework for thinking about intelligence, emphasizing that it isn't tied to IQ or mechanistic benchmarks, but to the more elusive qualities we deeply value, such as creativity. As he put it in 2011 (and it arguably still holds today):
The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming will not be difficult.
For general intelligence, AI has to go beyond computation and pattern recognition and develop the capacity for open-ended self-improvement, the kind of unboundedness Deutsch sees in human thought. This means AI systems must be able to generate novel ideas, critically evaluate their own knowledge, identify gaps in their understanding, and autonomously seek out new information to fill those gaps. They should be capable of reformulating problems, making unexpected connections between disparate fields, and even questioning their own fundamental assumptions. They must be able to set their own goals for improvement, driven by an intrinsic curiosity about the world rather than externally imposed objectives.
To match Deutsch's conception of intelligence, AI needs to become not just a problem-solver but a problem-finder and a creator of new knowledge, one that expands its own horizons in ways even its creators might not have anticipated. This idea is called open-endedness, and it is gaining traction in the AI research community.
Open-endedness is the capacity for continual innovation and learning, a trait that has long defined human intelligence but does not yet exist in AI. A new paper from Google DeepMind caught my eye because it proposes a formal definition of open-endedness in AI, argues that it is required for Artificial Superhuman Intelligence, and outlines how it might be designed.
Before we get to the research, take a moment to consider the trajectory of human knowledge. This is important as it shows why we are different from other species on the planet and why the very idea of artificial superintelligence is so profound. A uniquely human capability is generating novel ideas and building upon existing knowledge. This isn't just about accumulating facts—it's about creating entirely new categories of understanding, often in ways that would have been impossible for our ancestors to predict. Many technological innovations aren’t merely an extension of existing knowledge—they are transformations that create new possibilities, new questions, and new fields of study.
This capacity for open-ended discovery is what sets human intelligence apart. We don't simply optimize within known parameters—we redefine the parameters themselves. We ask questions that lead to more questions, in an ever-expanding frontier of knowledge. This open-endedness is what allows us to thrive in a world of uncertainty and change. It’s made us into the most invasive species on the planet. We don't need to be programmed for every possible scenario (or selected for a particular habitat) because we have the ability to generate novel solutions to novel problems.
AI does not have this ability, and this is where the DeepMind researchers focus their attention. They ask: What is open-endedness in AI, and how can we achieve it? Building on an understanding of human open-endedness, the researchers propose a formal definition of it in artificial systems.
Their definition contains two critical components: learnability and novelty. An open-ended system must produce artifacts—be they strategies, solutions, or creations—that are both learnable and novel from the perspective of an observer. The idea is formalized using the language of statistical learning theory, providing a mathematical framework for measuring open-endedness.
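To give a flavor of the formalism (this is a loose paraphrase, not the paper's exact notation): imagine the system emits a sequence of artifacts $x_1, x_2, \dots$, and an observer maintains a predictive model whose expected loss when predicting artifact $x_t$, after seeing the history $x_1, \dots, x_{t'}$, is $\ell(t', t)$. Then, roughly,

$$
\text{learnability: } \ell(t', t) \text{ decreases as the history length } t' \text{ grows}, \qquad
\text{novelty: } \ell(t', t) \text{ increases as } t \text{ grows}.
$$

A system is open-ended, from that observer's point of view, when both conditions keep holding indefinitely.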
Here, 'learnable' means that as the observer sees more of the system's outputs, they become better at predicting future outputs. This is similar to how a scientist becomes more adept at understanding a phenomenon the more they study it. The 'novelty' aspect ensures that the system continues to surprise the observer, producing outputs that are increasingly difficult to predict based on what has come before.

Think of open-ended AI like a jazz musician improvising. The 'learnability' aspect is equivalent to the musician's knowledge of scales, chord progressions, and musical theory. This foundation allows listeners (the observers) to follow and appreciate the music. The 'novelty' is like the unexpected riffs, the surprising chord changes, the moments that make listeners sit up and take notice. A great jazz improvisation balances familiar elements with surprising innovations.
This balance between learnability and novelty is crucial. A system that is purely novel might produce random, incomprehensible noise—interesting, perhaps, but not useful. Conversely, a system that is entirely learnable would eventually become predictable and cease to innovate.
The researchers suggest that achieving open-endedness will involve combining open-ended algorithms with foundation models, leveraging the vast knowledge encoded in those models as a springboard for open-ended discovery. The key is to pair them with algorithms specifically designed to explore and innovate.
For instance, the researchers discuss the potential of evolutionary algorithms, which mimic the process of natural selection to generate and refine solutions. When combined with the knowledge embedded in foundation models, these algorithms could potentially generate ideas that are both grounded in existing knowledge and genuinely novel.
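To make that pairing concrete, here is a deliberately minimal sketch of what such a loop could look like. Everything in it is illustrative rather than taken from the paper: `foundation_model_propose` is a hypothetical stand-in for a real foundation-model call, and the "observer" is just a character-frequency model that scores how surprising a candidate looks given what it has already seen.

```python
import random
from collections import Counter

def foundation_model_propose(parent: str) -> str:
    """Hypothetical stand-in for a foundation-model call that proposes a
    variation of an existing artifact; here, a random one-character mutation."""
    alphabet = "abcdefgh"
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(alphabet) + parent[i + 1:]

class Observer:
    """A toy observer: a character-frequency model of the artifacts seen so far."""

    def __init__(self) -> None:
        self.counts = Counter()
        self.total = 0

    def surprise(self, artifact: str) -> float:
        """Higher when the artifact uses characters the observer has rarely seen."""
        if self.total == 0:
            return 1.0
        return sum(1 - self.counts[c] / self.total for c in artifact) / len(artifact)

    def update(self, artifact: str) -> None:
        self.counts.update(artifact)
        self.total += len(artifact)

def open_ended_loop(generations: int = 20, population_size: int = 8) -> list:
    observer = Observer()
    population = ["aaaa"] * population_size
    archive = []
    for _ in range(generations):
        # The foundation model proposes variations of the current artifacts...
        candidates = [foundation_model_propose(p) for p in population for _ in range(2)]
        # ...and the evolutionary step keeps the ones the observer finds most novel.
        candidates.sort(key=observer.surprise, reverse=True)
        population = candidates[:population_size]
        # The observer learns from what it sees, so novelty is a moving target.
        archive.append(population[0])
        observer.update(population[0])
    return archive

if __name__ == "__main__":
    print(open_ended_loop())
```

The point of the sketch is the division of labor: the foundation model supplies plausible variations, the evolutionary step selects for what the observer still finds novel, and the observer keeps learning, so yesterday's surprises stop counting as novel tomorrow.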
But how close are we to achieving this vision of open-ended AI? We have certainly seen demonstrations of AI systems that exhibit some aspects of open-endedness. AlphaGo, for instance, surprised human experts with novel strategies in the game of Go. We've also seen language models that can generate creative text and solve problems in unexpected ways.
A unique aspect of the researchers' definition is its reliance on an observer. The open-endedness of a system is judged not in absolute terms but from the perspective of an observer who is trying to predict and learn from the system's outputs. This observer-dependent nature of open-endedness is important because it departs from “regular” science and from many AI approaches, where we usually strive for objective measures of performance. It recognizes that what counts as novel or learnable depends on the observer's knowledge and capabilities—what's groundbreaking to one person might be trivial to another with different expertise.
The observer-dependent definition poses a challenge to how we evaluate AI systems. Instead of fixed benchmarks, it suggests we might need more dynamic, context-sensitive ways of assessing AI capabilities. The hope is that an observer-dependent approach captures the real-world complexity that more objective measures might miss.
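Here is a tiny, purely illustrative example of that dependence (not from the paper): the same artifact gets very different "surprise" scores from two observers with different prior exposure, so any open-endedness judgement would differ between them too.

```python
import math
from collections import Counter

def surprise(observer_history: str, artifact: str) -> float:
    """Negative log-probability of the artifact's characters under a simple
    frequency model fitted to whatever this observer has seen before."""
    counts = Counter(observer_history)
    total = sum(counts.values())
    # Laplace smoothing so characters the observer has never seen still get
    # a small nonzero probability.
    vocab = set(observer_history) | set(artifact)
    return -sum(
        math.log((counts[c] + 1) / (total + len(vocab))) for c in artifact
    )

artifact = "zebra"
novice_history = "0101010101"                      # observer with unrelated experience
expert_history = "a zebra and a bear ate berries"  # observer with related experience

print(surprise(novice_history, artifact))  # higher: the artifact looks very novel
print(surprise(expert_history, artifact))  # lower: the artifact is much more predictable
```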
All current AI systems fall short of true open-endedness. We're still grappling with fundamental challenges, such as how to create AI systems that can set their own goals, how to ensure that novelty is meaningful rather than random, and how to maintain coherence and stability in a system that's constantly evolving. Existing open-ended algorithms often struggle to maintain continuous innovation, frequently reaching plateaus in their capabilities. This is particularly evident in their difficulty with transfer learning—the ability to apply knowledge gained in one domain to novel situations, a key aspect of human-like intelligence.
The practical challenges of implementing truly open-ended AI systems are formidable. The computational resources required for such systems are likely to be enormous. The data needs for training these systems to operate across diverse domains are equally daunting. Another crucial consideration is the ability of open-ended systems to handle multiple modalities of information simultaneously. Human intelligence seamlessly integrates visual, auditory, tactile, and other forms of information. For AI to achieve similar levels of open-ended discovery, it will need to develop comparable multi-modal capabilities. We are still in the early days of multi-modal models, let alone of open-ended algorithms that work across modalities.
Long-term stability is another critical issue. As AI systems continue to learn and evolve over extended periods, maintaining coherence and stability becomes increasingly difficult. How can we ensure that an open-ended system remains consistent and reliable while continually generating novel outputs? One can imagine how such an AI could easily get stuck in a local minimum or, alternatively, head off into the wilderness alone.
Just as Deutsch sees human knowledge as potentially infinite, limited only by the laws of physics, these AI researchers envision artificial systems capable of unbounded exploration and innovation. The implication is that AI is ultimately about going far beyond what biological brains have achieved.
For those of you interested in how this paper on open-endedness relates to Leslie Valiant's Probably Approximately Correct (PAC) learning theory and “educability”, here's a quick overview.
Both frameworks are fundamentally concerned with learnability, but they approach it from markedly different angles. PAC learning provides a rigorous mathematical framework for understanding when and how a system can learn to approximate a target function with high probability. It offers precise guarantees about learning performance given certain conditions, and explicitly considers computational and sample complexity. The open-endedness framework, on the other hand, is more focused on the ongoing process of generating novel and learnable artifacts. While it provides a mathematical definition, it doesn't offer the same type of provable guarantees that PAC learning does.
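As a refresher, the standard PAC criterion (a textbook statement, not anything specific to the open-endedness paper) asks for a learner that, given enough samples, returns a hypothesis $h$ that is probably approximately correct:

$$
\Pr\big[\,\mathrm{err}(h) \le \varepsilon\,\big] \;\ge\; 1 - \delta
\qquad \text{for any desired } \varepsilon, \delta \in (0, 1),
$$

where $\mathrm{err}(h)$ is the hypothesis's error under the data distribution, and the number of samples required grows only polynomially in $1/\varepsilon$ and $1/\delta$.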
This difference reflects a fundamental distinction in goals. PAC learning aims at convergence—approximating a fixed target function within specified error bounds. Open-endedness, by contrast, prizes continuous innovation and surprise. Where PAC learning might consider its job done when it has learned to approximate a function sufficiently well, an open-ended system would ideally continue to generate novel outputs indefinitely. This makes the open-endedness framework potentially more aligned with the kind of open-ended discovery we associate with human intelligence, but also more challenging to analyze rigorously.
PAC learning has been extensively applied to analyze specific learning algorithms, while the open-endedness framework as presented is more abstract. It doesn't yet offer the same level of insight into the behavior of particular AI systems or learning approaches. This suggests an important direction for future work: developing the open-endedness framework to provide more concrete guidance for AI design and analysis, perhaps by incorporating some of the rigorous analytical tools developed in PAC learning theory.