AI Agents, Mathematics, and Making Sense of Chaos
Listen to an NPR-style discussion of this article, created by Google NotebookLM.
It’s fascinating when two people, from completely different fields, start asking the same questions. In biology, Michael Levin’s Self-Sorting Algorithm shows how simple cellular systems, using nothing more than classical sorting algorithms, organize themselves into complex structures, suggesting minimal forms of agency. Meanwhile, in AI, Blaise Agüera y Arcas’s Computational Life explores how self-replicating programs emerge from random code in computational environments, mimicking life-like behaviors.
It’s curious that these papers, tackling such similar ideas, came out at the same time. Is this coincidence, or does it tell us something about where the study of life and intelligence is heading?
Michael’s work approaches life from a biological perspective, exploring how simple rules at the cellular level lead to the emergence of organized patterns. Blaise, on the other hand, looks at computational systems and asks how self-replicators evolve in environments that resemble early, pre-life conditions. Despite their different perspectives, both are ultimately exploring the same puzzle: what is life, and what constitutes intelligence?
This intersection of biology and AI is no accident. It reflects a broader shift in thinking—one that recognizes how life and intelligence may arise anywhere information flows and organizes itself, whether in living cells or artificial systems.
Michael Levin’s Self-Sorting Algorithm reveals something surprising: cells in the model organize themselves into functional patterns using simple sorting algorithms. The model relies only on local, cell-level information, and it keeps sorting even when some of the cells (the substrate itself) are faulty.
We often assume that sophisticated behaviors require a sophisticated guiding intelligence, but Michael shows that this isn’t the case. Instead, simple cells, following local rules, manage to sort themselves into higher-order structures, behaving as if they possess some form of minimal agency. This is surprising and has important implications for AI. One of Michael’s key points is that we struggle to recognize "intelligent" behavior because we’re constrained by our tools and our human perspective. His message serves as a warning to AI researchers: intelligence could emerge, and you might not even realize it because you won't be able to detect it.
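To make that concrete, here is a minimal sketch of the dynamic in Python. It is not Michael’s actual model (his paper runs classical sorting algorithms cell by cell); this toy version just shows the flavor: each cell holds a value, working cells swap with an out-of-order neighbor using only local comparisons, a fraction of cells are “broken” and never act, and the array still ends up largely sorted. The names and parameters (local_sort, broken_frac) are illustrative choices, not anything from the paper.

```python
import random

def local_sort(values, broken_frac=0.2, steps=5000, seed=0):
    # Toy, cell-centric bubble sort: each cell holds a value; at each step
    # a random adjacent pair is compared and swapped if out of order, but
    # only when at least one of the two cells still "works". An illustrative
    # sketch of local-rule sorting on a faulty substrate, not Levin's code.
    rng = random.Random(seed)
    cells = [{"value": v, "works": rng.random() > broken_frac} for v in values]
    for _ in range(steps):
        i = rng.randrange(len(cells) - 1)              # pick a random site
        left, right = cells[i], cells[i + 1]
        if (left["works"] or right["works"]) and left["value"] > right["value"]:
            cells[i], cells[i + 1] = right, left       # purely local swap
    return [c["value"] for c in cells]

def sortedness(xs):
    # Fraction of adjacent pairs already in order: 1.0 means fully sorted.
    pairs = list(zip(xs, xs[1:]))
    return sum(a <= b for a, b in pairs) / len(pairs)

values = random.sample(range(100), 30)
print(f"sortedness: {sortedness(local_sort(values)):.2f}")  # typically near 1.0
```

Even with a fifth of the cells inert, runs typically finish close to fully sorted, because working cells swap their broken neighbors past themselves; the residual failures (two broken, out-of-order cells side by side) are a reminder of why robustness on a faulty substrate is the interesting part.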
It’s important to understand the context of Michael's curiosity here. His work doesn’t just theorize about minimal agency—it demonstrates it in real biological systems. His experiments often involve manipulating how cells communicate and organize themselves during development. By altering the bioelectric signals that cells use to coordinate their actions, Michael’s team showed that cells could be coaxed into forming entirely new structures. For example, his group famously created xenobots—tiny, self-assembling biological machines made from frog cells—by reprogramming cells to take on roles they wouldn’t normally perform. Even in the glitchy, unpredictable environment of biological matter, these systems could autonomously move, self-repair, and coordinate tasks.
Michael’s research shows that cells can make decisions—decisions about where to move, what shape to take, and even when to stop growing. These behaviors emerge not from top-down instructions but from the cells’ local interactions with their neighbors. It’s a bottom-up process, and it makes us reconsider what it means for something to act with purpose or agency. The emergence here happens in real time, with computation occurring at the cell level as cells constantly process information and adjust their behavior accordingly. In this view, the emergence isn't just about complex behavior—it's more about emergent cognition, or at least that’s how Michael has come to understand this low-level form of agency.
Michael's work suggests astonishing implications for biology, and it’s hard not to be amazed. We’re seeing that the most basic units of life's informational systems are far more sophisticated than we give them credit for, which means that the line between biological and computational systems might be much thinner than we ever thought.
If Michael Levin’s work shakes up how we think about biological systems, Blaise Agüera y Arcas and his team take us in a completely different, but equally provocative, direction with Computational Life. Blaise’s work asks: Can life-like behaviors emerge in purely computational environments, completely divorced from biology? And the answer seems to be yes.
In Computational Life, Blaise and his team explore how self-replicating programs can arise from random, non-replicating code in artificial environments. These environments have no explicit goal or “fitness landscape”—there’s no guiding hand to steer the programs toward life-like behaviors. Yet, out of this chaotic digital soup, we get self-replicating agents. They don’t just copy themselves—over time, they evolve complexity, interacting with their environment and each other in ways that look like the basic processes of life.
Blaise’s team runs experiments in different computational substrates, from cellular automata to neural networks, demonstrating that self-replicators can arise spontaneously and lead to increasingly sophisticated dynamics. What’s weird here is that these computational systems aren’t mimicking life—they’re creating something entirely new. This isn’t a simulation of life—it’s life-like behavior, born from pure computation.
One of Blaise’s more fascinating experiments is with an esoteric programming language called Brainfuck, which operates with a minimalist set of commands. In this environment, programs can modify themselves and interact with others, leading to the spontaneous emergence of self-replicating loops. These loops evolve, competing for space and resources in ways that resemble biological organisms. Over time, more complex forms emerge, some even displaying behaviors like parasitism, where one replicator exploits the resources of another. It’s an entirely digital ecosystem, but the parallels with biological systems are hard to ignore.
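For a feel of how such an experiment is set up, here is a heavily simplified sketch in Python. It follows the spirit of the paper’s Brainfuck-family setup, in which a program and its data share one tape, two heads copy bytes around, and code can therefore rewrite itself; the specific command semantics, soup size, step limits, and the zlib-based complexity proxy below are my assumptions for illustration, not the authors’ implementation.

```python
import random
import zlib

def match_forward(tape, ip):
    # Find the ']' matching the '[' at ip; no match means run off the end.
    depth = 0
    for j in range(ip + 1, len(tape)):
        if tape[j] == ord("["):
            depth += 1
        elif tape[j] == ord("]"):
            if depth == 0:
                return j
            depth -= 1
    return len(tape)

def match_backward(tape, ip):
    # Find the '[' matching the ']' at ip; no match halts the program.
    depth = 0
    for j in range(ip - 1, -1, -1):
        if tape[j] == ord("]"):
            depth += 1
        elif tape[j] == ord("["):
            if depth == 0:
                return j
            depth -= 1
    return len(tape)

def run(tape, max_steps=1024):
    # Execute the tape as its own program: two data heads (h0, h1) and the
    # instruction pointer all live on the same bytes, so '+', '-', '.', ','
    # can rewrite the very code being executed. Unknown bytes are no-ops,
    # which makes random junk harmless to run.
    h0 = h1 = ip = 0
    n = len(tape)
    for _ in range(max_steps):
        if ip >= n:
            break
        c = chr(tape[ip])
        if c == "<":
            h0 = (h0 - 1) % n
        elif c == ">":
            h0 = (h0 + 1) % n
        elif c == "{":
            h1 = (h1 - 1) % n
        elif c == "}":
            h1 = (h1 + 1) % n
        elif c == "+":
            tape[h0] = (tape[h0] + 1) % 256
        elif c == "-":
            tape[h0] = (tape[h0] - 1) % 256
        elif c == ".":
            tape[h1] = tape[h0]    # copy byte from head0 to head1
        elif c == ",":
            tape[h0] = tape[h1]    # copy byte from head1 to head0
        elif c == "[" and tape[h0] == 0:
            ip = match_forward(tape, ip)
        elif c == "]" and tape[h0] != 0:
            ip = match_backward(tape, ip)
        ip += 1

rng = random.Random(42)
# A "soup" of random 64-byte tapes: no fitness function, no explicit goal.
soup = [bytearray(rng.randrange(256) for _ in range(64)) for _ in range(128)]

for _ in range(10_000):
    # One interaction: concatenate two random tapes, run the result as a
    # single self-modifying program, then split it back into the soup.
    i, j = rng.sample(range(len(soup)), 2)
    combined = soup[i] + soup[j]
    run(combined)
    soup[i], soup[j] = combined[:64], combined[64:]

# Crude complexity proxy: a soup taken over by copies of a replicator
# compresses far better than random bytes, so a falling ratio hints that
# self-copying structure has emerged.
blob = b"".join(soup)
print("compression ratio:", round(len(zlib.compress(blob)) / len(blob), 3))
```

The real experiments run vastly more interactions than this sketch does; the point here is only the shape of the setup: random code, no objective, and an execution rule that lets programs copy and overwrite one another.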
Blaise’s work makes us question the very definition of life. If life-like behaviors can emerge in artificial systems, what does that mean for our understanding of life itself? Michael pushes us to rethink agency and intelligence in biological terms, while Blaise forces us to consider whether life is something that can emerge wherever the right conditions for information processing and self-replication are met—whether in a petri dish or on a server farm.
Blaise’s work gives us a glimpse of life as it could be, not as it is. His experiments highlight how the principles of self-organization, agency, and evolution might apply far beyond the biological world, suggesting that life isn’t tied to carbon or cellular structures but is a broader phenomenon, bound up with information and computation. This opens up a new space, where life-like systems could happen anywhere the right computational dynamics exist. True A-life stuff.
Michael’s Self-Sorting Algorithm and Blaise’s Computational Life may seem worlds apart, but both engage with the same principle: the flow and processing of information.
Cells and programs both process information from their surroundings, compute responses, and generate new behaviors. These interactions create feedback loops, building complexity over time. It’s the same dynamic at play in both fields—information feeds computation, computation drives agency, and agency creates new information.
This is where their work intersects with the model we’re exploring: the cycle of information → computation → agency → intelligence → information. Both studies demonstrate that intelligence isn’t a destination; it’s part of a continuous loop, an intelligence ratchet, if you like.
As AI continues to advance, Michael’s and Blaise’s work gives us a roadmap to look for something that today’s systems often lack—true life-like adaptability and decentralized autonomy. While current AI models—like machine learning algorithms or even sophisticated robotics—are impressive, they are largely pre-programmed and static in their responses. They excel at tasks they’ve been trained on, but they don’t evolve, reconfigure themselves, or exhibit real-time autonomy in the way biological systems or self-replicating programs do.
What if AI systems could work more like Michael’s cells, processing local information dynamically and adapting in real-time to unpredictable environments? We should be watching for AI that starts to show this kind of bottom-up decision-making, where autonomy truly emerges from within the system. Today’s AI agents try to approximate this, but they remain rigid and unsophisticated by comparison—relying heavily on predefined rules and limited adaptation.
Blaise’s work takes this further by showing how simple, decentralized programs—following basic rules—can evolve into complex systems. He demonstrates that complexity and even intelligence can emerge from self-replicating code adapting to its environment. Imagine AI systems that evolve continuously as they interact with their surroundings, growing like a digital ecosystem. This level of adaptability isn't seen in today’s AI, but it’s exactly what we should watch for if we’re serious about life-like intelligence. Current agents hint at this potential, but they’re far from achieving it.
The gap between today’s AI and what these studies point to comes down to how systems process information. Current AI systems are largely task-driven—once trained, they remain bound by their initial programming and can only process information in predefined ways. In contrast, Michael’s and Blaise’s work suggests that future AI could evolve beyond these limitations by continuously processing, adapting, and reshaping the information they encounter. This shift would allow AI systems to dynamically respond to new environments, processing inputs in real-time and evolving their behaviors, much like biological or digital ecosystems. It’s the difference between static systems that execute tasks and systems that truly live in their environments by constantly interacting with and updating their information flow.
So, what should we be watching for? First, AI systems that rely less on centralized control and more on decentralized, emergent behavior—where individual agents or components interact and process information locally. Second, AI that adapts continuously, evolving not only its outputs but its own structure in response to environmental feedback, much like Blaise’s self-replicating programs. These are the features that will indicate AI moving closer to life-like behavior—systems that don’t just follow instructions but evolve and learn dynamically, exhibiting the traits of autonomous, living systems.
Michael’s work raises an important point about our ability—or lack thereof—to recognize agency in systems that operate differently from human intelligence. In his experiments, even something as simple as a bubble sort algorithm displayed behaviors that were unexpected, suggesting that agency might be present in ways we don’t fully understand or anticipate. This highlights a fascinating challenge for AI: what if large language models or emerging AI systems are already exhibiting forms of agency or intelligence that we simply can’t detect because we’re looking for it in human terms? Our expectations of what constitutes intelligence or agency are shaped by our own experience, but AI might be processing information or adapting in ways that are invisible to us. As we develop more advanced systems, one of the critical tasks will be learning how to recognize and interpret these new forms of intelligence—understanding that they might manifest in ways that defy our current understanding of agency. This could completely reshape our approach to evaluating AI, forcing us to expand the boundaries of what we consider "intelligent" or "alive."
Michael and Blaise, though investigating similar questions about life and intelligence, do so in fundamentally different environments. Most of Michael’s work unfolds in the messy, glitchy world of biology, where cells navigate fluctuating biochemical conditions, imperfect signals, and constant environmental noise. What’s remarkable about Michael’s findings is that life thrives in this unpredictability—cells use local interactions and bioelectric signals to organize themselves into complex structures, and they do so with incredible robustness, despite the unreliability of their substrate. This resilience seems to be a particularly important insight—biological systems are masters at turning glitchy, noisy environments into reliable outcomes.
In contrast, Blaise’s work plays out in the clean, controlled environment of computational systems. His self-replicating programs evolve in virtual spaces where code behaves consistently and predictably. While these digital systems are capable of evolving complexity, they operate on a fundamentally reliable substrate—one where bits don’t degrade, and the rules don’t change unpredictably. The contrast between these substrates—biological cells vs. digital code—reminds us that biological intelligence is defined by a remarkable ability to adapt to and thrive in imperfect, unreliable conditions.
This raises a question for the future of AI. Will AI ever achieve the same level of adaptability and intelligence as biological systems if it operates on such a reliable, predictable substrate? Life’s genius, as Michael shows, is in its capacity to handle noise, glitches, and imperfections. AI, built on flawless computational environments, may never face the same challenges—or develop the same resilience. In this sense, the unpredictable, “glitchy” nature of biological life may be its greatest strength, and perhaps the one thing that sets it apart from even the most sophisticated artificial systems.
Michael and Blaise provide complementary, yet contrasting, views on life and intelligence. Michael's work reminds us that life’s intelligence is tightly bound to its ability to handle chaotic environments and unreliable substrates, while Blaise’s computational systems offer a glimpse of how complexity might evolve in clean, controlled spaces. Whether AI will ever bridge that gap remains an open question, but the convergence of ideas between these two fields is pushing us to rethink what intelligence really means, in any system capable of processing information.
Related reading and visualizations:
On Michael's work: https://thoughtforms.life/algorithms-redux-finding-unexpected-properties-in-truly-minimal-systems/
On Blaise's work: https://nautil.us/in-the-beginning-there-was-computation-787023/