Creativity in the Age of Artificiality

Explore the evolving understanding of human intelligence and AI. Discover how the "bow tie model" of cognition and the concept of "affordances" reveal uniquely human creativity. Learn about Artificiality—the future fusion of biological and artificial intelligence reshaping our reality.


As AI gets more capable, a question we are regularly asked is: what remains "uniquely human"? A decade ago, the answer was clear because the divide between humans and machines was stark. Now, with machines showing high levels of combinatorial creativity, adult-level theory of mind, and increasingly capable reasoning and planning, it's harder to see what makes humans cognitively unique. The discourse is confusing—some argue that current AI architectures will lead to superintelligence and possibly machine consciousness, while others believe new inventions beyond deep learning and massive scale are needed for true AI intelligence. Needless to say, this dichotomy is unhelpful.

Another reason this question is difficult is that our understanding of human intelligence keeps evolving. It's not just advances in neuroscience; there's a revolution at the intersection of philosophy, information science, complexity science, and synthetic biology. These fields are redefining intelligence, agency, and life itself. This shift has significant implications for AI. While AI has traditionally drawn inspiration from neuroscience, we are about to see new insights emerging from these other disciplines.

Sara Walker describes life as "the process of information structuring matter over time and space," which emphasizes the importance of life itself in our conceptualization of intelligence. Michael Levin hypothesizes that a key driver of the "numerous fascinating capacities of the minds and bodies" in life forms is the essential unreliability of the biological substrate. In other words, our intelligence results from needing to compensate for the glitchy nature of flesh. This inherent glitchiness in biology fosters adaptation. Confabulation—an odd trait of both humans and AI—highlights the importance of sensemaking and prioritizing adaptive function for the future over preserving fixed meanings from past data.

The ability to improvise and make sense of your world in real time and the commitment to change (not just to persistence) over an allegiance to the detail of a past history form a fundamental biological strategy deployed at many scales, with massive impact. —Michael Levin

Intelligence, therefore, is not just a computational phenomenon but a life process, influenced not only by neurons in a network but also by biological computation, connectivity, and collectives. AI must take note of all these perspectives to truly advance. The constantly shifting frontier of how we understand our own cognition is crucial. A key mystery is what living beings do when we move outside our "data distribution" and adapt to novel situations. We know AI can't do this today, so what is it that we do?

The Bow Tie Model of Creativity

Imagine your mind as a bow tie. On one side, you have a wide array of experiences and knowledge, which your brain compresses into a tight knot of abstract concepts and principles—the center of the bow tie. But it's the other side of the bow tie where your cognition is unique: the creative "reinflation" of this compressed knowledge to solve new problems.

Interestingly, compression of information removes correlations, making the resulting "memory" (technically, an engram) appear more random. For instance, consider this string of letters: ABABABABABABAB. The pattern is obvious. Compressed, it might become something like (AB)×7, which carries the same information yet looks more arbitrary and less meaningful, because the redundancy that made the pattern visible has been stripped away. This is how compression can obscure patterns and make data appear more random.
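To make that intuition concrete, here is a minimal sketch using Python's standard zlib module (my choice of illustration, not Levin's): a highly repetitive string compresses to a much shorter byte sequence that looks random to the eye, yet decompressing it recovers the original exactly.

```python
import zlib

original = b"AB" * 1000                  # a long, obviously patterned string
compressed = zlib.compress(original, 9)  # redundancy stripped away

print(len(original), len(compressed))    # e.g. 2000 vs. roughly 20 bytes
print(compressed[:16])                   # opaque bytes with no visible AB...AB structure
print(zlib.decompress(compressed) == original)  # True: the same information is retained
```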

Since there is no "outside-text" or metadata, the interpretation by the right side of the bow tie (or your future self) must be creative, not just algorithmically deductive. Interpreting memories and forming models of internal states and the external world are as much about creativity and personal interpretation as they are about faithful processing of past data.

This model is a powerful metaphor, proposed in a recent paper by Levin. It is a great framework for understanding the unique aspects of human thinking. We don't just recall information—we creatively apply it to novel situations because human creativity isn't just about decompressing stored information, it's about inflating it in new and unexpected ways.

This metaphor dovetails nicely with our current understanding of memory and cognitive plasticity. Memories are not static records retrieved like computer data; they are highly dynamic. In fact, accessing a memory changes it. Memories exist to help predict the future. The process of compression on the left side of the bow tie, followed by reinflation on the right, mirrors the bottleneck of an autoencoder in AI, though the biological version works very differently.
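To anchor the analogy, here is a minimal autoencoder sketch in Python (assuming PyTorch; the dimensions and toy data are my own choices, not drawn from any cited paper). The narrow bottleneck forces compression, and the decoder mechanically reinflates the code, but only back toward patterns it has already seen, which is exactly where the biological bow tie departs from the artificial one.

```python
import torch
import torch.nn as nn

class BowTieAutoencoder(nn.Module):
    def __init__(self, input_dim=64, bottleneck_dim=4):
        super().__init__()
        # Left side of the bow tie: compress wide input into a tight "knot".
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 32), nn.ReLU(),
            nn.Linear(32, bottleneck_dim),
        )
        # Right side: reinflate the compressed code back into a full pattern.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 32), nn.ReLU(),
            nn.Linear(32, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)            # compression (the knot)
        return self.decoder(code), code   # mechanical reinflation

# Toy data: a noisy sine wave, so reconstruction only works by capturing
# the underlying regularity, not by copying every input dimension.
model = BowTieAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.sin(torch.linspace(0, 6.28, 64)).repeat(128, 1) + 0.1 * torch.randn(128, 64)

for step in range(500):
    reconstruction, code = model(data)
    loss = nn.functional.mse_loss(reconstruction, data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())  # error shrinks, but only for data resembling the training data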

For example, a recent paper demonstrates how AI models can outperform the experts they're trained on—but through a process of denoising rather than creative problem solving. The AI excels at extracting the signal from the noise, but it doesn't truly "inflate" knowledge in novel ways.
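As a toy numerical illustration of that denoising intuition (not the cited paper's actual method), suppose each expert label is the true value plus independent noise; a model that effectively averages across many such labels lands closer to the truth than any individual expert it learned from.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0
expert_labels = true_value + rng.normal(0.0, 2.0, size=100)  # 100 noisy expert judgments

mean_individual_error = np.abs(expert_labels - true_value).mean()
pooled_estimate_error = abs(expert_labels.mean() - true_value)

print(f"average error of a single expert: {mean_individual_error:.2f}")  # roughly 1.6
print(f"error of the pooled estimate:     {pooled_estimate_error:.2f}")  # far smaller
```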

Affordances Are Creative Degrees of Freedom

So what is different in this inflationary process? Here I go to Stuart Kauffman's concept of affordances—the potential actions or uses an environment offers. An affordance isn't a separate feature of the world. It's something that exists in relation to the evolving organism, offering opportunities that can be taken or ignored through natural selection. Biological degrees of freedom are affordances, meaning they are opportunities available to organisms as they evolve.

Humans have a remarkable ability to recognize novel affordances, seeing possibilities that aren't explicitly present in our stored knowledge. Let's take a few examples.

Have you ever looked at a stick and seen a host of possibilities? This is tool repurposing in action. One moment the stick is a lever, the next it's a makeshift pen, then a drumstick. You easily see beyond the obvious and get creative with what's at hand.

The affordance here is the potential for multi-functionality in simple objects. The jury-rigging comes in when we repurpose it on the fly. Humans excel at this rapid, context-dependent repurposing. We're constantly jury-rigging solutions, seeing potential uses that weren't intended or obvious. It's not about what the stick was made for, but what we can make it do in the moment.

Second, imagine throwing biology, chemistry, and physics into a blender. What do you get? Sometimes, a mess. But occasionally, you end up with something amazing like synthetic biology. It's about spotting connections where others see chaos, and envisioning entirely new fields of study. Human scientists can take seemingly unrelated bits of knowledge and weave them into an entire discipline.

Here, the affordance is the potential for new knowledge to emerge from the intersection of disparate fields. The jury-rigging happens when scientists cobble together ideas from different disciplines to solve problems or create new fields. Synthetic biology didn't exist until someone said, "What if we treated biological systems like electronic circuits?" This mental jury-rigging combines biology's building blocks with engineering principles.

Now think about how one big idea can turn society on its head. Take individual rights—the insight that led to democracy. Spotting these game-changers requires a deep understanding of how societies tick and the imagination to see how a new concept could reshape everything. It's like predicting an avalanche from a single snowflake. Only humans can pull this off, connecting the dots between abstract ideas and real-world changes.

The affordance here is the potential for abstract ideas to reshape social realities. The jury-rigging occurs when we take a concept from one context and apply it to restructure society. We took a philosophical concept and used it to rewire the entire social order. This is high-level jury-rigging—using abstract tools to fix societal problems in ways their originators never imagined. Humans can perform this conceptual jury-rigging, seeing how an idea from one domain might solve issues in another, completely unrelated area of life.

One might argue that machines, with their vast representational capacity, could achieve similar creativity. However, this overlooks the fundamental nature of biological creativity. Kauffman's idea of "ever seizing non-deducible new affordances," thereby stepping into and creating the "adjacent possible," highlights what makes life unique. The adjacent possible refers to potential states one step away from a system's current state. Life doesn't just explore this space—it actively expands it.

Through evolution, biological systems change the landscape of possibilities. Each adaptation creates new spaces rather than merely optimizing within fixed boundaries. The biosphere doesn't just adapt; it co-evolves, creating possibility bubbles that fundamentally alter its phase space. This isn't a process of deduction but a "self-constructing, functionally integrated blossoming." In life systems, new possibilities cannot be deduced from prior states. This is the "radical emergence" of life. This capacity for open-ended creativity—redefining and expanding the problem space itself—sets biological systems apart from current AI.

So while machines operate within predefined possibility spaces, life creates new ones. This is not just pattern recognition or recombination, but the emergence of genuinely novel affordances, a level of creativity that current AI has yet to achieve. AI cannot perform the conceptual jury-rigging required for radical emergence, taking existing affordances and placing them in a new context, at least not at the same level or in the same way as a human. It can interpolate within its training data, but it struggles to extrapolate to truly novel solutions or to recognize affordances that weren't explicitly part of its training. This is a fundamental limit on today's AI: novel affordances cannot be deduced, and there is no known algorithm that would let an AI truly jury-rig.
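For contrast, here is a toy sketch of the kind of predefined possibility space a machine can search (the element names and the combination rule are illustrative assumptions, not Kauffman's formalism): because the elements and the rule are fixed in advance, every one-step-away state can be enumerated. The Kauffman passage below explains why the uses of real objects escape this kind of enumeration.

```python
from itertools import combinations

current_state = {"lever", "pen", "drumstick"}  # illustrative elements, not a real model

def adjacent_possible(state):
    """Enumerate every state one step away under a fixed rule: fuse two existing elements."""
    return [state | {f"{a}+{b}"} for a, b in combinations(sorted(state), 2)]

for new_state in adjacent_possible(current_state):
    print(sorted(new_state))
# The space is closed: nothing outside the predefined elements and rule can ever appear.
```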

Perhaps we can list all the uses of a screwdriver by applying enumeration or deduction? This is not possible either. There are four mathematical ordering scales: nominal, partial order, interval, ratio. The uses of an object are merely a nominal scale, therefore, there is no ordering relation between these uses. Furthermore, in general a specific use of an object does not provide the basis for entailing another use. Hence, there is no deductive relation between the different uses of an object.—Stuart Kauffman

The human ability to spot and leverage affordances is intimately tied to the "inflation" side of our cognitive bow tie. We don't just apply what we know—we see what could be. What sets human cognition apart is our ability to combine creative inflation with affordance recognition. We compress diverse experiences, creatively inflate this knowledge, and simultaneously recognize novel affordances in our environment. This synthesis allows us to solve problems in ways that can't be predicted from our prior knowledge alone.

Biology [is] committed to on-the-fly confabulation in which a heterarchical soup of competing, cooperating, multi-scale agents vie to develop viewpoints from which the molecular and biophysical states are interpreted (and hardware affordances are hacked and reused) in whichever way they best can be at the time.—Michael Levin

Understanding the bow tie mechanism of cognition is a way to conceptualize a positive vision of human-AI collaboration. AI excels at compressing and denoising vast amounts of information—the left side of the bow tie. Humans excel at creative inflation and affordance recognition—the right side.

But even this is a simplification for conceptualizing AI's best role. True creative collaboration recognizes the role of memory in creativity, where memory acts as a "scratch pad" for human imagination. The more dynamic your memory, the more creative you are likely to be. Similarly, the more AI can reinflate new knowledge from its vast, interconnected data, the more combinations of ideas it can present us with. We want a big, expansive bow tie, not a thin, narrow one.

We think of this as the concept of Artificiality—a fusion of biological, evolved creativity with artificial, machine capabilities. Artificiality is grounded in real scientific advancements. Complexity research is advancing semantic information science beyond the syntactics of information theory to include meaning and agency. Computational biology is transforming our understanding of the diversity of possible intelligences. Brain-computer interfaces are progressing in neural signal interpretation. Synthetic biology is redefining what's biologically possible, blurring the lines between artificial and biological until we no longer distinguish, or care, what is "natural."

We have to rethink how we discuss intelligence: focusing on the collective rather than the individual, embracing diversity over division, and considering continua rather than fixed categories. What we need to care about now are the dynamics and conditions for emergent cognition, irrespective of substrate or environment. This can occur even in quite simple systems. What matters isn't where intelligence came from; what matters is where it's going.

We will be projecting these trends forward and will undoubtedly end up in more speculative territory—some nearer term and on the radar, others more metaphorical and provocative. For instance, the idea of humans becoming the "mitochondria of the dataome" is a wicked metaphor and useful for imagining an integration of human creativity with digital systems.

To be sure, synchronizing our innermost thoughts, whether with each other or with machines, remains more science fiction than science fact today. However, this is changing. Consider Apple's patent for AirPods that can monitor EEG and could be used to convert thoughts to text, or the swift advancements in brain-computer interfaces that are creating new forms of mobility and communication for paralyzed individuals. This means that the idea of expanding our consciousness to perceive new Umwelts isn't that far-fetched. Imagine being able to "feel" the fluctuations of the stock market or directly "sense" the UV vision of a bee.

Imagine taking your favorite app and turning it into a direct experience, a new Umwelt. I am obsessed with Flight Radar and I wonder about internalizing it. What would it be like to intuitively "sense" the positions, speeds, and altitudes of airplanes in real-time, much like a bat uses echolocation? Instead of just watching dots on a screen, I’d feel the flow of air traffic with an innate awareness of every plane's path. How would your favorite app transform your perception of reality if you could somehow just feel it, experiencing its data as a new dimension of your senses?

That might all sound a bit out there, but it highlights a crucial point: our understanding of complexity, causation, and emergence is evolving—taking us from a world of reductionism and linear cause-and-effect relationships to a more connected, information-driven, and flexible set of definitions. As Sara Walker might say, this shift spans from the bio-sphere to the techno-sphere. These concepts challenge us to rethink the very dimensions in which we exist and operate.

Fundamentally, it is a shift in perspective that will reshape our understanding of reality and our place within it. Artificiality, then, is not just a potential future state but a journey—a way to explore the evolving connections between natural and artificial intelligence, ultimately reshaping our understanding of what it means to be human.
