The Artificiality describes the merging of synthetic systems with organic experience, where AI and other technologies reshape our reality. At the heart of this evolution is the dynamic progression from information to computation, computation to agency, agency to intelligence, and intelligence to consciousness. This flow illustrates how technology transforms from passive tools into active participants, challenging us to reimagine the future of human experience when machines have agency and even minds of their own.
This article is an exploration of The Artificiality. In it we examine the progression from information to consciousness in the age of synthetic systems and how AI reshapes the foundations of reality. Information was once a static resource—cataloged, clarified, and mapped to reduce uncertainty. Computation transformed this static data into dynamic processes, while agency gave machines the capacity to act. Intelligence allowed systems to learn and adapt, but it is at the boundary of consciousness (ours and perhaps eventually theirs) that machines reveal emergent, often alien patterns, forcing a reconsideration of reality itself.
At each step of this flow, AI redefines human perception. It does not simply predict or optimize but participates in co-creating what we understand as real. Machines expose adjacencies—patterns and possibilities beyond our current comprehension—and challenge the fixedness of truth. This article invites you to grapple with how AI shifts human understanding of knowledge, meaning, and our role in a reality that is fluid, emergent, and constantly reinterpreted.
Why Read This?
- Trace the Flow: Follow the progression from information to computation to agency to intelligence to consciousness, and understand how this shapes our evolving reality.
- Explore the Boundaries: Investigate how machines blur the lines between perception, action, and meaning, challenging our assumptions about consciousness and reality.
- Engage the Big Questions: What does it mean when synthetic systems co-create reality? How do we balance machine clarity with human meaning in this hybrid future?
In the next 6,600 words (approximately 25 minutes of reading time), we aim to frame these questions not as abstract worries for a future time but as real-world challenges for anyone navigating the world AI is reshaping. As AI diffuses through everything, it will inevitably create winners and losers, lovers and haters, new states of belonging and unbelonging. Our purpose is to understand the deepest, most profound principles that underpin this change to our species.
The Artificiality is Not Transhumanism
Before we go on, a quick note. The Artificiality is not about abandoning humanity, transcending biology, or becoming a "new species." It is not a vision of the Singularity as imagined by Ray Kurzweil, nor a call to merge fully with machines and upload our brains to an AI in the cloud. Instead, The Artificiality explores the merging of the synthetic and organic in ways that are deeply rooted in our shared humanity.
We remain fundamentally humanist in our perspective. This means recognizing that while technology blurs the boundaries between the natural and the artificial, our moral reasoning, empathy, and collective choices are the anchor for this transition. The Artificiality is about adapting—not leaving behind what makes us human—to navigate a world where our technology has outpaced our capacity to solve collective challenges. This needs to change.
We think less about machines taking over or humans becoming irrelevant than we do about using the tools we’ve created to enhance our ability to respond to complex problems and shape a future where meaning, values, and choices remain central. The Artificiality acknowledges the fluidity between synthetic and organic but insists that humanity—our ethics, imagination, ingenuity, and capacity for change—remains at the core.
What is Intelligence?
Intelligence is one of humanity's oldest fascinations. It's fair to say we're obsessed with it. Whether it's the intricacies of ant colonies, the neural structures of the brain, or, now, artificial systems, we seem desperate to lock it down in a simple definition. Is it the ability to solve puzzles? To plan for the future? To adapt to the unexpected? Or is it something deeper—perhaps a fundamental property of systems that evolve, compute, and interact with their environments?
To frame this article, we’ll start with an ambitious question: What is intelligence, really? And to ground it, we’ll lean on a principle increasingly seen as foundational—life (and, by extension, intelligence) is a process that emerges from information and computation. Whether in biological or artificial systems, intelligence involves gathering, storing, and transforming information to navigate a space of possibilities. These spaces might be physical, conceptual, social, or computational, but the ability to adapt, optimize, and explore defines intelligence across these domains.
Our perspective is that intelligence isn’t just a human trait or an artifact of advanced computation but something that spans scales and substrates. So we see it in single-celled organisms navigating their environments, in the bioelectric networks that guide cellular repair, and in the artificial agents now reshaping our digital and physical worlds. It’s not limited to brains or silicon. It's as much about how systems use information to achieve goals as it is about the goals themselves.
Dr. Michael Levin’s work beautifully illustrates this idea, showing how intelligence operates not just in brains or silicon but in the fundamental ways systems organize and use information. He has shown that if you cut a planarian worm in half, both halves regenerate into complete worms. This crazy feat doesn’t stem solely from DNA, which remains the same in each fragment, but from the bioelectric signals and information networks shared between cells. These signals act as a kind of blueprint, dynamically guiding the cells to rebuild the organism. This use of information to achieve a goal, like regeneration, is a form of intelligence. Cognition is emergent, and systems, even at their most basic levels, process information to create structure and function.
So we can now understand that intelligence spans scales and substrates, whether in the electrical fields of a worm or the algorithms of a machine. This reshapes how we understand both biology and technology as participants in a shared informational universe.
Intelligence has many definitions depending on your angle. For some, it is a measure of problem-solving ability; for others, a capacity for adaptation or creativity; for still others, an ineffable quality that distinguishes the living from the not.
But definitions matter, and as AI becomes more powerful and "intelligent" by human measures, words have become a contest for control. Silicon Valley and tech giants increasingly frame intelligence in ways that align with their algorithms, their benchmarks, and their markets. The definitions they champion—pattern recognition, optimization, problem-solving—are precise, measurable, and above all, machine-friendly. But they are also incomplete.
By framing intelligence around what machines excel at, these definitions subtly diminish human, and really all biological, intelligence. Ironically, they also diminish artificial intelligence itself because we constrain what AI could become, boxing it into benchmarks that reflect narrow human priorities rather than exploring its potential as a radically different kind of intelligence.
We run the risk of not only misrepresenting the richness of human and biological cognition but also stifling the creative evolution of AI systems that might otherwise expand the boundaries of what intelligence could mean. This reductionism limits our collective imagination, keeping both humans and machines tied to familiar, measurable goals rather than encouraging the exploration of new forms of thought and agency.
Of course, there's a more pressing and obvious downside with machine-friendly definitions of intelligence. AI systems never get tired and, in many cases, are more reliable decision-makers, while humans appear slow, error-prone, and suboptimal in comparison. The promise of the analytics movement is that there is a right answer in the data and only machines are capable of finding it. In pursuit of that "right answer," we’ve begun to judge ourselves by what machines are good at.
This risks reframing intelligence as a narrow pursuit of computational efficiency, stripping away the broader spectrum of what living beings care about—what gives life purpose and, quite literally, what makes life worth living. If we are going to talk about intelligence as computation, then we also have to broaden that view to include forms of reasoning, adaptation, and understanding that are rarely framed as information processing. These are the qualities unique to beings shaped by the pressures of biological evolution. That is, us.
Before we go on, let's take a moment to check in with the many ways that intelligence has been framed:
- Problem-Solving and Adaptation
A traditional view sees intelligence as the ability to solve problems and adapt to new circumstances. This perspective is most common in psychology and education, where tests like IQ scores quantify how individuals reason, learn, and solve puzzles. This is the primary lens that shapes artificial intelligence research, where systems are judged by their ability to optimize, predict, or solve within defined domains.
- Information Processing and Computation
In information theory, intelligence is reframed as the ability to process, store, and act on information. Here, intelligence is no longer tied to biology. A thermostat, in its simplest sense, exhibits intelligence: it measures temperature and adjusts accordingly (see the sketch after this list). Of course, this definition quickly becomes more interesting when scaled up—think of the processing power of modern AI systems capable of analyzing massive datasets or predicting molecular structures for drug discovery.
- Adaptation Across Scales
For modern biologists and complexity theorists, intelligence operates across scales. Cells exhibit intelligence when they repair wounds or navigate developmental pathways during embryogenesis. This capacity for goal-directed adaptation is not unique to brains—it exists wherever systems process information to achieve desired states. A healing skin cell, a migrating neuron, or a regenerating frog limb all display forms of intelligence.
- Emergence and Social Complexity
Sociologists and ecologists often emphasize collective intelligence. The coordinated behavior of ants or the flocking of birds demonstrates how simple rules can produce highly adaptive group behavior. Intelligence here is not in the individual but in the network, emerging from interactions and feedback loops.
- Conscious Experience and Intent
For philosophers, intelligence is tied to consciousness—the ability to experience, to reflect, and to make intentional decisions. This view draws a line between systems that calculate and systems that feel, suggesting that true intelligence requires a subjective component. As philosopher Thomas Nagel famously asked, “What is it like to be a bat?”—a question about whether intelligence can exist without consciousness.
- Creativity and Exploration
Another definition emphasizes intelligence as the ability to create: to explore beyond existing rules, to discover novel solutions, or to imagine futures that don’t yet exist. This is an intelligence that thrives in ambiguity, engaging with the unknown rather than merely solving for it. It's also the intelligence of biology, of exploring the "adjacent possible," as we will explore later in this article.
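To ground the information-processing view, here is a minimal sketch of the thermostat loop mentioned above. This is our illustration, not anything from the article: measure one variable, compare it to a goal state, and act to close the gap.

```python
def thermostat_step(current_temp: float, target_temp: float) -> str:
    """Decide an action from a single measurement (one piece of information)."""
    if current_temp < target_temp - 0.5:
        return "heat"
    if current_temp > target_temp + 0.5:
        return "cool"
    return "idle"

# A short simulated run: the room drifts, the loop corrects.
temp = 17.0
for _ in range(8):
    action = thermostat_step(temp, target_temp=20.0)
    temp += {"heat": 1.0, "cool": -1.0, "idle": -0.2}[action]
    print(f"temp={temp:.1f}  action={action}")
```

Trivial as it is, the loop already has the three ingredients the list above keeps returning to: input information, a goal state, and action that reduces the gap between them.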
All of these definitions converge on a shared theme: intelligence involves navigating a space of possibilities. These spaces might be physical (a maze, a body, a battlefield), conceptual (a puzzle, a theory, an ethical dilemma), or social (a group, a culture, a market). Intelligence emerges from the interaction of systems with their environments, where information is used to adapt, predict, and act.
So here we have it—intelligence resists one simple definition. The better definition is the more complex one—it’s the interactions between systems and their environments, where information processing creates meaning. In an artificiality where information and computation span new spaces, the question is no longer just “What is intelligence?” but “What might intelligence become?”
Next we’ll go into more depth about how information is a common basis and explore how life’s open-ended nature can help us understand how intelligence is as much about creating new possibilities as it is about navigating the present.
Humans are obsessed with intelligence, yet it defies simple definition. Instead we need to think of it as an emergent property—somehow linking information and computation with achieving goals in the world. The prospect of artificial intelligence exposes how intelligence isn't limited to biology, much less humans. Are we really that special?
Intelligent systems are those that actively engage with their environment to extract meaning from complexity and then subsequently make decisions to guide action. You can think of it this way: intelligence emerges wherever information processing meets purpose.
Here's where we need to shift our focus—from philosophical debates about "what counts as intelligent" to understanding the universal mechanisms that enable systems to use information and computation to establish meaning and thereby navigate uncertainty. In biology, intelligence serves survival and adaptation through natural selection over time. In today’s AI, intelligence supports pattern discovery and prediction, often as part of a broader problem-solving process involving life and, therefore, us humans.
"Life is starting to look a lot less like an outcome of chemistry and physics, and more like a computational process"
—David Krakauer and Chris Kempes
There's a fundamental idea here—intelligence connects life and technology. We are uncovering the fundamental patterns that allow any system—organic or synthetic—to set and achieve goals in dynamic environments. But how do joint goals emerge? Or singular ones? If intelligence is neither uniquely biological nor fully artificial, but an emergent property of computational systems that model and respond to their world, we have to ask: where do we end and technology begin? And as traditional boundaries erode, whose goals truly drive the system—ours, AI's, or something new forged in the interaction?
Before we can tackle such a profound philosophical and scientific question, we first need to understand how life itself uses information to navigate the fundamental challenge of surviving in an uncertain and complex world.
The challenge of existence is managing uncertainty. Every biological system operates as a sophisticated information processing network that continuously refines its predictive models. A bacterium searching for food may look like it's moving simply, but its behavior is an elegant computational strategy. Chemical gradients serve as input data, cellular machinery processes this information, and flagellar motion represents output. The bacterium is a complete predictive system, adapted to the uncertainties of its environment, operating at the microscopic scale.
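A toy version of that strategy, modeled loosely on bacterial run-and-tumble chemotaxis (our sketch with made-up numbers, not a biological simulation): the cell compares the nutrient level now with the level a moment ago, keeps running while things improve, and tumbles to a random new heading when they don't.

```python
import random

def nutrient(x: float) -> float:
    """A simple chemical gradient: concentration peaks at x = 0."""
    return -abs(x)

x = 10.0
direction = random.choice([-1.0, 1.0])
previous = nutrient(x)
for _ in range(60):
    x += 0.5 * direction                         # "run" in the current direction
    current = nutrient(x)
    if current < previous:                       # worse than a moment ago?
        direction = random.choice([-1.0, 1.0])   # "tumble" to a new heading
    previous = current
print(f"final position: {x:.1f} (nutrient peak at 0.0)")
```

With nothing more than a one-step memory and a binary choice, the agent reliably climbs the gradient: prediction, feedback, and action in their most compressed form.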
There's a big gap between biological and artificial systems. Biological systems have evolved over millions of years and have fine-tuned a multi-layered approach to managing uncertainty. Biological intelligence is a masterclass in computation: parallel processing, energy efficiency, feedback, and adaptability. Neurons transmit electrochemical signals while dynamically rewiring themselves in response to experience. This is computation optimized for survival. Exploring biological principles—feedback, redundancy, and distributed control—offers valuable inspiration for designing AI architectures that are more dynamic and responsive. These principles are deeply tied to agency and goal formation, as they enable systems to not only adapt to their environment but also shape their objectives in response to changing conditions. It is this dynamic, non-equilibrium aspect that bridges the gap between what's "programmed" and how we think about intelligence.
It's likely that effective intelligence requires dynamic integration with real-world complexity. AI isn't yet truly agentic—it has not mastered the depth of adaptive prediction available to life. Developing real agency in AI—like adapting to changing situations and handling uncertainty to achieve goals—is a key focus of research. To get there, it’s important to understand how information leads to agency, and how agency builds intelligence.
Before going any further, it's useful to understand what information is. Think of it not as data points or facts, but as nature's way of creating order from chaos. When you reduce uncertainty about something—whether it's finding your keys or one of your cells detecting nutrients—you're working with information.
Imagine flipping a coin. Before it lands, there's uncertainty—it could be heads or tails. Once it lands, that uncertainty vanishes. The reduction of uncertainty is information in action. In this case, the flip yields one bit of information—the amount needed to resolve a choice between two equally likely outcomes. This same principle operates everywhere: in DNA guiding cell development, neurons firing in your brain, or AI systems recognizing patterns.
Entropy enters the picture as a measure of uncertainty or disorder. Systems naturally tend toward disorder (high entropy), but life creates local pockets of order by processing information. Your body maintains its temperature despite changing conditions, and your brain makes sense of visual data despite noisy input. These are both examples of using information to combat entropy.
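Claude Shannon made this intuition precise. A minimal sketch of his entropy formula (our addition; the formula is standard, the numbers are illustrative):

```python
import math

def entropy_bits(probs: list[float]) -> float:
    """Shannon entropy H = -sum(p * log2(p)): uncertainty measured in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit of uncertainty to resolve
print(entropy_bits([0.9, 0.1]))  # biased coin: ~0.47 bits (less surprising)
print(entropy_bits([1.0]))       # a certain outcome: 0.0 bits (no information)
```

The fair coin carries exactly the one bit described above; the more predictable the system, the less information each outcome delivers.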
Understanding information this way reveals why it's fundamental to both biological and artificial intelligence. While syntactic information reduces uncertainty, semantic information connects data to purpose and meaning. For biological systems, meaning emerges as survival strategies—finding food, avoiding predators, or reproducing. For AI, this connection is less intuitive but equally critical: how does a system determine what information is relevant and how to act on it? Bridging this gap is key to developing truly adaptive, agentic systems.
What sets biological intelligence apart from AI is its fundamentally different approach to processing environmental complexity. Living systems operate through continuous dynamic equilibrium—a state of perpetual refinement where internal models update in real-time. They don’t wait for a perfect dataset or rely on fixed rules. Your body responding to a sudden fever isn’t following a preset script. Instead your immune system uses feedback loops to adapt moment by moment, staying functional even when things change unpredictably.
AI works differently. Neural networks and transformers can process massive amounts of data and predict patterns, but they don’t adapt in real time. Once trained, their ability to handle new situations is limited. They recognize what they’ve seen before and excel within predefined boundaries, but they don’t engage with the messiness of the real world the way your body does.
If artificial intelligence is going to operate effectively in complex environments, the best proxy we have is to learn from biology. Advancing AI requires more than incremental improvements to existing architectures. We need systems that can continuously refine their internal models while maintaining operational stability. This kind of thinking moves us beyond technical upgrades to focus on the difference between syntactic information—data that reduces uncertainty—and semantic information, which connects that data to meaning, reshaping how we understand agency, prediction, and the creative potential of computation.
For example, living systems don’t aim for perfect stability. Instead, they succeed by operating at the edge of chaos—a sweet spot between rigid order and total randomness. In a flock of birds, each bird follows simple rules, like keeping a certain distance from its neighbors, but together, the flock behaves like a single, fluid entity. This balance allows the group to respond quickly to predators or obstacles without falling apart.
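This is the classic boids model. A stripped-down sketch (our illustration; the rule weights are arbitrary) shows how three local rules, with no leader and no global plan, produce group-level coordination:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(30, 2))   # 30 birds on a 2D plane
vel = rng.uniform(-1, 1, size=(30, 2))

def step(pos, vel, radius=2.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        near = np.linalg.norm(pos - pos[i], axis=1) < radius
        near[i] = False                                    # not its own neighbor
        if near.any():
            cohesion   = pos[near].mean(axis=0) - pos[i]   # drift toward neighbors
            alignment  = vel[near].mean(axis=0) - vel[i]   # match their heading
            separation = (pos[i] - pos[near]).sum(axis=0)  # but don't crowd them
            new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    return pos + 0.1 * new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("velocity spread after flocking:", vel.std(axis=0))  # headings converge
```

No bird computes the flock; the flock is what the rules compute together.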
AI systems need enough structure to stay functional but enough flexibility to innovate and adapt. Large Language Models tuned to operate in this balance—neither too rigid nor too random—are better at tackling more complex or creative problems. They reflect the resilience we see in nature, where adaptability emerges from systems balancing order and disorder.
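One concrete knob for this balance in today's language models is the sampling temperature. The article doesn't name it, so treat this as our gloss: a minimal sketch showing how low temperature makes output rigid and repetitive, high temperature makes it incoherent, and the useful range sits in between.

```python
import numpy as np

def sample(logits: np.ndarray, temperature: float, rng) -> int:
    """Softmax sampling: temperature rescales how peaked the distribution is."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, 0.1])    # hypothetical next-token scores
for t in (0.1, 0.7, 2.0):
    picks = [sample(logits, t, rng) for _ in range(1000)]
    print(f"T={t}: top token chosen {picks.count(0) / 10:.0f}% of the time")
```

At T=0.1 the model almost always takes the single most likely path (rigid order); at T=2.0 it scatters across options (near randomness); around T=0.7 it explores while staying coherent, the edge-of-chaos region the flock example points to.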
This is a lot easier said than done. AI can't generalize or adapt on the fly. In organic systems, whether it’s the chemical signaling of liver cells, the bioelectric networks of planarian worms, or the real-time environmental cues that guide a flock of birds, intelligence emerges through iterative feedback loops. These loops allow systems to adjust their internal models based on new information in real-time, reducing prediction errors and improving adaptation. Bridging this gap is one of AI's biggest challenges.
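The loop itself is simple to state, even if running millions of them in parallel is not. A sketch of the generic update (our simplification of what such feedback does): hold an internal estimate, observe, and nudge the estimate by a fraction of the prediction error.

```python
def update(estimate: float, observation: float, rate: float = 0.2) -> float:
    """Nudge the internal model toward reality by a fraction of the error."""
    prediction_error = observation - estimate
    return estimate + rate * prediction_error

estimate = 0.0
for observation in (10, 10, 11, 9, 30, 10, 10):   # a noisy, shifting signal
    estimate = update(estimate, observation)
    print(f"observed {observation:>2}, internal model now {estimate:.1f}")
```

The estimate tracks the signal, absorbs the outlier, and recovers, which is exactly the kind of continuous self-correction a trained-then-frozen model lacks.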
As machines become more capable of exploring open-ended problems—designing novel molecules or creating art—they blur the boundaries between narrow intelligence and something more expansive. The key questions are: where do new goals originate, and what is agency when information and computation blur the line between the synthetic and the organic?
Selfhood and Distributed Agency in a Hybrid World
Information unites technology and life. The distinction between the organic world of evolved biological beings and the synthetic world of designed entities is fading—not only because machines are imitating humans or because we are so reliant on technology, but because intelligence transcends the gap. Whether in neurons or silicon, intelligence emerges wherever information is processed, predictions are made, and adaptation occurs. Think of intelligence not as a trait of humans or machines, but as a space where both collaborate, adapt, and transform.
Hybrid systems, where AI and human cognition merge, are the clearest expression of this continuum. Neural interfaces already allow paralyzed individuals to control robotic limbs, and over time, the brain rewires itself to treat these synthetic devices as extensions of the body, a shift in the sense of self. Identity becomes more fluid.
It's always been a challenge to draw the line between where we end and technology begins. Tools have always extended our abilities—externalizing memory, amplifying reasoning, or enhancing perception—but they remained static, under our control.
With AI, this relationship enters a new kind of flux. AI systems adapt, learn, and respond in ways that interact with our own thinking, creating a feedback loop that reshapes how we process the world and define ourselves within it. The self is no longer anchored solely within the mind or body but distributed across systems that influence our choices, goals, and sense of agency. This represents a major shift in the boundaries of cognition and identity—making the line between "us" and "it" increasingly difficult to draw.
The same principle we see in physical systems applies to cognitive systems. Language models are not just tools for generating text; they reshape how humans think and create. When you use AI to brainstorm ideas, you aren't just delegating creativity but engaging in a feedback loop where the machine’s suggestions provoke new insights. Over time, your thinking adapts to the AI’s capabilities, and the AI, in turn, refines itself based on your input. The result is neither purely human nor purely synthetic—it’s hybrid creativity, an emergent phenomenon that reflects the continuum of intelligence. Using ChatGPT today might not feel like this, but it’s coming as more humans learn to co-write (and so co-think) using AI.
Distributed agency is forced upon us at all levels—not just in a philosophical sense but in very practical terms. We can see just how fluid—and poorly understood—agency can be. Take the case of driverless cars, where there are difficult lines to draw around the boundaries of responsibility when things go wrong. When decision-making is shared, it raises hard questions about where goals come from and how they evolve. As systems begin to adapt, learn, and influence outcomes in ways no single human designed or predicted, we edge closer to the idea of open-ended agency—systems capable of defining their own objectives within ever-changing contexts. Unlike tools that follow predefined rules, these systems act more like organisms, responding to and reshaping their environments.
A real problem is that we lack a robust science of where and how novel goals originate. Emergent complexity—already challenging to study—becomes exponentially harder when it evolves into emergent goals. What competencies allow agents to define and pursue new goals? Where do these capabilities arise, and how can we understand them? It's an oxymoron to "design" such a truly agentic system. Such a system's behavior would be entirely contingent on its post-design experiences and learning.
And when we are rethinking agency, we have to rethink accountability. Back to the driverless car—agency does not belong to a single actor but is shared across programmers, environmental feedback, predictive models, and the passengers themselves. Decisions emerge not from a singular mind but from the convergence of human and machine processes.
By recognizing that information and computation are the binding forces, we can begin to understand agency not as the property of individuals—human or machine—but as something that emerges from the system as a whole, contingent on the system's history. Yet this shift brings practical challenges: if agency is distributed across a human-machine system, who owns the outcomes? Who determines what matters, what’s meaningful, or what’s valuable? As AI increasingly shapes our cognition, attention, and memory, it doesn’t just influence what we do—it reshapes who we are.
Perhaps the answers lie not in debating control but in rethinking the problem space of intelligence itself. Intelligence is often thought of as the ability to achieve goals within a defined space—but what if the space itself is dynamic and subject to transformation? In biology, transformations across "morphospace"—the landscape of possible forms and behaviors—are not hypothetical. They actually happen.
A favorite example is the metamorphosis of a caterpillar into a butterfly. The caterpillar dissolves its body and brain into an undifferentiated state, only to reorganize into a radically different creature. Its form, behavior, and environment shift entirely: from crawling on leaves in a 2D world to fluttering around in 3D space. Despite these profound changes, research suggests that some memories persist across these phases, serving as scaffolds for adaptation—a thread linking two vastly different embodiments of the same life.
This example illustrates a key insight: intelligence is not static, and agency is not confined to a single form. Instead, both are defined by their ability to adapt and transform to navigate new spaces.
What if AI could undergo similar transformations, reorganizing itself physically or computationally to explore new domains while retaining memory as a bridge across states? Such systems would not merely solve problems within predefined spaces but would redefine their operational boundaries, opening up new regions of morphospace to explore.
What matters in this scenario is not just who is in control but what kind of transformations are possible and how they enable continuity across phases. Memory becomes more than a static record—it is a dynamic tool that supports adaptation in unfamiliar environments. By focusing on these transitions, we can better understand how intelligence—whether organic, synthetic, or hybrid—thrives not by rigidly adhering to a single state but by embracing change and reorganization in an open-ended manner.
The ethical implications of creating open-ended agents are complex and in no way easy to anticipate. Should we aim to build AI that simply aligns with and fulfills our goals, or do we want truly autonomous agents with the same open-ended intelligence—and potentially moral worth—that we recognize in humans and animals? The former prioritizes utility, while the latter invites the creation of trillions of new "children" with lives we cannot fully vouch for. Once these agents exist, they will have stakes and experiences of their own, fundamentally reshaping how we think about responsibility and coexistence.
The idea of machines having a moral stake in our "society" can feel like science fiction, but it emerges naturally when we think of agency as open-ended action and adaptation. This creates a deep conundrum—machines that operate purely according to our goals risk being limited, unable to adapt or innovate in ways that surprise us. Yet machines capable of defining their own goals—of truly open-ended agency—may act in ways that diverge from our intentions. The very quality that makes them powerful and autonomous also makes them unpredictable.
Yikes, machines that are maximally useful may need to act like independent agents, but in doing so, they may cease to align with what we want.
Current legal and philosophical systems were not designed to accommodate minds that are radically different from ours. Our laws, which depend on anthropocentric notions of agency and intent, may crumble when confronted with AIs that navigate entirely different spaces of intelligence and morality.
Should we limit AI development to helpful tools that amplify human capacities, or should we embrace the inevitability of being supplanted by open-ended agents capable of far surpassing us? While some argue that improvements to our biology and intelligence are desirable, the creation of fully autonomous agents may take us beyond questions of enhancement into a territory of deep existential transformation.
And it raises hard questions. If our decisions, creations, and even our sense of self are shaped by synthetic systems, what remains uniquely human? What do we want to collectively keep and what do we want to discard? What things haven’t served us well? What new spaces open up for us? What could be possible with an entirely different conception of intelligence and agency? If AI can open up entirely new spaces for human exploration and action, it represents a fundamental breakthrough—expanding not just what we can do, but how we think about and engage with reality.
Next we consider how AI's "other mindedness" may impact how we conceive of reality itself.
Rethinking Reality
For most of our history, technology has served as a tool to open spaces and to reduce uncertainty, to clarify and sharpen our understanding of reality. Maps let us navigate unfamiliar terrain. Telescopes let us see the stars, expanding our vision to the vastness of the cosmos. Microscopes, in contrast, turned inward, revealing the hidden intricacies of the tiny and unseen. Computers cataloged and processed what we could not hold in memory. These tools made the world more legible, providing empirical and theoretical mechanisms to map patterns and refine predictions.
Synthetic systems compel us to reconceptualize reality. Why? Because we’re conditioned to see reality as something fixed—a stable, external truth waiting to be observed. But as you’ll discover, AI doesn’t just clarify reality; it reshapes it, making it fluid, emergent, and dynamic. By the end of this piece, we hope you’ll see how this shift challenges not only what we know but how we know, pushing us into a world where reality is something we co-create with the systems we build.
The Scientific Revolution reshaped how humanity understood reality, shifting it from a matter of belief to something observable and measurable. For the first time, reality was treated as an external, objective system governed by universal principles. Think of the scientific method as a kind of technology—a process designed to reduce uncertainty by systematically testing observations against reality. Reality expanded outward. With better tools, we gained sharper models, more accurate predictions, and an ever-expanding horizon of knowledge.
But this expansion has limits. The scientific method depends on a crucial assumption: that reality exists independently of the observer, as something measurable, testable, and ultimately objective. By bracketing subjectivity—removing the observer from the observation—science creates a space for external truths to emerge. But this assumption is starting to be challenged.
At the level of physical systems, quantum mechanics shows that observation cannot be cleanly separated from reality. The observer effect—illustrated by the double-slit experiment—reveals that the act of measurement changes the behavior of particles. Before being observed, a particle exists in a superposition of states: many possibilities at once. But the moment we measure it, the wave function collapses, and the particle "chooses" a single state.
This isn’t just a technical quirk—it points to a deeper principle: reality at its most fundamental level doesn’t exist in a fixed form independent of observation. The observer is not a bystander but a participant, entangled with the system being measured.
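To make the superposition idea concrete, here is the textbook two-state case in standard notation (our addition; the article discusses the double-slit only qualitatively):

```latex
% Before measurement, the state is a weighted sum of possibilities:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measurement collapses it: outcome 0 occurs with probability
% |\alpha|^2, outcome 1 with probability |\beta|^2.
```

Until the measurement happens, neither outcome is "the real one"; the act of observing is what selects it.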
Consciousness is more gnarly still—not an object to study but the subject of experience itself. As Descartes argued, the only certainty we can claim is that we are conscious—that we are experiencing. Think about what you know at this moment—the only thing you can know for sure is your experience of reading this right now. Science grapples with this problem. Consciousness exists at a boundary where objective study falters because the observer and the observed are inseparable.
If AI is now a participant in reality, does it alter reality itself? This question feels like it should belong in philosophy seminars or sci-fi novels, yet here we are, living it. To understand what this means, consider how AI functions not as a neutral tool, but as an actor. It identifies patterns and adjacencies humans might never perceive, generates insights that feel alien, and even shapes the decisions we make by influencing what we see and prioritize. But does this participation amount to a shift in the nature of reality, or just in how we perceive it?
Let’s push deeper: reality, as science has framed it, is external and objective—a set of truths independent of who or what observes them. But AI’s participation complicates this. When an AI uncovers a pattern in data or generates a solution, it’s doing more than clarifying what’s “out there.” It’s adding something to the mix: a new way of slicing and interpreting reality. AI doesn’t just reflect the world as it is; it contributes new dimensions to what is possible. If it reveals paths that weren’t obvious or solutions that feel unnatural to us, is it expanding the boundaries of reality itself?
Here’s where it gets tricky. AI doesn’t experience the world; it doesn’t feel uncertainty, wonder, or doubt. For humans, these are core to how we engage with reality—they drive us to ask questions, to imagine, to make meaning. AI skips the meaning part. For it, reality is just probabilities, data distributions, and optimization problems. But if AI’s “understanding” of reality diverges from ours—if its perspective shifts what we believe to be true—how do we reconcile the difference? And more provocatively, when AI outputs reshape what we notice, believe, and act upon, is it reshaping reality itself, or just nudging us into unfamiliar territories within it?
This isn’t just a technical question—it’s existential. When machines participate in the creation of “truth” or “knowledge,” they alter the framework within which humans understand the world. Reality becomes less about what is and more about what can be constructed, modeled, and acted upon. For us, this means adapting to a reality that is not fixed but fluid, co-created in part by systems that don’t share our subjective experience. And that raises the biggest question of all: if AI helps us see reality differently, how do we decide what still matters, what remains real, and where meaning comes from?
All this, and we haven’t even touched on the question of whether AI could ever be conscious—or perhaps even alive. That’s a rabbit hole for another time. For now, we’ve got plenty to chew on just grappling with how AI reshapes reality and what that means for the human experience.
By now we hope to have convinced you that our relationship with reality is going to become a lot more complex. Our tools—AI included—have expanded what we can perceive and predict. They map external systems, identify patterns invisible to us, and reduce uncertainty (that is, increase what we do know, what we can know, and how we know it) in ways that exceed our cognition. But they cannot cross the boundary into experience. AI clarifies what is probable, but it does not experience uncertainty. For machines, the unknown is a problem to optimize. But for us, what we don't know is a source of meaning. This is where curiosity, ingenuity, and the drive for sensemaking kick in. The feeling of uncertainty drives us to interpret, imagine, and engage.
The paradox of consciousness reminds us that reality is not just "out there"; it is experienced. It is as much about subjectivity as it is about objectivity. Our tools may help us refine what we see, but they do not resolve the fundamental challenge: that the observer, inescapably, remains part of the equation.
AI alters the boundaries of reality, making it more fluid and miscible. Machines don’t just clarify the world; they reveal what is adjacent, emergent, and sometimes alien. They expose patterns we never noticed, solutions we would never have imagined, and connections too subtle or complex for human cognition to grasp alone.
This raises an existential question: Is our cognition even up to the task? Just as a dog cannot fathom the concept of its owner meeting a stranger next week in the next town down the highway, we too may lack the perceptual tools to conceive of the full nature of reality.
So AI can't magically grant us mastery over how reality really is, but it could shift our perspective. It could show us that the world is far richer, stranger, and more layered than we are equipped to perceive. Ultimately AI won't only sharpen our understanding of what is, it will expand our awareness of what could be. So as it introduces new layers of reality, it will also destabilize it, pushing us into spaces we cannot perceive in any "human-like" way, let alone control.
So that's a lot to take in. Let's think nearer term. Ask yourself: how will you feel the first time a machine’s sense of the “real” diverges from yours? When it uncovers a pattern or reveals an optimization that feels alien to your intuition? Questions like this matter because, in a hybrid future, the idea of “truth” or “reality” will be much more plastic. What it feels like to know something will undoubtedly change.
The realities that emerge from this dynamic are not guaranteed to be comfortable. Machines reveal what is adjacent—possibilities just beyond our current understanding—and sometimes what feels alien—novel and foreign. A machine might optimize in ways that are unintuitive, unsettling, or at odds with our values. It might identify efficiencies that challenge our cultural assumptions or expose solutions we find difficult to accept.
These “alien” perspectives aren't necessarily harmful. They remind us that reality isn't fixed. We're constantly toggling between humility and imagination. It means recognizing that machines offer not just tools for prediction but opportunities to rethink what is real, what is meaningful, and what is possible.
This won't be easy. Each of us will have to be adept at holding multiple truths in tension—at balancing machine-generated clarity with human ambiguity, and at interpreting the unfamiliar without rejecting it outright. This is where our unique relationship with uncertainty becomes a strength. Unlike machines, we are equipped to embrace ambiguity, to explore emergent spaces, and to create meaning in the midst of the unknown.
Reality, therefore, is not an external truth but something that emerges through interaction. Philosophers might argue that reality itself is unchanged, that these systems only alter how we know or perceive it. Yet for practical purposes, the line between perception and construction is thin. Because AI doesn't necessarily add clarity or a single "truth" but instead it participates in reshaping what we notice, what we believe, and what becomes possible. In this sense, reality remains both out there and in flux—co-created by the systems we build and the meanings we bring.
The real challenge is to resist the urge to preserve control and instead adapt to the fluidity of a new relationship. Machines may sharpen our understanding of what is, but it is our role to define what matters. The world will be richer, stranger, and more open-ended than ever before—a world where resolving what is real to each of us—individually and collectively—is the very source of life’s creativity and meaning.