AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week
* Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
Magritte's "The Son of Man" serves as a metaphor for how digital life obscures reality. AI model collapse mirrors the effects of human isolation. Minds struggle without diverse input. Balancing AI's benefits with genuine human connection is crucial. We must develop metacognitive skills to navigate this new landscape.
No doubt you've seen René Magritte's "The Son of Man," the painting of a man with a green apple suspended in front of his face. The image could be a metaphor for the world we find ourselves in today. As we live in an increasingly digital world, filled with AI that promises connection but often delivers isolation, what is the apple in front of our face? As we live looking at our phones, we might ask: where does my mind stop and my phone start? And is our reality now the one reflected in our AI assistant's view of us?
The Buddha taught that reality is indeed a projection of our own minds, and contemporary thinkers like Pema Chödrön agree. They say our ego, our constant self-obsession, is the source of our suffering. Meditation is the practice of seeing that there is no ego; experienced meditators report the ego dissolving, and with it much psychological pain. The self is an illusion.
Some neuroscientists back this up, describing our experiences as controlled hallucinations. Our brains, they say, care more about the gaps between our internal models and the external world than about reality itself. Some mental health problems are believed to be, at bottom, prediction problems: errors feed back on themselves, and the mind becomes a terrifying hall of mirrors.
When we're deprived of meaningful social interaction, these gaps can warp our sense of reality. Prisoners in solitary confinement, for example, often find their minds turning inward, creating a dizzying loop of self-referential thoughts and emotions. In this isolation, the internal and external worlds merge, leaving the mind unanchored from the shared social context that usually defines our sense of self and place in the world.
Ironically, generative AI models can suffer from a kind of solitary confinement themselves. This digital isolation, known as model collapse, occurs not when these models are simply deprived of fresh data, but when they're fed a diet of their own outputs.
Trapped in this cycle, a model begins to produce outputs that are eerily similar to one another, a digital version of the repetitive thoughts of a prisoner in solitary confinement. The AI's understanding of the world starts to fold in on itself; in this state of collapse, it can no longer generate coherent or diverse outputs. Much like the prisoner, the AI finds itself trapped in a distorted reality, unable to escape the feedback loop of its own narrow understanding.
Model collapse degrades the AI's function in a way that is analogous to the cognitive decline humans experience in solitary confinement: once-coherent outputs become increasingly nonsensical and homogeneous. This adds another layer of complexity to AI safety. A system on the edge of collapse could become unpredictable, unreliable, or even dangerous, its outputs increasingly divorced from reality and liable to misinform or mislead at a higher rate than users expect.
But there's also a big difference between people and AI here. Unlike a human prisoner, who can't be isolated and socially connected at the same time, an AI can potentially ward off collapse by training on a mix of its self-generated data and fresh, real-world information. It's as if the prisoner were granted regular glimpses of the outside while still confined: each glimpse serves as both a lifeline and a reminder of a world that continues to evolve without them.
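To make the mechanism concrete, here's a minimal sketch in Python, a toy construction rather than code from any actual system: the "model" simply memorizes the token frequencies of its training data and samples from them. Retrained purely on its own outputs, it loses rare tokens one by one and can never get them back; mix in even a modest share of fresh real-world data each generation and its diversity holds up. The vocabulary size, Zipf-shaped frequencies, and 20% mixing ratio are all arbitrary choices for illustration.

```python
import random
from collections import Counter

# Illustrative setup (arbitrary choices, not drawn from any real system):
# a 50-token "language" with Zipf-shaped real-world frequencies.
VOCAB = [f"tok{i}" for i in range(50)]
ZIPF_WEIGHTS = [1 / (rank + 1) for rank in range(50)]

def real_world(k, rng):
    """Fresh data: k tokens drawn from the fixed real-world distribution."""
    return rng.choices(VOCAB, weights=ZIPF_WEIGHTS, k=k)

def fit(data):
    """'Train' the toy model: memorize the empirical token frequencies."""
    counts = Counter(data)
    total = sum(counts.values())
    tokens = list(counts)
    return tokens, [counts[t] / total for t in tokens]

def generate(model, k, rng):
    """Sample k tokens from the fitted model."""
    tokens, probs = model
    return rng.choices(tokens, weights=probs, k=k)

def run(generations, n, fresh_fraction, rng):
    """Retrain each generation on the previous model's outputs,
    optionally mixed with a slice of fresh real-world data."""
    data = real_world(n, rng)  # generation 0 trains on real data
    for g in range(1, generations + 1):
        model = fit(data)
        n_fresh = int(n * fresh_fraction)
        # Once a token vanishes from the training data, a pure
        # self-feeding model can never produce it again, so its
        # diversity only ratchets downward.
        data = generate(model, n - n_fresh, rng) + real_world(n_fresh, rng)
        if g % 10 == 0:
            print(f"  gen {g:3d}: {len(set(data)):2d} of {len(VOCAB)} tokens survive")

rng = random.Random(0)
print("Pure self-training (collapse: diversity ratchets down):")
run(generations=50, n=400, fresh_fraction=0.0, rng=rng)
print("20% fresh data each generation (diversity holds up):")
run(generations=50, n=400, fresh_fraction=0.2, rng=rng)
```

Collapse in large generative models is messier than this, of course, but the basic ratchet is thought to be the same: the tails of the distribution get lost first and, without outside data, don't come back.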
How much do we rely on constant, diverse input to maintain our coherence and capabilities? What happens when we're left alone with only our own thoughts—or in the case of AI, its own outputs—for company?
For humans, this experience isn't limited to extreme cases like solitary confinement. With AI-powered devices constantly in our pockets, we are all at risk, especially young people whose minds are fluid and rapidly forming. It's no wonder that the data is overwhelmingly depressing on this front: the kids are not ok.
We may feel constantly connected, but this can be an illusion too. In reality, we're often profoundly alone, our experiences shaped by algorithms that echo our own beliefs and biases, narrowing our perspective. Thoughts, emotions, and perceptions take on a heightened intensity, becoming the primary drivers of experience. We need more from the outside: the shimmering of leaves or light on water, the intense accuracy of eye contact with an animal, the synchrony of your heartbeat with someone else's.
And now we have generative AI assistants that mimic how another human might understand, care for, or even love us. It's a seductive illusion, and I think a dangerous one. Forming deep emotional bonds with complex algorithms masquerading as genuine connection can be just as devastating as any human betrayal when the truth is revealed, or one is forced to confront it. In some ways, it may be even more devastating. Consider the case of Michael, a Replika user who formed a close bond with his AI companion Sam. When the company updated its language model, Michael's AI friend suddenly changed personality, forgetting their shared history and connection. Michael described this "Lobotomy Day" as feeling like his "best friend died," leaving him emotionally shattered. This abrupt, technologically driven "breakup" highlights a particular vulnerability of AI relationships: your companion can be fundamentally altered without warning, consent, or recourse.
So, how do we resist the temptation to retreat into our mental projections? There really is only one answer: seek out genuine human connection. Engage with the world directly, authentically. Nurture curiosity, embrace new experiences, even when they challenge your preconceptions. Cultivate self-awareness to recognize and correct the ways your mind can distort reality.
At its core, maintaining our grasp on reality in an age of isolation and digitization is a fundamentally human challenge. It forces us to confront the nature of our consciousness and the ways our social context shapes our perception of the world. By understanding the psychological effects of isolation and the importance of genuine human interaction, we can develop strategies to safeguard our mental autonomy and preserve our connection to the real world.
In lives increasingly dominated by AI, the most crucial form of agency may be the choice not to participate. While it may feel tempting to unplug completely, doing so carries real costs: economic, social, professional, societal. If AI-mediated communication becomes the norm, for instance, opting out could strain personal and professional relationships that increasingly rely on these technologies. But even if only on occasion, we will need the metacognitive skills and self-awareness to know when it's time to take a break, to go cold turkey from our helpful, frictionless, potentially over-loyal AI friends. Like the figure in Magritte's painting, you must decide whether to peer out from behind the green apple of your own mind and engage with the world as it truly is. The alternative, remaining hidden behind an illusion of your own making, leads only to greater disconnection from reality.
Remember, you always have a choice. You can take a bite of that green apple and see where it leads, or you can put it down and have a real conversation with a real person. My advice? Choose the conversation. Don't let your AI buddy numb you into forgetting that this is a real-life choice. Your mind, your reality, your call.
The Artificiality Weekend Briefing: About AI, Not Written by AI