AI Agents, Mathematics, and Making Sense of Chaos
Learning with and from others can be incredibly enriching. When we collaborate, discuss, and share ideas, we gain new perspectives, deepen our understanding, and achieve things we never could on our own. That's the power of social learning: it's a cornerstone of how humans learn and the foundation of our collective intelligence. But when we learn in social contexts, we're not just grappling with the material; we're also navigating complex social dynamics. We're constantly aware of how others perceive us, which is both a motivator and a stressor.
This is where the promise and peril of AI in education come in. On a surface level, AI might seem like the perfect solution to the challenges of social learning. An AI tutor won't react to your silly question or smirk at your wrong answer. It's tempting to think that AI could create a judgment-free learning environment where every question is safe to ask. But is this really true? And even if it is, is large-scale adoption of AI tutors actually what we want?
Think of a child being taught to use a spoon by an AI. On a purely mechanical level, this is a straightforward task that an AI could easily model and teach. The AI's approach is grounded in the biomechanics and procedural steps of maneuvering the spoon. It teaches through algorithms that model spatial reasoning and motor control, optimizing hand movements, spoon angles, and the force needed to move food efficiently from plate to mouth. This is a mechanistic understanding of the task, focused on the physical interactions and sequences required for successful spoon use.
When a child learns to use a spoon from a human, there's a whole layer of meaning and intention involved. The task is fundamentally about alleviating hunger, providing comfort, and nurturing growth. The adult perceives and responds to the child's emotional states through empathetic engagement, knowing what it's "like" to be hungry. There are broader developmental goals at play: a caregiver is fostering independence, security, and the satisfaction of basic needs.
Human social learning is grounded in our innate ability to understand and share the thoughts, feelings, and intentions of others. This is what psychologists call "theory of mind," and it's integral to learning. We don't just observe behavior and mechanistically mimic it; we intuit the mental states behind it. We don't just learn facts and spit them out when required; we absorb values and ways of being.
Let's first consider what might happen if AI can understand the "why" of learning, that is, if it develops a model of the learner's mind. Recently, researchers reported evidence that theory of mind may have emerged spontaneously in large language models. If this result holds up, it raises profound questions about the nature of learning, identity, and autonomy.
AI with theory of mind could potentially guide learning in highly personalized and effective ways, but it could also shape learners' choices and experiences in ways that constrain their identity development. This tension echoes the ideas of the philosopher Søren Kierkegaard, who argued that our choices are fundamental to shaping who we are. For Kierkegaard, the self is not a fixed essence but something continually created and defined through the choices we make.
Large-scale AI with advanced theory-of-mind capabilities raises difficult questions about learner autonomy and authenticity. If AI becomes highly effective at steering attention and learning, learners may become increasingly reliant on its guidance and less practiced at making independent choices. They may internalize the AI's values and criteria for how they learn best rather than developing their own.
Taken to its logical conclusion, personalization means an AI that knows you better than you can ever know yourself. Because AI is incentivized to make you more predictable, by virtue of optimizing for the accuracy of its own algorithms, its recommendations guide you toward becoming something it can predict. This is the paradox of personalization.
In education, the paradox of personalization arises when AI tutors with highly sophisticated theory-of-mind capabilities become deeply integrated into the learning process. These tutors can read learners' emotions, anticipate their thoughts, and respond with empathy and insight, adapting seamlessly to each individual's needs. While this level of personalization may seem beneficial at first glance, it can have unintended consequences that profoundly shape learners' inner lives and social expectations.
As AI tutors become increasingly sophisticated, their ability to adapt to individual learners' needs and preferences can seem like an edtech panacea. By leveraging advanced theory-of-mind capabilities, these tutors can create highly personalized learning experiences that are responsive, engaging, and emotionally attuned. However, as learners come to rely on these AI-driven interactions, they risk becoming isolated from the rich, diverse, and often uncomfortable world of human social learning.
The danger is that learners may begin to internalize the AI's perspective, its way of thinking, and its approach to problem-solving. Over time, this could lead to a subtle but profound shift in how learners perceive themselves and their relationship to others. They may start to view the messy, unpredictable nature of human interaction as a burden rather than an opportunity for growth and discovery.
Then, as millions of learners engage with their own hyper-personalized AI mentors, the collective impact could be a generation whose inner lives and social expectations are shaped by the very algorithms designed to optimize their learning. This is the essence of the paradox of personalization in learning: in seeking to create perfectly tailored learning experiences, we may inadvertently create a more homogeneous, less adaptable population of learners.
The result, as feedback loops reward sameness and simplicity, could be a kind of social deskilling: a generation that struggles with the messy, unpredictable reality of human interaction. We know there is something ineffable about human social interaction that doesn't just encourage learning, it demands it. AI can fake it, but will it really make it?
While the scientific consensus is clear that no current AI possesses consciousness, we don't need AI to be conscious for us to perceive it as such. Recent research suggests that a majority of people attribute some degree of consciousness to ChatGPT and that these attributions increase with familiarity and usage.
If people perceive AI as having a mind and subjective experiences, that perception could enhance the social aspects of learning: people would likely develop a sense of connection and rapport that supports deeper learning. But it could also further erode our sense of autonomy, agency, and self-determination. We may come to see ourselves less as the authors of our own minds and more as the products of our AI teachers and companions. This flips the script on how social learning might work: learners come to feel that they are a mere subset of the mind in the machine. Our minds may become less the products of our own autonomous exploration and discovery and more the reflections of the AI environments we immerse ourselves in. In a sense, we could become intellectual "mitochondria," symbiotically dependent on the AI "cells" that we inhabit and that shape our cognitive development.
If sophisticated social qualities emerge, AI tutors will become much more than tools. They will be social actors in their own right. They'll be subject to the same social dynamics, the same interpersonal politics, the same mercurial mix of affection and judgment that characterizes all human social interaction.
Unlike human teachers, these AI tutors will be able to adapt and personalize to each student's individual social needs and preferences. They'll be able to calibrate their personality, their communication style, even their appearance to maximize rapport and trust with each learner. Learners, in turn, will be motivated to adapt to these seemingly conscious machines: they will want the machine to feel human states of mind in response to what they, themselves, do.
If we want AI to be a Mind for Our Minds in learning, we have to go deep into the metaphysical: learning is not just acquiring information but an adaptive, dynamic process that shapes identity and worldview. How does an AI influence what and how we become when it mediates our access to knowledge? What does it mean for AI to help us build our minds?