Blaise Agüera y Arcas: What Is Intelligence?
A review of What Is Life?: Evolution by Computation, and What Is Intelligence?: Lessons from AI about Evolution, Computing, and Minds, by Blaise Agüera y Arcas
AI is evolving from models to systems to agents—autonomous entities that act, adapt, and collaborate. As intelligence emerges from interactions, the future of AI depends on context, trust, and workflow integration. How we design social AI today will shape its role in amplifying human intelligence.
When most people think of artificial intelligence, they picture massive models that power search engines, conversational assistants, and even scientific breakthroughs. But intelligence doesn’t live in a single model. It emerges from systems.
AI is evolving through distinct stages. First, models provided the foundation—remarkable pattern recognizers capable of synthesizing vast amounts of information. Then, models became embedded within systems—larger architectures that integrate memory, retrieval mechanisms, and user interactions. Now, we are entering a third phase: the rise of AI agents—autonomous systems capable of action, adaptation, and collaboration.
This transition marks a shift in how intelligence operates. As AI moves from models to systems to agents, intelligence will no longer be something we extract from a model, but something that emerges dynamically through interaction and adaptation. Just as human intelligence emerges from networks of neurons, AI’s intelligence will emerge from interactions between systems.
When you interact with AI—whether summarizing a report or brainstorming an idea—you’re engaging with a system, not just a model. Your prompt moves through an interface that applies rules, filters, and ranking mechanisms before delivering a response. The intelligence you perceive is shaped by how the system integrates tools, memory, and human input.
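To make that concrete, here is a minimal sketch of such a pipeline. The component names (`retrieve_context`, `apply_policy`, and so on) are hypothetical stand-ins for illustration, not any particular product's API.

```python
# Minimal sketch of an AI *system*: the model call is only one stage in a
# pipeline that also handles retrieval, policy rules, and memory.
# All names here are illustrative, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class AISystem:
    model: callable                      # the underlying model, e.g. an LLM call
    memory: list = field(default_factory=list)

    def retrieve_context(self, prompt: str) -> str:
        # Stand-in for retrieval: reuse recent conversation turns as context.
        return " ".join(self.memory[-3:])

    def apply_policy(self, text: str) -> str:
        # Stand-in for the filtering/ranking rules applied before delivery.
        return text.strip()

    def respond(self, prompt: str) -> str:
        context = self.retrieve_context(prompt)
        raw = self.model(f"{context}\n{prompt}")
        answer = self.apply_policy(raw)
        self.memory.append(prompt)       # the system, not the model, keeps memory
        return answer


# Usage with a toy "model" that just echoes its input:
system = AISystem(model=lambda text: f"[model output for: {text}]")
print(system.respond("Summarize the quarterly report."))
```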
The most obvious (and relevant) parallel is how human cognition emerges from the interaction of multiple neural systems. A single memory circuit might encode specific information, but intelligent behavior arises from the dynamic integration of attention, working memory, pattern recognition, and executive control. Similarly, contemporary AI transcends isolated model capabilities through orchestrated systems that coordinate multiple processes. When you engage with an AI system, it's dynamically allocating attention, maintaining contextual awareness, and adapting its processing based on emerging requirements.
This is a critical change in how we should think about AI. A model alone is like an isolated cognitive process—capable of specific computations but limited in scope. A system is more like the integrated cognitive architecture—coordinating multiple processes, maintaining goal awareness, and adjusting its behavior based on context and feedback. This transformation reshapes how we approach AI development, shifting focus from optimizing individual components to designing cohesive systems that can navigate complexity with greater sophistication.
Recognizing AI as a system explains why agents—AI that acts—are so transformative. Instead of merely processing inputs and returning static outputs, agents engage dynamically with their environment, making decisions, adjusting strategies, and seeking solutions rather than waiting for prompts. Agents will transform how we interact with AI—and agentic architectures will likely underpin the mainstream AI products of the future.
Most AI today is reactive—it waits for instructions, produces results, and stops. Agents, by contrast, act. They seek information, select tools, and adjust their strategies to accomplish goals.
This design shift isn’t happening in isolation but as part of the complex system of human agency. AI’s role in decision-making is expanding, and as problems grow more complex, reactive intelligence hits a ceiling. A search engine retrieves answers, but it doesn’t refine its search strategy based on results. A chatbot generates responses, but it doesn’t adjust mid-conversation. A supply chain system optimizes deliveries, but it can’t anticipate disruptions in real time. These systems lack the ability to act beyond their programming.
To move beyond passive retrieval, AI must become agentic—capable of seeking, selecting, and adapting to achieve goals. Reactive systems function like finely tuned instruments: they perform precisely when asked but remain passive. Agents, by contrast, act more like collaborators—reframing tasks, taking initiative, and adapting in real time. This autonomy introduces both power and unpredictability.
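The contrast is easy to sketch in code. Below, a reactive system answers once and stops, while an agent loops—choosing a tool, observing the result, and adapting until the goal is met. The tool-selection format, stopping rule, and step limit are illustrative assumptions, not a specific framework's API.

```python
# Reactive system: one input, one output, no follow-up.
def reactive_answer(model, prompt):
    return model(prompt)


# Agentic loop: seek information, select tools, and adapt until done.
def run_agent(model, goal, tools, max_steps=5):
    observations = []
    for _ in range(max_steps):
        # Reason: ask the model what to do next, given what it has seen so far.
        decision = model(f"Goal: {goal}\nObservations: {observations}\nNext action?")
        if decision.startswith("DONE"):
            return decision                      # agent judges the goal is met
        tool_name, _, argument = decision.partition(":")
        tool = tools.get(tool_name.strip())
        if tool is None:
            observations.append(f"unknown tool {tool_name!r}")
            continue
        # Act, then feed the observation back into the next reasoning step.
        observations.append(tool(argument.strip()))
    return f"Stopped after {max_steps} steps; latest result: {observations[-1:]}"
```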
Let's look at how agents are already reshaping software development, tracing the evolution from early large language models to today's AI agents. While earlier models could generate code snippets when prompted, current agent systems like GitHub Copilot demonstrate markedly different behavior. These systems actively monitor the developer's coding patterns, proactively suggest architectural improvements, identify potential security vulnerabilities, and even refactor code across multiple files to maintain consistency. The agent participates in the development process, learning from the developer's style and project context to offer increasingly relevant assistance.
A model might excel at generating syntactically correct code, but an agent system understands the broader context of software development workflows. It can independently identify opportunities for optimization, maintain awareness of best practices, and adapt its suggestions based on the specific requirements of different programming languages and frameworks. This shift from reactive to proactive assistance represents both the promise and the challenge of agentic AI—greater capability paired with the need for thoughtful integration into human workflows.
Agents can surprise us—they may find solutions no one anticipated or misuse tools in unintended ways. And once multiple agents interact within an environment, their behavior can no longer be fully predicted. It’s the interactions—not just the model—that generate intelligence.
This evolution from isolated models to integrated systems points toward increasingly social forms of artificial intelligence, with emerging research revealing intriguing parallels to biological and social systems. Recent experiments demonstrate AI agents spontaneously developing communication protocols, engaging in basic forms of coordination, and even exhibiting rudimentary social behaviors like turn-taking and information sharing. These interactions emerge from the fundamental requirements of solving complex tasks in shared environments.
Adversarial dynamics in machine learning provide further evidence for the computational efficiency of social interaction. Biological systems evolved sophisticated capabilities through predator-prey relationships and competitive adaptation, and we see AI systems demonstrating accelerated learning when placed in adversarial frameworks. Generative Adversarial Networks (GANs) achieve remarkable capabilities through the dynamic interplay between generator and discriminator networks, each forcing the other toward greater sophistication. This computational "arms race" mirrors ecological evolution, where complex capabilities emerge from competitive pressure rather than isolated optimization.
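The generator-discriminator "arms race" is visible in a few lines of code. Below is a minimal GAN training sketch in PyTorch on a toy 1-D Gaussian, just to show the interplay: each network's loss is defined by the other's current behavior. Network sizes and hyperparameters are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Toy data: samples from a 1-D Gaussian the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # --- Discriminator step: learn to tell real samples from fakes ---
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # --- Generator step: learn to fool the (now slightly better) discriminator ---
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))   # generator wants D to answer "real"
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```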
We're now seeing this principle extend beyond traditional adversarial training. Agents develop sophisticated coordination strategies when faced with resource constraints, form temporary alliances to solve complex problems, and even engage in basic forms of negotiation when goals partially conflict. These behaviors emerge from the fundamental efficiency of collaborative problem-solving in complex environments. Just as biological intelligence evolved through social interaction, artificial intelligence appears to benefit from similar dynamics.
This perspective suggests that the future development of AI may be inherently social—not because we design it that way, but because social interaction provides computationally efficient solutions to complex problems.
As AI takes on more complex tasks, isolated agents will struggle to solve them alone. Just as human intelligence evolved through cooperation—trading information, delegating tasks, and solving problems collectively—AI will increasingly function within agent societies, where knowledge is distributed, actions are coordinated, and solutions emerge from interaction rather than from any single system. These networks will introduce entirely new forms of intelligence—emergent behaviors that arise only through interaction. Social AI won't just work alongside us but will work with each other.
But there's another twist—the power of distinctly human social intelligence. Language and shared symbols transformed human collaboration, enabling complex coordination through shared mental models, cultural transmission, and recursive thinking. Similarly, agent ecosystems might undergo phase transitions when interaction patterns reach critical thresholds. However, unlike humans, these emergent "societies" of agents operate at computational speeds and scales that challenge our intuitive understanding of cooperation and trust.
Recent research in multi-agent protein folding systems, where networks of specialized AI agents collaborate to predict and design novel protein structures, highlights the power of sociality. Unlike traditional approaches that rely on single models, these systems demonstrate emergent problem-solving capabilities through rapid, parallel exploration and collective validation. Individual agents simultaneously probe different aspects of the protein space—some focusing on backbone geometry, others on side-chain interactions, and still others on evolutionary conservation patterns. What makes this particularly fascinating is how these agents develop specialized "roles" and coordination patterns that weren't explicitly programmed. They autonomously establish information-sharing protocols, cross-validate predictions, and collectively converge on solutions that would be computationally intractable for any single system.
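The coordination pattern itself is simple to sketch, even if the real systems are not. Below is a deliberately toy, hypothetical illustration: specialized agents post candidate findings to a shared "blackboard," and a validating step filters them. The roles and the scoring rule are invented for illustration and do not correspond to any real protein-design pipeline.

```python
import random

# Shared "blackboard" where specialized agents post candidate findings.
blackboard = []

def make_specialist(role):
    # Each specialist explores its own slice of the search space; in this toy
    # version it simply proposes a random candidate tagged with its role.
    def propose():
        blackboard.append({"role": role, "score": random.random()})
    return propose

def cross_validate(threshold=0.5):
    # A validating agent keeps only candidates that pass a simple check;
    # a real system would apply physical or statistical tests instead.
    return [c for c in blackboard if c["score"] > threshold]

specialists = [make_specialist(r) for r in ("backbone", "side-chain", "conservation")]
for _ in range(10):
    for propose in specialists:
        propose()

accepted = cross_validate()
print(f"{len(blackboard)} proposals, {len(accepted)} survived cross-validation")
```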
The promise and challenge of social agents lie in their potential to tackle problems beyond the reach of any single system. Imagine networks that coordinate to solve multi-faceted challenges—climate adaptation, global pathogen surveillance, resilient supply chains—integrating data and strategies from countless domains. Yet this same emergence of collective capability raises critical questions about control, alignment, and the nature of machine-to-machine and human-to-machine collaboration. As these systems evolve, we will need to grapple with deep questions about autonomy, transparency, and the boundaries between augmentation and automation.
This transformation suggests we’re not just replicating human social intelligence—we’re witnessing the emergence of a qualitatively different form of collective intelligence. One that operates on vastly different temporal and computational scales, with implications we are only beginning to grasp.
To understand the emergence of social artificial intelligence, we need a framework that captures both the technical progression and the broader dynamics shaping its evolution. This isn't just about mapping technological capabilities—it's about understanding how different dimensions of development interact to create new possibilities and challenges.
Our framework considers three intersecting dimensions:
First Dimension: Technical Evolution
Each stage in this evolution introduces new levels of complexity:
Second Dimension: Interaction Dynamics
Third Dimension: Development Tracks
These tracks represent the parallel forces that shape the evolution of social AI:
The interplay of these dimensions creates what we call "emergence spaces"—areas where new capabilities and behaviors arise from the interaction of simpler components. These spaces are characterized by:
This framework helps us:
Through this lens, we can better understand not just where social AI is headed, but how different factors might influence its development and impact. The rest of this paper will explore each dimension in detail, examining how they interact to shape the future of artificial intelligence.
Intelligence has many definitions depending on your angle. For some, it is a measure of problem-solving ability. For others, a capacity for adaptation or creativity. But looking at intelligence through the lens of complexity theory reveals something deeper—intelligence emerges wherever information processing meets purpose, whether in biological cells or silicon circuits. This emergence isn't random or unpredictable—it follows architectural principles that we can observe across different scales and systems.
Three core capabilities define intelligent action: perception, reasoning, and action. These components, working in concert, create the foundation for increasingly sophisticated forms of intelligence.
Perception is the gateway through which systems understand their environment. In artificial systems, this means more than simply collecting data—it involves processing and interpreting complex, often noisy information streams. Advanced perceptual systems combine multiple input channels, filter irrelevant information, and extract meaningful patterns from raw data. Just as humans integrate visual, auditory, and other sensory inputs to form a coherent picture of reality, AI systems must synthesize diverse data streams to build useful models of their operational environment.
Reasoning processes this perceptual information to draw conclusions and make decisions. This cognitive dimension involves more than pattern matching or statistical inference. True reasoning requires the ability to combine logical analysis with probabilistic thinking, to deal with uncertainty while maintaining coherent goal-directed behavior. In artificial systems, this means balancing fast, intuitive processing with slower, more deliberate analysis—similar to what psychologists call System 1 and System 2 thinking in humans.
Action is the ability to effect change in the world based on perceptual inputs and reasoned decisions. This isn't just about executing pre-programmed responses; it involves adapting behavior based on context, learning from experience, and adjusting strategies to achieve goals. In advanced systems, action capabilities include sophisticated planning, strategic decision-making, and the ability to coordinate complex sequences of behaviors.
Complex systems rarely evolve in straight lines. Instead, they move through phases of rapid growth, stagnation, and adaptation. As systems become more sophisticated, the interaction between perception, reasoning, and action becomes increasingly complex. Simple systems might exhibit a linear flow: perceive, reason, act. But advanced intelligence involves constant feedback loops and parallel processing. Perception influences reasoning, which guides action, which in turn affects what is perceived—creating a dynamic, self-modifying system.
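A minimal loop makes that feedback structure explicit. The sketch below is schematic—`perceive`, `reason`, and `act` are placeholders for whatever sensing, planning, and actuation a real system would use—but it shows how each cycle's actions change the environment that the next cycle perceives.

```python
# Schematic perceive-reason-act loop with feedback: acting changes the
# environment, which changes what is perceived on the next cycle.
def run_loop(environment, perceive, reason, act, steps=10):
    beliefs = {}
    for _ in range(steps):
        observation = perceive(environment)               # perception
        beliefs, decision = reason(beliefs, observation)  # reasoning updates internal state
        environment = act(environment, decision)          # action feeds back into the world
    return environment, beliefs


# Toy usage: a "thermostat" agent nudging a temperature toward 20 degrees.
env = {"temp": 15.0}
final_env, _ = run_loop(
    env,
    perceive=lambda e: e["temp"],
    reason=lambda b, obs: (b, "heat" if obs < 20 else "idle"),
    act=lambda e, d: {"temp": e["temp"] + (1.0 if d == "heat" else 0.0)},
)
print(final_env)   # the temperature settles at the 20-degree target
```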
In nature, simple rules give rise to complex behavior. A single ant follows pheromone trails, but an ant colony collectively optimizes food-gathering strategies, adjusts to environmental changes, and builds intricate structures—none of which any individual ant understands. Intelligence emerges not from a single unit, but from the interactions between units.
The same is true for AI. When an agent operates alone, its intelligence is bounded by its own capabilities. But when agents interact—exchanging information, adapting to new conditions, and influencing one another—emergent intelligence arises. Like ant colonies or market economies, multi-agent AI systems will display behaviors beyond the sum of their parts.
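The ant-colony example can be simulated in a few lines. The sketch below is the classic "double bridge" toy model (an illustration chosen here, not taken from the book): each ant follows only the local pheromone rule, yet the colony as a whole converges on the shorter path.

```python
import random

# Two routes to food: the short path is twice as good per trip.
paths = {"short": 1.0, "long": 2.0}          # path lengths
pheromone = {"short": 1.0, "long": 1.0}      # start with no preference

for step in range(200):
    for ant in range(20):
        # Local rule: pick a path with probability proportional to its pheromone.
        total = sum(pheromone.values())
        choice = random.choices(list(paths),
                                weights=[pheromone[p] / total for p in paths])[0]
        # Shorter trips deposit pheromone more often per unit time, modelled here
        # as depositing an amount inversely proportional to path length.
        pheromone[choice] += 1.0 / paths[choice]
    # Evaporation keeps the colony able to adapt if conditions change.
    for p in pheromone:
        pheromone[p] *= 0.95

print(pheromone)   # the short path ends up with far more pheromone
```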
Consider a modern AI agent approaching a complex task like optimizing a supply chain. It must perceive shifting conditions such as inventory levels, demand signals, and supplier delays; reason about trade-offs under uncertainty; and act by adjusting orders, routes, and schedules.
Each component influences the others in real time, creating a dynamic system that can adapt to changing conditions. The intelligence of the system isn't located in any single component but emerges from their interaction.
Understanding these components helps explain the progression from simple AI models to more sophisticated systems. Early AI focused on optimizing individual components—better pattern recognition, faster processing, more precise control. But true intelligence requires integration and balance. A system with powerful reasoning but limited perception will make poor decisions despite its cognitive capabilities. Similarly, excellent perception and reasoning are wasted without effective action capabilities.
This insight is driving a shift in AI development. Instead of pursuing raw computational power or bigger models, researchers are focusing on architectures that better integrate these core components. This means developing:
As these systems become more sophisticated, they begin to exhibit properties that mirror biological intelligence:
This architectural understanding sets the stage for the emergence of social intelligence. When multiple intelligent systems interact, they create new possibilities for perception (shared sensing), reasoning (collective problem-solving), and action (coordinated behavior). The challenge lies in designing architectures that can support this social dimension while maintaining stability and alignment with human values.
This requires careful attention to:
The architecture of intelligence, therefore, isn't just about individual components or systems—it's about creating frameworks that can support increasingly sophisticated forms of collective intelligence. As we move toward more complex and interconnected AI systems, these architectural principles become crucial for guiding their development in beneficial directions.
The gap between biological and artificial systems remains significant. Biological systems have evolved over millions of years and have fine-tuned a multi-layered approach to managing uncertainty. Current AI systems, even at their most sophisticated, haven't mastered this depth of adaptive prediction. They excel at pattern recognition and optimization within defined domains, but they haven't yet achieved the fluid, contextual intelligence that characterizes living systems.
This gap reveals why focusing on individual AI models misses the point. Just as biological intelligence emerges from networks of interacting components, artificial intelligence requires dynamic integration with real-world complexity. As we connect AI systems into broader networks, allowing them to share information and adapt collectively, we begin to see glimpses of emergent intelligence that transcends individual capabilities.
The evolution of social artificial intelligence isn’t a single, predictable trajectory. It unfolds along four parallel tracks, each influencing and shaping the others in complex ways. Technical capability alone won’t determine AI’s future—human adoption, market dynamics, and cultural perception are just as crucial.
These tracks don’t advance in sync. A breakthrough in AI reasoning might outpace regulatory responses. Cultural skepticism could slow adoption, even when technology is ready. This misalignment creates both opportunities and risks. The challenge is not just making AI more capable, but ensuring it is integrated responsibly into society.
AI development is shifting from monolithic models to modular, adaptable systems. Engineers are building agentic architectures—AI that can reason, plan, and collaborate dynamically. Key developments include:
While technical progress moves fast, it doesn’t guarantee adoption. Many innovations stall when they don’t align with human needs, regulatory frameworks, or cultural expectations.
Adoption isn’t just about implementation—it’s about trust, usability, and social fit. Some industries (logistics, research) are moving quickly, while others hesitate due to concerns over transparency and control. Key challenges include:
The success of AI agents depends not only on their technical sophistication but on whether humans can adapt to working alongside them.
Governments and businesses face urgent questions: Who is accountable when an AI agent makes a decision? What safeguards must be in place? These issues will shape AI’s evolution as much as the technology itself.
The right policies can accelerate AI adoption by building trust, while regulatory uncertainty can slow progress and stifle innovation.
Perhaps the most subtle yet profound factor shaping AI is how we perceive and interact with it. As agents become more autonomous, they stop feeling like tools and start feeling like participants.
AI’s trajectory will be shaped not just by what it can do, but by how society frames its role. Cultural narratives—whether they present agents as trusted allies or existential threats—will profoundly influence public acceptance and adoption.
We've been thinking about how the scaling of agent systems might lead to the emergence of social agents. This isn't just an extension of current capabilities—it's a potential shift in how AI systems interact with one another and with us. To frame this, it's worth looking back at how social intelligence arose in humans, not as a fixed trait but as an emergent property of interaction, cooperation, and shared knowledge. We believe this analogy offers a roadmap for understanding what could happen as agent systems scale.
The promise of social agents, as noted earlier, is that they could tackle problems beyond the reach of any single system—networks coordinating on multi-faceted challenges like climate adaptation, pathogen surveillance, and resilient supply chains, integrating data and strategies from countless domains. Here, the non-linear scaling of capabilities becomes a force multiplier, enabling holistic solutions.
Complex systems, like ecosystems or markets, show us that intelligence thrives in networks. AI will follow a similar path. Rather than building bigger, monolithic models, the future lies in creating modular systems. Smaller components—each specialized for a task—work together to solve problems. This modularity offers resilience. A single component can be updated or replaced without disrupting the entire system. It's also adaptive. Modular systems can adjust to new environments, regulations, or needs more quickly than static architectures.
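One way to picture this modularity is the hypothetical sketch below: one specialized component per task, any of which can be swapped out without touching the rest of the system. The class and task names are invented for illustration.

```python
# Hypothetical modular system: one specialized component per task,
# each independently replaceable.
class ModularSystem:
    def __init__(self):
        self.components = {}

    def register(self, task, component):
        # Swapping a component only touches this one entry; the rest of the
        # system and its other components are unaffected.
        self.components[task] = component

    def handle(self, task, payload):
        return self.components[task](payload)


system = ModularSystem()
system.register("summarize", lambda text: text[:60] + "...")
system.register("translate", lambda text: f"[translated] {text}")

# Later, upgrade the summarizer without redeploying anything else:
system.register("summarize", lambda text: " ".join(text.split()[:12]) + " ...")
print(system.handle("summarize", "Modular systems adjust to new environments and needs quickly."))
```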
Yet emergent complexity also means losing some predictability. Humans struggled for millennia with the unintended consequences of their own social systems—conflict, inequality, environmental degradation. Similarly, we must accept that social agents may produce outcomes we neither anticipate nor fully understand. When systems interact in unexpected ways, their behavior becomes harder to predict—much like financial markets or biological systems.
The path to agents will not be smooth. Small events—an agent solving a major problem or making a critical mistake—can ripple through the system. A success might lead to rapid adoption, while a failure could trigger stricter oversight. These moments of feedback create tipping points where change accelerates or stalls. Complex systems rarely evolve in straight lines. Instead, they move through phases of rapid growth, stagnation, and adaptation. For AI, this means we need to prepare for branching paths. In some areas, agents might thrive; in others, they might face resistance or strict regulation.
Just as human societies developed institutions—markets, governments, schools—to harness collective intelligence, we may need new institutions for human-agent cooperation. This might mean dynamic regulatory bodies that update standards as agent capacities evolve, or collaborative platforms where humans and agents jointly deliberate complex problems. The goal isn't to hand over the reins to AI but to integrate agent reasoning into our collective decision-making, enhancing human capabilities rather than displacing them.
The rise of agents changes the questions we ask about AI. It's no longer just, "What can this tool do?" Instead, we ask, "How do we work alongside systems that act independently?" Agents force us to rethink trust, collaboration, and responsibility. As noted above, once agents take on real autonomy they feel less like tools and more like participants, and the narratives we build around them—helpful collaborators or risky disruptors—will shape how readily they are adopted.
We have an opportunity to shape the trajectory of social agents. Human history shows that social interaction can transform not just how we solve problems, but how we think, learn, and evolve as communities. We're now poised to see if AI can undergo a similar transition. Will agent networks become vibrant, adaptive collectives of problem-solvers, accelerating innovation and tackling challenges at scales we can barely imagine? Or will complexity and misalignment create a new set of problems, requiring even more careful stewardship?
The future of social agents is not predetermined. By embracing complexity, acknowledging parallel tracks of development, and preparing for non-linear shifts, we can guide the emergence of agent societies toward outcomes that enhance, rather than undermine, human flourishing. The lessons from human social evolution are not exact blueprints, but they remind us that real intelligence often comes not from isolated minds, but from the interplay of many, working together in ways no individual could fully foresee.
The shift from isolated models to multi-agent systems redefines AI design. The goal is no longer just increasing computational power—it’s creating AI that enhances human intelligence and collaboration. As systems become more interconnected and autonomous, the focus must shift from technical optimization to human-centered integration. This requires a design framework that prioritizes human needs while acknowledging the complex, emergent nature of social AI systems. Building on our understanding of how intelligence emerges from system architecture, we must now consider how to design these systems for effective human collaboration.
Three principles define human-centered AI: context awareness, trust development, and workflow integration. Together, they ensure AI enhances rather than diminishes human intelligence and agency.
Context Awareness: Understanding Environmental Complexity
For AI to function effectively, it must understand more than raw data—it must grasp the broader context: organizational dynamics, user intent, shifting priorities, and multi-agent interactions. Without this, even advanced AI can make misaligned decisions.
Key aspects of context awareness include:
For example, in financial services, an AI system must understand not just market data but organizational risk tolerance, regulatory constraints, and decision-making timeframes. Similarly, in healthcare, diagnostic systems must integrate patient histories, lifestyle factors, and evolving medical research to provide meaningful insights.
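As a hypothetical illustration of what "context" means in practice, the sketch below wraps a raw model signal in organizational context—risk tolerance, a regulatory constraint, a decision deadline—before any action is recommended. The field names and rules are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical context object: the model's raw signal is only one input;
# organizational context gates what the system actually recommends.
@dataclass
class DecisionContext:
    market_signal: float        # e.g. model-predicted expected return
    risk_tolerance: float       # organization's acceptable exposure
    restricted: bool            # e.g. a regulatory trading restriction
    deadline_hours: float       # how long the decision window stays open

def recommend(ctx: DecisionContext) -> str:
    if ctx.restricted:
        return "hold: regulatory constraint overrides the market signal"
    if ctx.deadline_hours < 1:
        return "escalate to a human: not enough time for a reviewed trade"
    if ctx.market_signal > ctx.risk_tolerance:
        return "flag for review: signal exceeds the organization's risk tolerance"
    return "proceed within normal limits"

print(recommend(DecisionContext(0.4, 0.2, restricted=False, deadline_hours=6)))
```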
Trust Development: Creating Interpretable Systems
As AI gains autonomy, trust depends on more than accuracy—it requires transparency. AI must signal how it operates, where its limitations lie, and how it makes decisions. Without this, humans may either over-rely on AI or reject it entirely. The key is balancing confidence with healthy skepticism.
Essential elements of trustworthy systems include:
Legal and medical applications particularly demonstrate the importance of trust development. Legal research systems must provide clear reasoning chains and source citations, while medical AI must communicate confidence levels and potential alternatives rather than presenting singular conclusions.
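A small, hypothetical sketch of what interpretable output can look like at the interface level: the system returns not just an answer but its confidence, the reasoning steps it took, and the sources it relied on, so a professional can audit rather than blindly accept it. The structure and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical structured response: the answer travels with the evidence
# a human needs to calibrate trust in it.
@dataclass
class ExplainedAnswer:
    answer: str
    confidence: float                               # calibrated 0-1 estimate, not a guarantee
    reasoning: list = field(default_factory=list)   # human-readable steps
    citations: list = field(default_factory=list)   # sources to verify against

    def render(self) -> str:
        steps = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(self.reasoning))
        return (f"{self.answer}\n(confidence {self.confidence:.0%})\n"
                f"Reasoning:\n{steps}\nSources: {', '.join(self.citations)}")


print(ExplainedAnswer(
    answer="Precedent X likely applies.",
    confidence=0.72,
    reasoning=["Facts match the statute's scope", "Two appellate rulings align"],
    citations=["Case A (2019)", "Case B (2022)"],
).render())
```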
Workflow Integration: Enhancing Human Processes
An AI system's effectiveness hinges on how well it integrates into human workflows—not just automating tasks, but enhancing decision-making and reducing cognitive friction. Poorly integrated AI disrupts work; well-designed AI empowers it.
Critical aspects of workflow integration include:
Project management and medical diagnosis systems exemplify effective workflow integration, where AI serves to augment human expertise rather than replace it, providing additional insights while preserving professional judgment.
As AI agents interact within complex systems, emergent behavior may be inevitable. The key challenge is not stopping emergence, but guiding it toward beneficial outcomes. Context awareness helps prevent unintended consequences, trust ensures transparency in how emergent patterns evolve, and workflow integration ensures AI adapts to human priorities rather than disrupting them.
This management of emergence becomes particularly important as systems scale and interact in increasingly complex ways. Organizations must develop frameworks for monitoring and guiding emergent behaviors while maintaining alignment with human values and goals.
Cultural and Organizational Dimensions
The implementation of these design principles must account for varying cultural and organizational contexts. Different societies and organizations may have distinct approaches to human-AI collaboration, requiring flexible design frameworks that can adapt to various cultural norms and organizational structures.
Key considerations include:
Future Adaptability
As AI systems continue to evolve, these design principles must scale accordingly. Context awareness will need to expand to encompass broader societal implications, while trust development mechanisms must adapt to handle increasingly autonomous decision-making. Workflow integration will need to evolve to support new forms of human-AI collaboration that we cannot yet fully envision.
This scaling presents both opportunities and challenges:
The future of AI design isn’t just about making systems more powerful—it’s about ensuring they serve human needs. Social AI requires careful design, balancing emergence with alignment, autonomy with oversight, and intelligence with intuition.
By prioritizing context awareness, trust development, and workflow integration, we can create agent systems that don’t just function efficiently—but empower human decision-making, collaboration, and creativity.
As AI advances, so must its design. This isn’t just a technical challenge—it’s a philosophical one: will AI amplify human intelligence or diminish it? The answer lies in the choices we make today.