AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week

* Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
An AI That Knows Absolutely Everything, Co-Agency and the Paradox of Autonomy, Google DeepMind on AI Assistants, State of AI Report, Omri Allouche from Gong, and more.
An AI “that knows absolutely everything about (your) life” is “what you really want,” according to OpenAI’s Sam Altman. And he wants to keep all of that knowledge on his servers in the cloud.
Is this possible? Do we want it?
Let’s start with the possible. The quick answer is no. Sam talks about an AI that knows “every email, every conversation (you’ve) ever had.” Theoretically, this is possible only if we embrace a surveillance world in which even the conversations Helen and I had in bed last night, or while getting dressed this morning, were captured by OpenAI. These may seem like edge cases, but both of those conversations framed what is important to me today. If Sam is going to create a “super-competent colleague” for me, it will need to know these things. And, in case it isn’t obvious, I don’t want Sam’s AI in our bedroom.
Beyond the challenge of capturing everything I contribute to the world through typing and speaking, an AI that knows “everything” would need to share my experience of the world. It would need to share my experience of space, time, and emotion, each of which is intuitive and individual. Yes, each can be measured in some way, but I agree with Immanuel Kant, who argued that space and time are intuitive lenses through which we experience the world. Certain distances feel farther than they used to. And time seems to accelerate as my own time shortens. These create new and unique emotional reactions to the world around me, reactions that I mostly don’t vocalize and that an AI couldn’t know.
Accepting these limitations, might we even want Sam’s AI that attempts to know “everything” about us? Personally, I don’t. I already know everything I know; I don’t want a digital twin that replicates my own understanding. In contrast, I’d like an AI that brings something new to my experience of the world. As Michael Levin asks in the article referenced below, how might an AI “elevate” my experience of life? I already surround myself with a complex system that elevates my experience of life. That system is not just anthropocentric or even ecocentric; it is holistic, including art and music and machines. So why not AI machines as well?
How an AI might elevate my experience of life is central to our philosophy of a Mind for our Minds. We describe it as a dream because the tech titans are more motivated to elevate their profits than our experiences.
But I still dream.
I dream that instead of attempting to know everything about my life, someone creates an AI that can tell me everything I want to know about life around me. I dream that instead of attempting to capture every experience I have, someone creates an AI that can help me experience the world in ways that I can’t today. And I dream that instead of attempting to create a grand AI for everyone in the cloud, someone creates an AI that is private and just for me.
I dream of a Mind for My Mind that elevates my individual, quirky, messy, wandering, complex life.
When Science Meets Power: What Scientists Need to Know about Government and What Governments Need to Know about Science
by Geoff Mulgan
Geoff Mulgan’s When Science Meets Power gave me a different perspective on the relationship between scientific innovation and political authority.
Mulgan, a seasoned expert in public policy and former CEO of Nesta, describes the complex dynamics that arise when the realms of science and government collide. His analysis is particularly relevant in the context of AI, where advancements have many implications for governance, public policy, and democratic processes.
This is the third book by Geoff Mulgan that I've read, following Big Mind, which explores collective intelligence, and Prophets at a Tangent, which examines how art fosters social imagination. It seems to represent the culmination of his exploration into society as a complex, collective system. Mulgan has a knack for distilling complex ideas into memorable sound bites. For instance, he discusses the challenge of reconciling scientific "fact" with public acceptance of these facts, stating: "Although science can map and measure, it has no tools for calibrating." This phrase resonates with me as it succinctly captures the idea that the broader collective—whether in society, an organization, or a family—ultimately determines the placement and weight of scientific knowledge within its cultural context.
The COVID-19 pandemic illustrated this dynamic vividly, showing how different countries interpreted and acted upon the same scientific facts in varied ways. While science provided data on excess deaths and insights into the effects of isolation and of disrupting children’s education, it fell to politics to navigate the associated trade-offs and risks. This serves as a reminder of the “muddled and muddied” relationship between science and politics.
My favorite section of the book is the concluding chapters, where Mulgan discusses science, synthesis, and metacognition. He emphasizes that all complex issues fundamentally require synthesis, illustrates the difficulty of that process, and highlights a common epistemological mistake: misinterpreting the relationship between knowledge and action. Mulgan argues that possessing knowledge does not directly translate into specific actions. To show this, he identifies 16 types of knowledge that could influence a decision-making process, including statistical, policy, scientific, economic, implementation, futures, and lived experience. Next time you’re trying to synthesize something, try compiling such a comprehensive list. I’d be surprised if it doesn’t sharpen your perspective.
As someone who often leans towards the “follow the science” approach, one takeaway from Mulgan’s book for me was a reminder that science needs humility about the state of its own knowledge. He reminds us that science alone cannot answer all of our significant questions because humans inherently seek meaning. This philosophical perspective is often at odds with scientific perspectives that might illustrate the cosmic irrelevance of humans, challenging the notion that science can be the sole arbiter of truth in our quest for understanding and significance.
I find myself eager to pose a question to Mulgan: as machines develop knowledge from complex, high-dimensional correlations that extend beyond our human capacity to conceptualize, what role will scientists play in attributing significance and meaning to these findings? This question gets at a critical issue that remains largely unaddressed in the evolving landscape of AI: a future in which the integration of machine intelligence into our discovery processes challenges the traditional roles of scientists.