AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week * Our Gathering: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon. * This Week: The Intimacy Surface, How to Use Generative AI, Part 9: Reflect, John Havens: Heartificial Intelligence, Improve Your Prompts with Many-Shot In-Context Learning, The Imagining Summit Preview: Jamer Hunt, Helen's Book of the Week, and Facts & Figures about AI & Complex Change.
Generative AI has hit an interesting moment. It's hard to dismiss the power of these new systems. But plenty of people wonder, "Is this it?" Yes, these systems can perform impressive tasks, but they are also leagues away from their promised future usefulness.
The dominant narrative from Silicon Valley is that more compute and more data will scale performance to reach the elusive, fantastical state of AGI that will be able to do everything for everyone. Secondary narratives include new model structures or training techniques that improve upon the current architectures, processes, and technologies. The problem with all of these narratives is that they miss the point of what is required to deliver what a user truly needs—systems with a deep and intimate understanding of them.
Our view is that the future of AI—and technology more broadly—is the Intimacy Economy. As we have had to pay with our attention to make the most of the Attention Economy, we will need to pay with our intimate information to make the most of the Intimacy Economy. Why? Because the more you tell an AI system about the context of you, the better it will be able to interpret the intent of your interaction and help you in your current workflow. This deep understanding of each of us by machines—and the required trust in those machines—will be the foundation of the Intimacy Economy.
This is where Silicon Valley's dominant path of bigger, global models falls short. More world data won't necessarily make AI systems work better for me. AI systems need to know more about me to work better for me. This shouldn't be much of a surprise. A human assistant can only assist me if they understand the nuances of my priorities, needs, and desires. A human teacher can only help me learn if they understand my current knowledge and my learning goals. We understand these human requirements intuitively. But how can we think about applying them to machines?
We think of this as the Intimacy Surface. We use the word "surface" in this context to mean a dynamic, multidimensional interface between humans and machines. It's not necessarily limited to the physical or digital interface, but a conceptual space that includes all of the contact between humans and AI. It's a malleable, responsive surface that can shift, allowing the user to choose the level of intimacy based on the nature of the interaction. The Intimacy Surface adapts to the user's level of trust and willingness to disclose as well as their needs and desires in context. This surface is highly sensitive to emotional resonance, contextual understanding, and mutual trust, allowing for deeper engagement, not just functional interaction. It is characterized by its ability to facilitate increasingly meaningful, personalized, and impactful exchanges.
The Intimacy Surface is composed of five key dimensions:
I wouldn't be surprised if you find my "imagines" far-fetched. You might think that these imagines are incredibly difficult to deliver. Perhaps you think they will forever be in the land of science fiction or dreamlandia, and that might be true. But the purpose of imagining like this is to envision a future we might want, so that we can help direct how technology is developed. Because if we can imagine it, we have a shot at creating it. In fact, Helen goes as far as saying that, given this technology is coming at us whether we like it or not, the only responsible thing to do is start some hardcore imagining!
The stakes of getting this right are enormous. A well-designed Intimacy Surface has the potential to be truly helpful and useful. Imagine an AI therapist who never tires, never judges, and remembers every detail of your life story. Or a digital tutor that can understand your interests to craft a learning journey that helps you explore the world. These AI systems could provide personalized support in ways we can scarcely imagine.
But the risks are equally significant. A world where AI knows us better than we know ourselves is a world vulnerable to unprecedented levels of manipulation and control. Whoever controls the most intimate data about the most people would wield extraordinary power. As George Orwell warned, "Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing." Are we inadvertently creating the tools for such power, handing our intimacy to whoever promises enhanced productivity?
I believe we would do well to remember the words of Martin Heidegger, who warned against the "enframing" power of technology—its tendency to reduce everything, including humans, to resources to be optimized and exploited. The Intimacy Surface offers us unprecedented tools for self-knowledge and growth, but we must ensure that in our quest for digital intimacy, we don't lose touch with the very thing that makes us human: our capacity for authentic choice and meaning-making.
In October, we will undertake a bold idea—imagining a hopeful future with AI. But who is that hopeful future for? Individuals? Organizations? Society? The planet?
Our opening discussion with Jamer Hunt will help answer these questions. Jamer, author of Not to Scale and inspired by the Eameses' film Powers of Ten, will catalyze our opening discussion on the concept of scale. This session will delve into how different scales—whether individual, organizational, community, societal, or even temporal—shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer will guide us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session promises to set the stage for a thought-provoking conference.
Check out the agenda for The Imagining Summit and send us a message if you would like to join us. We're excited to meet our Artificiality readers in person!
As a preview of Jamer's ideas on the power of scale, listen to our podcast with him.
"Heartificial Intelligence: Embracing Our Humanity to Maximize Machines" by John C. Havens
Despite being published in 2016, this book remains an essential read. As AI shapes our collective future, "Heartificial Intelligence" offers a comprehensive understanding of the ethical challenges and responsibilities we face. Havens emphasizes the importance of incorporating ethical considerations into AI, a topic that has only grown in significance. He uses fictional vignettes to explore potential future scenarios involving AI, prompting readers to consider their responses to complex ethical dilemmas. These scenarios remain relevant because they provoke critical thinking about the trajectory of AI, and I, for one, appreciate the foundations he presents on important topics such as privacy, data security, and the emotional impact of AI decisions.
For policymakers, developers, and technologists, Havens’ insights provide guidance on creating AI that aligns with human values and societal needs. The book encourages us all to reflect on our values and how we can influence the direction of AI, promoting a proactive approach to technology adoption.
Makes me want him to write a sequel and perhaps title it "Heartificiality."
The Artificiality Weekend Briefing: About AI, Not Written by AI