The Intimacy Surface

The Intimacy Surface, How to Use Generative AI, Part 9: Reflect, John Havens: Heartificial Intelligence, Improve Your Prompts with Many-Shot In-Context Learning, The Imagining Summit Preview: Jamer Hunt, Helen's Book of the Week, and Facts & Figures about AI & Complex Change.

The Intimacy Surface

Generative AI has hit an interesting moment. It's hard to dismiss the power of these new systems. But plenty of people wonder, "Is this it?" Yes, these systems can perform impressive tasks, but they are also leagues away from their promised future usefulness.

The dominant narrative from Silicon Valley is that more compute and more data will scale performance to reach the elusive, fantastical state of AGI that will be able to do everything for everyone. Secondary narratives include new model structures or training techniques that improve upon the current architectures, processes, and technologies. The problem with all of these narratives is that they miss the point of what is required to deliver what a user truly needs—systems with a deep and intimate understanding of them.

Our view is that the future of AI—and technology more broadly—is the Intimacy Economy. As we have had to pay with our attention to make the most of the Attention Economy, we will need to pay with our intimate information to make the most of the Intimacy Economy. Why? Because the more you tell an AI system about the context of you, the better it will be able to interpret the intent of your interaction and help you in your current workflow. This deep understanding of each of us by machines—and the required trust in those machines—will be the foundation of the Intimacy Economy.

This is where Silicon Valley's dominant path of bigger, global models falls short. More world data won't necessarily make AI systems work better for me. AI systems need to know more about me to work better for me. This shouldn't be much of a surprise. A human assistant can only assist me if they understand the nuances of my priorities, needs, and desires. A human teacher can only help me learn if they understand my current knowledge and my learning goals. We understand these human requirements intuitively. But how can we think about applying them to machines?

We think of this as the Intimacy Surface. We use the word "surface" in this context to mean a dynamic, multidimensional interface between humans and machines. It's not necessarily limited to the physical or digital interface, but a conceptual space that includes all of the contact between humans and AI. It's a malleable, responsive surface that can shift, allowing the user to choose the level of intimacy based on the nature of the interaction. The Intimacy Surface adapts to the user's level of trust and willingness to disclose as well as their needs and desires in context. This surface is highly sensitive to emotional resonance, contextual understanding, and mutual trust, allowing for deeper engagement, not just functional interaction. It is characterized by its ability to facilitate increasingly meaningful, personalized, and impactful exchanges.

The Intimacy Surface is composed of five key dimensions:

  • Connection: The ability of the AI system to engage with users in a natural, effortless manner, understanding and responding to emotions and contextual cues seamlessly. Imagine AI systems that can anticipate needs before they're even expressed, providing support that feels both timely and natural.
  • Metacognition: The ability of an AI system to enhance our metacognition—our ability to think about our own thinking. This involves AI systems helping us to assess our knowledge state, identify gaps, and understand our own cognitive processes. Imagine an AI system that can remind you of your previous reactions in similar situations or help prepare you for upcoming challenges based on past behavior.
  • Mindfulness: The AI system's capacity to promote and sustain user awareness of their emotions, the people around them, and their environment. The aim here is to increase a user's mindfulness and intentionality. Imagine an AI that could suggest activities based on what it understands about your current mood and interests, enhancing your engagement with the world around you.
  • Meaningfulness: The AI system's ability to facilitate personal growth, creativity, and self-actualization, helping users realize their potential and align with their deeper purpose. The AI system can help balance various tensions: long versus short term goals, for instance. Imagine an AI system that can help us discover emergent values as human beings seeking authenticity and purpose in our lives.
  • Trust: The capacity of AI systems to build, maintain, and deepen a sense of trust with users through reliable performance, appropriate vulnerability, and mutual growth. Imagine AI systems that are trusted partners and are transparent about their capabilities and limitations.

I wouldn't be surprised if you find my "imagines" far-fetched. You might think they are incredibly difficult to deliver, or that they will forever remain in the land of science fiction or dreamlandia. That might be true. But the purpose of imagining like this is to envision something we actually want, so that we can help direct how the technology is developed. Because if we can imagine it, we have a shot at creating it. In fact, Helen goes as far as saying that, since this technology is coming at us whether we like it or not, the only responsible thing to do is start some hardcore imagining!

The stakes of getting this right are enormous. A well-designed Intimacy Surface has the potential to be truly helpful and useful. Imagine an AI therapist who never tires, never judges, and remembers every detail of your life story. Or a digital tutor that can understand your interests to craft a learning journey that helps you explore the world. These AI systems could provide personalized support in ways we can scarcely imagine.

But the risks are equally significant. A world where AI knows us better than we know ourselves is a world vulnerable to unprecedented levels of manipulation and control. Whoever controls the most intimate data about the most people would wield extraordinary power. As George Orwell warned, "Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing." Are we inadvertently creating the tools for such power, handing our intimacy to whoever promises enhanced productivity?

I believe we would do well to remember the words of Martin Heidegger, who warned against the "enframing" power of technology—its tendency to reduce everything, including humans, to resources to be optimized and exploited. The Intimacy Surface offers us unprecedented tools for self-knowledge and growth, but we must ensure that in our quest for digital intimacy, we don't lose touch with the very thing that makes us human: our capacity for authentic choice and meaning-making.


This Week from Artificiality

  • Toolkit: How to Use Generative AI, Part 9: Reflect. When the pace of technological advancement and information overload challenge our cognitive capacities, we have to develop our metacognitive skills. The art of reflection has become an invaluable tool in sharpening our cognitive skills and enhancing self-awareness. This process of introspection with AI offers the opportunity to refine our thinking, challenge our assumptions, and give us a deeper understanding of the state of our own knowledge. Learn strategies for critical thinking, confidence calibration, and effective communication. Discover how to leverage AI for personal growth, from avoiding confirmation bias to planning inclusive events. This post offers practical prompts and techniques to elevate your metacognitive abilities in our fast-paced, technology-driven world.
  • Conversations: John Havens: Heartificial Intelligence. We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing. In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values"—our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us. Through an interplay of fictional vignettes and real-world examples, the book illuminates the fundamental interplay between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems. In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence. 
  • The Science: Improve Your Prompts with Many-Shot In-Context Learning. Discover how to leverage Many-Shot In-Context Learning (ICL) to dramatically improve your AI interactions. This post discusses Google DeepMind's research on enhancing large language models' performance through extensive example-based learning. Learn five key strategies to optimize your prompts, including increasing example quantity, utilizing domain-specific inputs, and incorporating model-generated rationales. Understand the potential and limitations of Many-Shot ICL in everyday AI use, and gain practical insights to boost your AI prompt engineering skills.
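To make the Many-Shot ICL idea concrete, here is a minimal, hypothetical sketch of what a many-shot prompt builder might look like. The `build_many_shot_prompt` helper, the sentiment-classification task, and the example reviews are all illustrative assumptions, not drawn from DeepMind's research; in practice you would pack in tens or hundreds of domain-specific examples rather than the handful shown here.

```python
# Illustrative sketch of Many-Shot In-Context Learning (ICL):
# instead of one or two examples, the prompt packs in many
# input/output pairs before the final query. The helper and
# the sample data below are hypothetical.

def build_many_shot_prompt(examples, query,
                           instruction="Classify the sentiment as positive or negative."):
    """Assemble a many-shot prompt from (input, label) example pairs."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{instruction}\n\n{shots}\n\nReview: {query}\nSentiment:"

# A handful of examples for brevity; many-shot ICL would use far more.
examples = [
    ("The plot dragged and the acting was wooden.", "negative"),
    ("A joyful, beautifully shot film.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("An instant classic I will rewatch for years.", "positive"),
]

prompt = build_many_shot_prompt(examples, "Surprisingly moving from start to finish.")
print(prompt)
```

The resulting string would be sent to a large language model as a single prompt; the strategies from the post (more examples, domain-specific inputs, model-generated rationales) all amount to enriching the `examples` list.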

The Imagining Summit Preview: Jamer Hunt

In October, we will undertake a bold idea—imagining a hopeful future with AI. But who is that hopeful future for? Individuals? Organizations? Society? The planet?

Our opening discussion with Jamer Hunt will help answer these questions. Jamer, author of Not to Scale and inspired by the Eameses' film Powers of Ten, will catalyze our opening discussion on the concept of scale. This session will delve into how different scales, whether individual, organizational, community, societal, or even temporal, shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer will guide us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session promises to set the stage for a thought-provoking conference.

Check out the agenda for The Imagining Summit and send us a message if you would like to join us. We're excited to meet our Artificiality readers in person!

As a preview of Jamer's ideas on the power of scale, listen to our podcast with him.

💡
The Imagining Summit will be held on October 12-14, 2024 in Bend, Oregon. Dedicated to imagining a hopeful future with AI, The Imagining Summit gathers a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things. Due to limited space, The Imagining Summit will be an invite-only event. Follow the link and request an invite to be a part of this exciting event!

Helen's Book of the Week

"Heartificial Intelligence: Embracing Our Humanity to Maximize Machines" by John C. Havens

Despite being published in 2016, this book remains an essential read. As AI shapes our collective future, "Heartificial Intelligence" offers a comprehensive understanding of the ethical challenges and responsibilities we face. Havens emphasizes the importance of incorporating ethical considerations into AI, a topic that has only grown in significance. He uses fictional vignettes to explore potential future scenarios involving AI, prompting readers to consider their responses to complex ethical dilemmas. These scenarios remain relevant because they provoke critical thinking about the trajectory of AI, and I, for one, appreciate the foundations he presents on important topics such as privacy, data security, and the emotional impact of AI decisions.

For policymakers, developers, and technologists, Havens’ insights provide guidance on creating AI that aligns with human values and societal needs. The book encourages us all to reflect on our values and how we can influence the direction of AI, promoting a proactive approach to technology adoption.

Makes me want him to write a sequel and perhaps title it "Heartificiality."


Facts & Figures about AI & Complex Change

  • $1 trillion: The forecasted capex spend on AI technology (Goldman Sachs)
  • 0.5%: MIT professor Daron Acemoglu’s forecasted increase in US productivity from AI in the next decade (Goldman Sachs)
  • 9%: Goldman Sachs senior global economist Joseph Briggs’ forecasted increase in US productivity from AI in the next decade (Goldman Sachs)
  • 8%: Percentage increase in the novelty of stories experienced by writers using generative AI (Science)
  • 9%: Percentage increase in the usefulness of stories experienced by writers using generative AI (Science)
  • #1: The rank of "job security" among workplace priorities in 2023, up from #8 in 2014 (BCG)
  • 70%: Percentage of people who believe their jobs will change as a result of generative AI (BCG)
  • 25%: Percentage of people who believe their job will not be affected as a result of generative AI (BCG)
  • 5%: Percentage of people who believe their job will no longer exist as a result of generative AI (BCG)
  • 33%: Percentage of people who use generative AI regularly for work (BCG)
  • 35%: Percentage of people who use generative AI regularly for personal use (BCG)
  • 57%: Percentage of people who are open to reskilling because of generative AI (BCG)
  • 35%: Percentage of people who are open to reskilling because of generative AI, but only if they face serious difficulties (BCG)
  • 8%: Percentage of people who are not open to reskilling because of generative AI (BCG)
