Learning, the Intimacy Economy, and the Future of Personhood

This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.



Our Research: Learning in the Intimacy Economy

The attention economy, in which companies monetize users' time and attention on digital platforms, has become a familiar concept. AI algorithms curate personalized content feeds and power recommendation systems, keeping users engaged based on their online behavior and inferred preferences.

This model of personalization is limited by the data we consciously provide, yet also relies on inferences drawn from information we might not even realize we are sharing. At times, the “people like you” style inferences accurately predict preferences by drawing on patterns of behavior observed in similar users. But often these inferences reveal how little current technology truly understands us, as we expose only a fraction of ourselves online. That is all about to change. 

Conversational, generative AI changes our relationship with technology. By enabling genuine interaction, it lets us converse, learn, and co-create with AI. It can help us solve problems, even our most personal ones, if we tell it enough about ourselves.

This shift opens the door to what we call the Intimacy Economy. In this new paradigm, humans trade intimate information for enhanced functionality: the more context you give an AI system about yourself, the better it can interpret the intent of your interaction and help you in your current workflow. This exchange, and the deep understanding and trust it requires, is the foundation of the Intimacy Economy, and it hinges on evolving concepts of trust and privacy.

Intimacy will center around our needs, goals, motivations, and desires, both explicit and implicit. As AI evolves to perform tasks and exhibit traits once considered "uniquely human," such as understanding complex emotions, creating art, and making decisions, it will reshape our interaction with technology. For instance, imagine an AI assistant that schedules your day. It starts by considering your explicit inputs—your work meetings, deadlines, and personal appointments. But over time, it begins to recognize implicit patterns, like when you're most productive, how you feel after long meetings, or when you tend to skip tasks. Eventually, the AI suggests not only when to schedule meetings but also recommends taking a break before difficult tasks, or even rearranging your schedule based on your mood and energy levels. As it becomes more attuned to your needs, the AI transitions from simply being a tool to feeling like a deeply personalized assistant, influencing how you structure your day in ways that blur the line between functionality and intimacy.

Crucially, machine intelligence is also transforming our understanding of learning itself. AI-enabled learning technologies not only create educational tools but also provide insights into human cognition and intelligence. This symbiotic relationship between machine and human learning is particularly significant in the context of the emerging Intimacy Economy, where users expect deeply personalized experiences. The success of AI in enhancing and personalizing learning over time could be the proving ground for the Intimacy Economy's broader applications.

For instance, at the simplest level, education-oriented AI will automate tasks like grading quizzes or offering personalized practice exercises based on a student's performance. As it expands, AI-powered platforms may provide adaptive learning experiences, offering real-time feedback and tailoring lessons to individual learning styles. Looking further ahead, AI might evolve into a lifelong learning coach—tracking progress, suggesting new areas of study, providing mentorship, and offering personalized guidance throughout a person’s education and career.

However, as AI becomes more useful, the trade-off between intimacy and functionality becomes more problematic. To be effective, a lifelong learning assistant would need to deeply understand not just a student’s academic performance, but also their personal goals, habits, and emotional states. The more data it collects to enhance its usefulness, the more it challenges our concepts of privacy, raising concerns about how much personal information we're willing to share in exchange for such a tailored experience. For many users, who programs and controls the AI may shape how willing they are to trust it.

But what will it mean to learn in the Intimacy Economy? To answer this, we need to consider three angles: the nature of intimacy with machines, how this might change learning, and the emerging needs and preferences of a new generation.

Read more in our full report on Learning in the Intimacy Economy.


Conversations: James Boyle and The Line: AI and the Future of Personhood

We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood.

Listen on Apple, Spotify, and YouTube.

In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality.

Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness.

What's particularly compelling about Jamie’s approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights.

The Line is Helen's Book of the Week and, as she says in her review, there are some interesting surprises—namely, you might assume that the legal concept of corporations as persons was carefully thought out, but you’d be wrong.

Instead, the idea that corporations are “persons” under the law wasn’t a carefully planned decision, but rather a messy accident—a result of legal shortcuts and historical quirks. The Citizens United decision, though widely condemned as a radical expansion of rights, is just another chapter in a 135-year saga where corporate rights have been inconsistently justified and repeatedly contested. We don’t have a coherent philosophy or solid legal grounding for why corporations have ended up with these rights. This should make us wary when thinking about extending personhood to AI. If corporate personhood happened almost by mistake, the stakes are even higher when deciding if and how artificial beings fit into our constitutional framework. The message is stark—we can’t afford to be naive about how our current standards came to be shaped by history.

This book is fun to read: Jamie has a distinctive style and humor that earns many laughs along the way. He has a talent for crafting catchy phrases, like "sentences don't imply sentience" when talking about LLMs. We catch ourselves saying it as if it were our own!

The idea that resonates most was realizing how our definitions of who qualifies as "human" hinge on an unstable balance between empathy and pragmatism. This suggests that our classifications will always be influenced by our subjective experiences—whether something feels repulsive or evokes empathy, or simply because it’s more convenient to categorize it based on legal or practical needs, like whether we want to be able to sue it.

We believe this book is both ahead of its time and right on time. How? Because it sharpens our understanding of concepts that are difficult to grasp—that the boundaries between the organic and synthetic are blurring, and this shift will create profound existential challenges that we need to start preparing for now.

We'll leave you with our favorite quote from The Line. It captures the enormity of the transformation we might face when confronting synthetic beings, framing it as a profound existential moment, a challenge that is both urgent and transformative:

"Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self conception unparalleled since secular philosophers declared that we would have to learn to live with a god shaped hole at the center of the universe." 

The Imagining Summit Preview: Steve Sloman

Steve Sloman: The Community of Machines: Knowledge and AI

Steve Sloman, professor at Brown University and author of The Knowledge Illusion, will catalyze a conversation about how we perceive knowledge in ourselves, in others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve will challenge us to think about the dynamics of knowledge and understanding in an AI-driven world and about the evolving landscape of narratives, and he will ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow?

Check out the agenda for The Imagining Summit and send us a message if you would like to join us. We're excited to meet our Artificiality community in person!

💡
The Imagining Summit will be held on October 12-14, 2024 in Bend, Oregon. Dedicated to imagining a hopeful future with AI, The Imagining Summit will gather a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things. Due to limited space, The Imagining Summit will be an invite-only event. Follow the link and request an invite to be a part of this exciting event!

Where in the World are Helen and Dave?

Select upcoming events where we'll be—join us!

  • The Imagining Summit: October 12-14 in Bend, OR. The Imagining Summit will gather a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things.
💡
Interested in us presenting a keynote or workshop for your organization to help you navigate the new worlds of AI and complex change? Set up time for a chat with us here.

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to Artificiality.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.