Trust: The Foundation of the Intimacy Economy

Trust: The Foundation of the Intimacy Economy, Creativity in the Age of Artificiality, A New Framework for Understanding Emergence in Complex Systems, How to Use Generative AI Part 8, and more!

💡
Before we begin, we will be taking next week off for July 4th. We hope you all have a great holiday weekend!

Memory updated.

When ChatGPT responds with this phrase, you’ve made a trade. You’ve provided it information about yourself and, in return, it will supposedly work better for you. You have traded trust for functionality. But what have you trusted the AI system with? How much better will it work? And do you trust the company behind the AI system enough to trade your intimate details for that functionality?

Some of ChatGPT’s memories of me are facts that I’ve disclosed before: the name of my company, our mission, people I am connected with, activities I pursue. But others I haven’t: my dreams, ideas, concerns, passions. I didn’t ask OpenAI to remember these details. And, now, as I look through the list of ChatGPT’s “memories,” I wonder, do I trust OpenAI with this information? And, if so, what will I get in return?

The usefulness of AI systems across a wide range of use cases improves with context. That means: the more you tell an AI system about the context of you, the better it can interpret the intent of your interaction and help you in your current workflow. Want help with a business problem? The more you tell the AI system about your business, the better it will be able to help you. Want help with a personal challenge? The more you tell the AI system about your personal life, the better it will be able to help you.
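To make the trade concrete, here is a minimal sketch of the mechanism at work, assuming the official OpenAI Python client: remembered details are, in effect, extra context that travels with your request and shapes how the model interprets your intent. The memories list, model name, and prompt wording below are illustrative placeholders, not a description of how ChatGPT's memory feature works under the hood.

```python
# A minimal sketch: "memory" as context injected alongside the user's request.
# Assumes the official OpenAI Python client; the memories list and prompt
# wording are illustrative, not how ChatGPT's memory feature actually works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Facts the user has (knowingly or not) traded for functionality.
memories = [
    "Runs a small research and consulting firm focused on AI.",
    "Is preparing a workshop on human-AI collaboration for executives.",
]

question = "Draft an agenda for next week's client workshop."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The remembered context shapes how the model interprets intent.
        {"role": "system",
         "content": "Known context about the user:\n- " + "\n- ".join(memories)},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The more relevant the remembered context, the more useful the response, which is exactly why the trust question matters: someone has to hold that context.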

This trade of trusting AI systems with intimate information in return for functionality will only become more important as AI systems can manage more information. The question is: how much will people trust AI systems? And, therefore, how useful might AI systems become?

Today, the data says the AI industry has a tall hill to climb: 86% of people in the US say data privacy is a growing concern. 54% of people in the US are concerned about AI companies misusing their personal data. 30% of people in the US aren’t willing to share their personal data for any reason. Even 33% of business leaders in the US say consumers should be concerned about how their personal data is used by their own company. (See the Facts & Figures section below for sources.)

It’s possible that people might change their minds if the value provided in return for data is high enough. But, today, 40% of people in the US don’t trust information provided by AI systems. And, while 76% of people trust technology businesses, only 50% trust AI innovations.

Perhaps worst of all, the AI industry shouldn’t be expecting an embrace from the digital natives of Gen Z. 79% of Gen Zers say it's more important to trust the brands they use today than it was in the past. And only 34% trust large businesses, which most AI companies are or intend to be. And, no, it’s not that Gen Z doesn’t trust anyone—71% of them trust small business.

We’re obsessed with this conundrum: trust is required to make AI systems more useful, yet consumer trust is low and, in some cases, declining. We believe that trust will emerge as the foundation of the Intimacy Economy. And the companies that consumers trust to remember intimate details will have the significant advantage of being able to provide the most useful AI systems.

For more on trust, rewind to our research update on Trust & AI from March 2024.


This Week from Artificiality

  • Our Ideas: Creativity in the Age of Artificiality. Understanding the bow tie mechanism of cognition is a way to conceptualize a positive vision of human-AI collaboration. AI excels at compressing and denoising vast amounts of information—the left side of the bow tie. Humans excel at creative inflation and affordance recognition—the right side. But even this is a simplification for conceptualizing AI's best role. True creative collaboration recognizes the role of memory in creativity, where memory acts as a "scratch pad" for human imagination. The more dynamic your memory, the more creative you are likely to be. Similarly, the more AI can reinflate new knowledge from its vast, interconnected data, the more combinations of ideas it can present us with. We want a big, expansive bow tie, not a thin, narrow one. We think of this as the concept of Artificiality—a fusion of biological, evolved creativity with artificial, machine capabilities.
  • The Science: A New Framework for Understanding Emergence in Complex Systems. Scientists and philosophers have grappled with the concept of emergence in complex systems for a long time. Emergent phenomena are those fascinating instances where the whole becomes greater than the sum of its parts, which gives rise to unexpected behaviors and properties. Recent research has given us some new tools. By bridging concepts from information theory, computational mechanics, and complex systems science, researchers have developed a theoretical framework that, for the first time, provides a formal foundation for understanding and potentially predicting emergent phenomena.
  • Toolkit: How to Use Generative AI, Part 8: Fuse. In today's world of information overload, the ability to synthesize complex and conflicting data into actionable insights is more crucial than ever. This skill is especially vital in fields like data science, business analysis, and policy formulation, where decision-makers must navigate through vast amounts of information to extract relevant insights and make informed decisions. The process of improving synthesis involves several key strategies that help you to handle complexity with more confidence and precision.

The Imagining Summit Preview: Jonathan Coulton

We're obsessed with the question of how AI will impact creativity. We're also obsessed with the music of Jonathan Coulton. So we're thrilled that Jonathan will be joining us at The Imagining Summit to catalyze a conversation on creative synergy.

Join us in Bend on October 12-14 so you, too, can collaborate in Jonathan's session:

Creative Synergy: AI & Songwriting with Jonathan Coulton. It’s going to be the future soon (so we have to start imagining a hopeful one with AI). Prepare for a dive into the world of creativity with singer-songwriter Jonathan Coulton. Jonathan can't stop singing about AI. In this session, he'll riff on his dream AI collaborator for songwriting. Expect a highly interactive discussion (with perhaps some tunes) about the nature of creativity and what artists might want from AI.

As a preview of Jonathan's ideas on creativity, listen to our podcast with him. For more on creativity, rewind to our ideas piece on Creativity & AI, our ideas piece on The Synergy of Human Creativity and AI, our science piece on Is AI Really as Creative as Humans?, and our conversation with Jonathan Feinstein on The Context of Creativity.

💡
The Imagining Summit will be held on October 12-14, 2024 in Bend, Oregon. Dedicated to imagining a hopeful future with AI, The Imagining Summit will gather a creative, diverse group of imaginative thinkers and innovators who share our hope for the future with AI and are crazy enough to think we can collectively change things. Due to limited space, The Imagining Summit will be an invite-only event. Follow the link and request an invite to be a part of this exciting event!

Helen's Book of the Week

The AI-Savvy Leader: 9 Ways to Take Back Control and Make AI Work, by David De Cremer

De Cremer is the dean of the D'Amore-McKim School of Business at Northeastern University and has spent years involved in all things business AI, including guiding the development of EY's AI Lab. So I was interested to read this short, to-the-point book about what it means to lead in this AI age.

Net-net, De Cremer has words for leaders: you aren't doing your job. 87% of AI initiatives fail. Why? "Leaders are not leading," he says. AI adoption and deployment are complex, and so is navigating the narratives around them. Leaders have to act, yet that is hard to do when you don't understand what AI even is.

If you've been involved in AI for a while, much of this book may feel familiar. However, if you are new to leading AI initiatives in business, you’ll find it quite valuable, especially if you are open to developing the right mindset.

What is the "right mindset"? Here, I wholeheartedly agree with De Cremer, whose insights are particularly resonant. I'd like to share a section directly, on the two fundamentally opposing perspectives on how to use AI: do you side with those whose endgame is making AI more and more like the human brain, or are you looking for something else?

As a leader, you have a decision to make. Which of these two perspectives do you adopt when bringing AI into your organization?
Perspective 1: AI is an increasingly cheap way to replace people and achieve new levels of productivity and efficiency.
Perspective 2: It's a powerful tool to augment—but not replace—human intelligence and unlock more innovation and creativity in workers.

Everything tees off from here. As a leader, do you understand:

  • how to frame AI and recognize that your technologists don't?
  • that your employees are your first stakeholders in AI?
  • that there is no efficiency without time for reflection and learning (aka down time)?
  • that real augmentation is hard work but arguably better than biasing too heavily toward automation, which results in de-skilling and diminishes the power of human intelligence?

This book isn't just a clarion call to leaders; it's also for those responsible for fostering human flourishing in organizations, such as learning and development groups. It provides a foundation to build your business case and to articulate the importance of integrating human creativity with machines, emphasizing the vital role of human intelligence in the workplace.


Facts & Figures about AI & Complex Change

  • 76%: Percentage of people who trust technology businesses, while only 50% trust AI innovations (Edelman)
  • 30%: Percentage of people who embrace AI innovation, while 35% reject it (Edelman)
  • 86%: Percentage of people in the US who say data privacy is a growing concern (KPMG)
  • 68%: Percentage of people in the US who are concerned about the level of data being collected by businesses (KPMG)
  • 30%: Percentage of people in the US who aren’t willing to share their personal data for any reason (KPMG)
  • 33%: Percentage of business leaders in the US who say consumers should be concerned about how their personal data is used by their company (KPMG)
  • 62%: Percentage of people in the US who are moderately to very concerned about companies using AI (Qualtrics XM Institute)
  • 54%: Percentage of people in the US who are concerned about AI companies misusing their personal data (Qualtrics XM Institute)
  • 40%: Percentage of people in the US who don’t trust information provided by AI systems (Qualtrics XM Institute)
  • 73%: Percentage of people who are worried that the data brands collect will not benefit them and will instead only be used to help the brand (Adobe)
  • 81%: Percentage of consumers who say that having choices about how companies use their data is important (Adobe)
