The Artificiality Imagining Summit 2024 gathered an (oversold!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
AI represents a seismic shift, like discovering virgin economic territory with enormous possibility. Unfortunately, history shows we often struggle to harness such innovations quickly. Why? Because we fail to reimagine our operating models and embrace complex change.
AI has been compared with electricity for years, and for good reason. Early factories merely swapped in electric motors without changing layouts or workflows. Productivity crawled. Real gains came when pioneers like Henry Ford rebuilt processes around the new technology. Once people caught on to this change, it wasn't just the work process that changed; it was the entire workplace. New factory layouts boosted productivity even further. Only then was work revolutionized.
AI has reached a similar threshold today. Its potential is vast, but many applications remain incremental, simply grafting AI onto outdated frameworks. Take the use of chatbots in customer service, the ultimate "low-hanging fruit". In many cases, these AI systems are integrated into the existing customer service framework without significant alterations to the overall process. They are often designed to mimic the way a human would handle a query, following a fixed script or set of rules, as in the sketch below. Is it cheaper and more efficient? Sure, to a point. Is it transformative, reimagining the whole concept of what it takes to satisfy a customer in the digital age? As anyone who has sat on hold with an airline will know, no, it's not.
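To see how literal that fixed script can be, here is a minimal, hypothetical Python sketch; every name and canned reply is invented for illustration, not taken from any real system:

```python
# A "grafted-on" support bot: the AI fills the slot a human agent used
# to occupy, inside an otherwise unchanged escalation flow.

SCRIPT = {
    "refund": "Please allow 5-7 business days for your refund.",
    "delay": "We apologize. Please check your flight status in the app.",
}

def handle_query(query: str) -> str:
    # The same fixed decision tree a human agent followed, now keyword-matched.
    for keyword, reply in SCRIPT.items():
        if keyword in query.lower():
            return reply
    return "Transferring you to an agent. Current wait time: 90 minutes."

print(handle_query("Where is my refund?"))  # scripted answer
print(handle_query("My bag is lost"))       # back on hold
```

The process around the bot, the queue, the script, the escalation path, is untouched; only the labor inside one box got cheaper.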
So the mindset must move from using AI to automate tasks to using AI to reinvent work. Make no mistake, this is hard. By definition, it is far easier to imagine automating the work that exists than to imagine work that does not yet exist. The transition requires a cultural and operational metamorphosis.
This is why we embrace complexity. We think about designing incentives, understanding mechanisms of reinforcing feedback, zooming in and zooming out so that it’s possible to account for multiple levels and scales, understanding self-organization and distributed control, and planning for emergence and adaptation.
Nobel Laureate Daniel Kahneman described organizations as factories that manufacture judgments and decisions. What we find continually fascinating about decision-making in organizations is how difficult the decision-making process is to automate. Why is this the case? Perhaps the key reason is that organizations are made up of people with pluralistic and competing goals.
Most decision-makers have tens, if not hundreds, of variables in play in their minds at any given time. When humans make decisions, their predictions about the effects of a decision, the inherent forecast of the future, are coupled with judgment. When AI makes the prediction and the human makes the judgment, this cognition is decoupled.
Decisions are the primary building blocks of systems, according to Agrawal, Gans, and Goldfarb. Before AI, decisions coupled prediction and judgment in the mind of the decision-maker. Traditional AI decoupled these, leaving the human to apply judgment to the payoff (or downside) of a decision. Generative AI goes further by creating more points of both judgment and prediction. A user of an LLM, for example, applies judgment up front because the nature of their intent and their skill in prompting the LLM sets the stage for the space of possibilities that the LLM opens up. The user then has to judge the output and iterate as needed.
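A toy Python sketch makes the composition concrete; it is entirely our own construction, not Agrawal, Gans, and Goldfarb's formalism, and every name and threshold in it is an illustrative assumption. A decision is a prediction function plus a judgment function, and AI changes who supplies the prediction:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    predict: Callable[[dict], float]  # forecast: what is likely to happen?
    judge: Callable[[float], str]     # valuation: what should we do about it?

def human_gut_forecast(facts: dict) -> float:
    return 0.4  # stand-in for an expert's intuition

def model_forecast(facts: dict) -> float:
    return 0.72  # stand-in for a machine-learned prediction

def risk_judgment(p: float) -> str:
    return "proceed" if p > 0.6 else "hold"

# Pre-AI: prediction and judgment live in one head, coupled.
coupled = Decision(predict=human_gut_forecast, judge=risk_judgment)

# Traditional AI: the model predicts, the human keeps judgment, decoupled.
decoupled = Decision(predict=model_forecast, judge=risk_judgment)

for d in (coupled, decoupled):
    print(d.judge(d.predict({"demand": "rising"})))  # "hold", then "proceed"
```

Generative AI would add judgment at both ends of this structure: the prompt frames the space of outputs before any prediction is made, and the user judges and iterates on what comes back.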
When to disconnect from the AI is also a judgment call. Less experienced users gain initial productivity boosts and proficiency gains, but they then run a higher risk of deskilling and add complexity for their more experienced teammates. This hints at a point of leverage: put novices adjacent to experts. Raise the learning cadence between novices, expert AI knowledge, and experts so that it takes less time to become an expert. Scaffold expertise by producing more "product", that is, making more decisions, while adding more QA via AI.
Because decisions act as the building blocks of core organizational systems, rethinking decisions allows for rethinking the system. But this is a conceptual framework, not a practical how-to. What does it mean to rethink a complex system based on the decisions that need to be made and the inputs to them? No clue. Turning this idea into practical reality seems to us to be a new area of research.
Large Language Models have broken up the human cognitive chain: from information gathering to prediction, deliberation, judgment, and action. In human decision-making, these aspects are entangled within our minds. Memory and bias play a key role too. AI separates information and prediction from deliberation and judgment.
This decoupling matters. Cheaper information and cheaper predictions make judgment and action more valuable. The advent of agentic AI could integrate all of these steps, potentially bypassing human involvement in certain decision chains. That is, indeed, the goal: automate human decisions so that humans get to take more actions. Do more with less. Adapt and evolve.
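One way to picture this chain, and what agentic AI proposes to re-integrate, is as an explicit pipeline with a pluggable judgment step. This is a minimal sketch under our own assumptions; every stage is a stub standing in for retrieval, a model, option generation, human choice, and execution:

```python
from typing import Callable

# The cognitive chain made explicit: gather -> predict -> deliberate ->
# judge -> act.

def gather(context: str) -> dict:
    return {"signal": len(context)}        # stand-in for information gathering

def predict(info: dict) -> float:
    return min(info["signal"] / 100, 1.0)  # stand-in for a model's forecast

def deliberate(p: float) -> list[str]:
    return ["approve", "escalate"] if p > 0.5 else ["decline"]

def human_judge(options: list[str]) -> str:
    return options[0]                      # a person chooses; stubbed here

def act(choice: str) -> None:
    print(f"executing: {choice}")

def run(context: str, judge: Callable[[list[str]], str] = human_judge) -> None:
    act(judge(deliberate(predict(gather(context)))))

# Today: a human sits at the judgment step.
run("sixty pages of customer history, contracts, and usage logs" * 2)

# Agentic AI: an automated policy occupies the judgment slot, closing the
# loop end to end with no human in the chain.
run("sixty pages of customer history, contracts, and usage logs" * 2,
    judge=lambda options: options[0])
```

Nothing about the chain itself changes; what changes is what occupies the judgment slot.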
“The opportunity for new system design is so great because AI creates new opportunities right down at the most fundamental level: decision composition.”—Agrawal, Gans, and Goldfarb.
Knowledge work may splinter into cognitive manual labor and judgment work, where the former is surveilled and automated and the latter is the place of flourishing and innovation.
Which side of that line do you want to be on?
💡 Interested in talking more about AI and complex change? Set up time for a chat with us here.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.