AI Agents, Mathematics, and Making Sense of Chaos
From Artificiality This Week: Our Artificiality Summit 2025 will be held on October 23-25 in Bend, Oregon.
An interview with Steven Sloman, professor of cognitive, linguistic, and psychological sciences at Brown University, about LLMs and deliberative reasoning.
If you’ve used a large language model, you’ve likely had moments of amazement as the tool instantly produced impressive content drawn from its massive training set. But you’ve likely also had moments of confusion or disillusionment as it returned irrelevant or incorrect responses, displaying a lack of reasoning.
A recent research paper from Meta caught our eye because it proposes a new mechanism called System 2 Attention, which “leverages the ability of LLMs to reason in natural language and follow instructions in order to decide what to attend to.” The name System 2 is derived from the work of Daniel Kahneman, who in his 2011 book Thinking, Fast and Slow differentiated between System 1 thinking as intuitive and near-instantaneous and System 2 thinking as slower and effortful. The Meta paper also references our friend Steven Sloman, who in 1996 made the case for two systems of reasoning—associative and deliberative or rule-based.
Given our interest in the idea of LLMs being able to help people make better decisions—which often requires more deliberative thinking—we asked Steve to come back on the podcast to get his reaction to this research and to generative AI in general. Yet again, we had a dynamic conversation about human cognition and modern AI, what each field is learning from the other, and a few speculations about the future. We’re grateful to Steve for taking the time to talk with us again and hope he’ll join us for a third time when his next book is released sometime in 2024.
Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University where he has taught since 1992. He studies how people think, including how we think as a community, a topic he wrote a fantastic book about with Philip Fernbach called The Knowledge Illusion: Why We Never Think Alone. For more about that work, please check out our first interview with Steve from June of 2021.