The Artificiality Imagining Summit 2024 gathered an (oversold!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
The development of AI has often drawn inspiration from discoveries in neuroscience and cognitive science, creating a synergy between our understanding of biological intelligence and our understanding of artificial intelligence.
Metaphors help elucidate AI concepts but risk oversimplifying the vast complexity of possible minds. Equating brains with AI can narrow our conception of what intelligence can be.
Through the act of engineering AI, we continue to find mirrors reflecting back truths about our own minds and intelligence. But we must tread carefully when extrapolating between carbon and silicon.
The collaboration between neuroscience, cognitive science, and AI promises continued revelations as we create machines that think and act increasingly like ourselves. But we must remain open to intelligence taking wildly diverse forms.
Physicist Richard Feynman famously said, “What I cannot create, I do not understand.” This idea has played an important role at the intersection of the cognitive sciences and AI, two fields with a long history of synergy. As humans make machines that think more like us, we inadvertently find clues that mirror the complexities of our own minds.
Sometimes this process is metaphorical. Researchers on one side may take inspiration from the other. The artificial neural network was inspired by the biological neural networks in our brains. Neurons receive, process, and transmit information. Artificial neural networks adopted this fundamental concept, allowing AI to learn from vast amounts of data. While the simplistic models of artificial neurons are a far cry from the complexity of our brains, with their hormones, neurotransmitters and glial cells, the idea stemmed from our understanding of neurobiology.
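To make the neuron metaphor concrete, here is a minimal sketch in Python of a single artificial neuron: a weighted sum of incoming signals passed through a nonlinearity. The inputs, weights, and bias are arbitrary illustrative numbers, not a model of any biological cell.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weigh the incoming signals, sum them,
    and squash the result with a logistic sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output between 0 and 1, a crude "firing rate"

# Illustrative only: three input signals with hand-picked weights.
print(neuron([0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))
```

Learning, in this picture, is nothing more than adjusting the weights so the network's outputs better match the data.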
Sometimes creativity comes from direct application of an insight. The discovery of dopamine's role in the brain's reward system has had a significant influence on the development of reinforcement learning. Dopamine neurons fire when an unexpected reward is received, which aligns with the "reward prediction error" concept in reinforcement learning algorithms. In essence, these algorithms adjust their predictions based on the difference between the expected and received reward. This is analogous to how dopamine levels fluctuate in our brain. Reinforcement learning has become an important paradigm in machine learning.
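As a toy illustration of the reward-prediction-error idea, the loop below nudges an expected reward toward what was actually received; the error term plays the role the dopamine signal plays in the brain. The trial outcomes and learning rate are made up for the example.

```python
# Toy reward-prediction-error update (a Rescorla-Wagner-style rule).
# All numbers are illustrative.
expected = 0.0
learning_rate = 0.1  # how strongly each surprise shifts the estimate

for received in [1.0, 1.0, 0.0, 1.0, 1.0]:  # hypothetical reward outcomes
    error = received - expected          # the "dopamine-like" prediction error
    expected += learning_rate * error    # nudge the prediction toward reality
    print(f"error={error:+.3f}  expectation={expected:.3f}")
```

When rewards match expectations, the error shrinks toward zero, just as dopamine firing settles once a reward becomes predictable.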
Interdisciplinary collaboration is highly fruitful. We like to follow researchers who are fluent in the dual domains of cognitive science and computer science. Their work enlightens us in a two-for-one kind of way: we see how AI reflects us, and what we want (or not) in AI. The gap between the two is conceptually straightforward, but the technical challenge of closing it is a different story. It's at this nexus that we find intriguing hints at what to look for in research directions.
Metaphors can help us understand complex topics because key concepts “come along for the ride” when we use a good metaphor. But they can also oversimplify and mislead us. Describing brains as “prediction machines” highlights their Bayesian capacities, but it also risks overlooking other evolutionary intricacies: brains are far more than a network for statistical learning. And while patterns in neuroscience do enlighten AI design, equating the two overlooks the vastness of possible minds.
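To unpack the “prediction machine” metaphor a little, here is a one-step Bayesian update in Python, the kind of statistical inference the metaphor attributes to brains. The probabilities are invented for illustration.

```python
# One Bayesian belief update; all probabilities are illustrative.
prior = 0.5                # prior belief that a stimulus is present
p_obs_given_present = 0.8  # chance of this observation if it is present
p_obs_given_absent = 0.3   # chance of this observation if it is absent

# Bayes' rule: posterior = likelihood * prior / evidence
evidence = p_obs_given_present * prior + p_obs_given_absent * (1 - prior)
posterior = p_obs_given_present * prior / evidence
print(f"belief after one observation: {posterior:.3f}")  # about 0.727
```

The metaphor's limit is exactly what the paragraph above flags: a brain doing something like this is not doing only this.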
The stakes have risen significantly. Researchers now seriously talk of conscious machines, even the possibility of building subjective experience into AI: machines that can feel joy but can also suffer. For transhumanists, the key to unlocking a superior consciousness and undying existence lies in merging carbon- and silicon-based intelligences. This is next-level stuff: we're not sure if it is the most genius or the most ghastly idea in AI.
Humans make decisions based on feelings. No matter how rational we might want to think we are, feelings come first. It may well be that consciousness arose as an evolutionary adaptation for making decisions under uncertainty and in ambiguous situations: a shortcut to action. A crafty researcher could reasonably extend the analogy and see consciousness in machines as a shortcut to agency. Why not? It seems plausible.
We're stepping into unknown territory. The collaboration between AI and neuroscience might prove to be part of how we come to understand consciousness. While we have some theories about how biology gives rise to subjective experience, there is no leading one yet. It's possible that the same people currently working on AGI will end up creating artificial consciousness. They don't know enough to know how not to create it.
As Feynman rightly emphasized, the act of creation is, in itself, a profound means of understanding. AI mirrors our intelligence, and we mirror it in turn. We are obsessed with what might next be visible in the mirror's reflection.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.