Does it matter where our thinking is being done? How should we as individuals think about outsourcing more of our cognition to AI?
Daily ChatGPT users grasp how profoundly this tool has become an extension of thought. Unlike any preceding technology, large language models interface so fluidly with us that they reshape cognition itself. Our capacities to "sense," "think," "learn," "decide," and "act" no longer reside solely within biological minds; they intertwine with this external artifact.
But how do we arrive at insights, judgments, and ideas when we outsource our thinking to ChatGPT? And if we outsource our deliberative thinking—memory, search, calculation—do we outsource more than we know? Do we relinquish more of mind than is good for us?
If you want to know the answer to the question "Where is my mind?", Andy Clark and David Chalmers offer one: it is not confined to your brain or body. Their 1998 "extended mind" thesis argues that minds can sometimes stretch beyond our biological boundaries, inhabiting external objects that support thought and cognition.
The thinking self is not skull-bound. When we use environmental supports like notebooks or smartphones as aids for cognition and memory, those objects become part of the machinery of thought. So while we often envision minds as imprisoned by anatomy, these philosophers propose that cognition can seep outward.
But policing this border between brain-based and extended cognition is difficult. Some technologies make us better thinkers as well as better performers. Others do not, even as they enhance our performance. Santa Fe Institute's David Krakauer categorizes such technologies as cognitive artifacts. They come in two types: complementary and competitive.
Complementary tools, like maps, leave mental traces that endure after the tool is removed; they enrich our abilities. Competitive ones, like GPS, replace cognition wholesale and let innate skills atrophy. If we lose the device we remain lost, having never developed the underlying talent.
Krakauer argues most modern artifacts trend competitive: we outsource remembering and reasoning to avoid that difficult work. And there's a mighty catch: memory is not just a passive storehouse of data. It's an active, dynamic scratch pad, a mental workspace that lets us take our accumulated knowledge and experience and creatively recombine and restructure it. Memory exists not to record the past but to predict the future.
Therefore, the richer and more robust our memory, the more sophisticated and innovative our ideas can become. Outsourcing memory and reasoning might ease immediate cognitive loads, but it potentially diminishes our ability to generate original, insightful ideas, as it deprives us of the intricate mental processes essential for creative thinking.
Ask yourself: Do these tools give more than they take? The answer will depend on how you use them. I know people, myself included, who will consciously choose when to not use navigation apps. Some of the time I want to know I still have a sense of direction.
As companies seek to automate more work, many people won't have that choice. If you're an Uber driver, you face a different choice landscape: Uber wants navigation running non-stop to shave minutes off each ride, and passengers want transparency and predictability, a standard that emerged alongside the technology. The predictability of rideshare apps has utterly spoiled us compared to the unpredictable variables of hailing a cab; it's hard to imagine reverting to a time when each driver's whims determined your route.
Think of generative AI systems as potential turn-by-turn directions for all of your cognition. It doesn't follow that you should obey every instruction. If AI is designed to make life easier, we have to get more skilled at identifying when life should be harder.
How will using AI impact how motivated we are? We know that people who are happy to seek discomfort achieve more: they take more risks, engage more, and open themselves to facing new and uncomfortable information.
Feeling uncomfortable is a sign that you are learning. By actively seeking that uncomfortable feeling, rather than trying to avoid it, you learn more. An extended-mind form of metacognition could speak to you at the optimal moment: "Remember, moderate emotional discomfort is a signal that you're developing as a person." Perhaps a complementary artifact is a metacognitive AI that nudges you to notice that discomfort before you even know you are learning.
Along with learning how to prompt and engage with AI, we also need to learn to be alert to how it changes motivation. You might notice that the process of back-and-forth prompting helps you stay in a problem longer, keeping you curious for longer. Or you may find yourself going in circles, probing the machine in the hope that a magic insight appears. At that point, ask yourself whether you're using AI more as a crutch than a partner, indulging in a kind of AI-induced wishful thinking. A complementary artifact might arise here too: a mind for our minds that shows us when we over-rely on AI.
Today AI remains something external: an extended mind we outsource our thinking to. But we may not be far from a future where the extended mind shifts to be internal. Neural implants are developing rapidly; it's astonishing that machines placed in people's brains can interpret and react to thoughts, and people who are paralyzed or otherwise locked into their minds can now communicate through these devices.
It's still early days, but one can envision a mind for our minds that is embedded in, and integrated with, the wetware of our brains. This raises a myriad of questions and possibilities. Would an internal extended mind serve as a competitive or a complementary artifact to our existing cognitive capabilities? Would we have the agency to determine how it enhances our cognition, defining the boundaries of what we'd be left with if it were ever removed? And would we even possess the ability to discern the extent of its influence on our thought processes?
In the future, as AI advances and becomes more seamlessly integrated with our minds, the boundaries between our biological cognition and artificial enhancements may become increasingly blurred. This raises profound questions about the nature of the self, agency, and identity. Will we still consider ourselves as singular, autonomous entities, or will we embrace a more fluid and distributed notion of the self that encompasses both our biological and artificial cognitive components?
The future of the extended mind is not a fixed destination but an ongoing journey that requires our active participation and reflection as we redefine what it means to be human in an age of artificial intelligence.