A Mind for Our Minds

Generative AI changes how we should think about designing machines. How might we design AI to be a mind for our minds?


Until recently, we rarely needed to consider an intelligence beyond the biological intelligences we know. Digital intelligence was complicated but not inscrutable. Algorithms were logical and intelligible, albeit only to experts. Data was intuitive in scale and interpretable, even if it required a statistician to make sense of its meaning.

A computer was a tool—a bicycle for our minds, as Steve Jobs famously said in 1980. Steve’s metaphor came from a Scientific American study that showed a dramatic increase in the efficiency of locomotion when a human rode a bicycle rather than walked. Even in the early days of computing, Steve recognized the powerful boost in efficiency that a computer could give the human mind.

For nearly 40 years, computers have been these efficiency-increasing machines, following the directions of humans, doing exactly—exactly—what a human asked them to. Humans precisely steered their computers to a prescribed outcome. Even if a request was complicated and highly expert, the machine’s output was always reducible to something rule-based, comprehensible, and controllable. Humans changed code and released new versions. Machines did not change their code of their own volition. But now we have artificial intelligence—machines that learn on their own from data whose scale is beyond our comprehension. Machines are no longer controlled solely by humans—they are minds themselves.

In the words of designer Josh Lovejoy, in the past, machines took our rules and created data. But now, machines consume data and create rules. Because modern digital data is beyond what our brains evolved to understand, our intuitions are woefully inadequate in a world of learning machines. Humans evolved in a world where more information made our intuition better. But now, in the age of machines that learn and decide on their own, more information can make our intuitions capricious.

It’s not that we have no intuitions about machines—clearly we do. We have intuitions about whether Siri is correct when it points us to something on the web. We can spot when data seems to contradict our personal experience. We can be curious about a chart that doesn’t look right or an anomaly in the data. Curiosity is a basic driver for humans, so we should expect data to make us curious and invite us to question.

However, here we encounter a pernicious cognitive bias. We can over-rely on information from a machine even when real-world evidence contradicting it is right in front of us. It might be rare that someone dies from following their GPS when they shouldn’t, but there are numerous instances of people driving into lakes and rivers while following GPS navigation systems.

People can feel strong emotional connections to a humanoid robot, or even to a disembodied agent such as ChatGPT. Generative AI’s ability to create human-like text and reality-like images tricks our evolved intuitions, creating stronger bonds than we might expect. Just as we have an intuition about trusting another person, we form intuitions about trust in AI. We call this machine intuition—how good our intuition is for the output of a machine. If the machine were a person, could we, would we, trust it?

Humans use many techniques to outsource our thinking. Our cognition is not trapped inside our own brains. We connect to others, we use tools, and we use our bodies to think. We use data to slow us down and focus our attention. Learning machines can be a mind for our minds. But we must build new intuitions for how this intelligence works with us. We need to have confidence that we aren’t going to be led astray, overuse intuition, indulge in wishful thinking, manipulate, or be manipulated.

Ultimately, we want to preserve the opportunities for our future selves. The datafication of our social world feeds machines that predict our preferences, and in doing so it makes us predictable. If everything is predictable, humans won’t need to work with each other to build a shared vision or tackle the unplanned. If all the answers were in the data, we could abandon human judgment—as flawed as it can be—and let the machines make all the decisions. But that would be a dystopian future.

Good machine intuition isn’t only about whether to trust that Google Maps has given the right directions or whether Alexa chose the best product. As AI becomes more ubiquitous, good machine intuition is vital for understanding whether the correct dataset has been used to perform analysis in a product design task, or whether a dataset fairly represents traditionally underserved groups. It includes asking not just whether we can use machines but whether we should.

We have to internalize that our natural intuitions are likely to be wrong when data is large, opaque, and complex. We will think we know more than we do. Our sense of expertise can be out of sync with reality. We have no instinct to detect when we are overextended. Worse, our traditional mechanisms of alert—feedback from others—can’t work when everyone else is in the same boat. We will latch on to a compelling story rather than one that is true. Good decisions and actions rely on the ability to faithfully translate data into a story, a craft that is deeply dependent on the judgment of the human narrator.

We can—indeed must—develop an intuition for the machine. Without an intuition for machine outputs, human judgment may become so unreliable, imprecise, and flawed that we will have no choice but to abandon it. Yet we are stuck—unless we understand the intent of machines, we have to unpick what the designer wants us to choose, and why, before we can decide.

We collaborate with other humans naturally. A group of humans can tap into the specialties of the individuals through evolved rituals or culture. We tell stories, we engage in common practices, and we have established cultural norms to share, challenge, coordinate, and compromise.

We can take these social skills and rituals for granted. We just do them. It is easy to look past this collective power when attempting to compare human intelligence and artificial intelligence. We frequently hear about the number of neurons in a brain or a model, or the speed of data transfer. But we rarely hear about the mysterious and magical power of humans’ collective intelligence.

It is this collective intelligence that has driven human success. And it is this collective intelligence that we must amplify rather than mute with artificial intelligence. Computers that were steerable like bicycles could be precisely directed as tools for an individual or a collective. Those computers did what we asked and only what we asked. But now our AI systems have minds of their own—seeking new data, synthesizing novel connections, reasoning freely, and generating new content to be consumed by humans and machines alike. These AI systems are no longer bicycles for our minds because they aren’t the powerful, yet simpler, tools of the past.

In this series, we hope to explore: How might we design AI to be a mind for our minds? How might we design AI to activate the collective intelligence of humans and machines? How might we design AI to be a collaborator and contributor to our individual and collective minds? How might we design AI to amplify our collective power as a species?

Join us as we explore how we might design a mind for our minds for collective intelligence, meaning, uncertainty, umwelt, trust, values, vulnerability, creativity, consciousness, agency, connection, learning, design, attention, metacognition, and much more.
