The Artificiality Imagining Summit 2024 gathered an (oversold!) group of creatives and innovators to imagine a hopeful future with AI. Tickets for our 2025 Summit will be on sale soon!
This week we dive into learning in the intimacy economy as well as the future of personhood with Jamie Boyle. Plus: read about Steve Sloman's upcoming presentation at the Imagining Summit and Helen's Book of the Week.
Explore the shift from the attention economy to the intimacy economy, where AI personalizes learning experiences based on deeper human connections and trust.
How might a mind for our minds help us with the paradox of knowing what we care about while allowing us to care about things it has no knowledge of?
In 2015, Mark Zuckerberg mused on the idea of a fundamental mathematical law that governs human social relationships and ultimately codifies us. He even speculated that there may be a unifying theory of humans that can be expressed in an equation, and he was willing to bet on it. But what if machines start to mathematize our most cherished values, both individual and collective?
Silicon Valley has a penchant for seeking out things that can be quantified through mathematics and machines. Mathematics excels at describing whatever it was instrumental in discovering, but it is far less robust when applied to things it cannot explain. Silicon Valley's algorithms restrict human social relationships to the kinds of values that are amenable to mathematical description. However, if we limit the scope of human social relationships to only what can be quantified, we risk creating a thin representation of what we truly care about.
The problem is that in complex human social systems, anything that is easy to quantify tends to be a very superficial representation of our values. The more we try to mathematize human values, the more we focus on what can be easily expressed through mathematical formulas, and the more we neglect everything else.
Values are hard to formulate. At the individual level, our values emerge and change across the span of our lives. Group values arise out of cultural processes that take time to cohere, become embedded in institutions, and scale across society. They are constantly in flux; diversity of values arises out of the ambiguity they inherently contain. Each person and each group interprets things differently, which makes it hard to control how society develops. The vagueness of goals allows different values to stay coherent, many different paths to be trodden, and a whole population of solutions to compete.
Conversely, if a value is made of math, it is unambiguous and precise. The advantage is that it can more easily be scaled up and propagated. This might provide certainty, reduce ambiguity, and make the world less complex. But this would be at the expense of alternative or immeasurable values that underpin human diversity.
It is evident that Zuckerberg's commercial interests at Facebook involve making humans more machine-readable. By rendering human social connections more readable, the company can exert greater top-down control over the social network, thereby shifting human agency from the periphery to the center. However, philosopher C. Thi Nguyen takes a different approach to this idea. He questions why there is often a gap between deep human values and their measurement in human social systems. Why does an inverse relationship often exist between the ease of creating a metric and the fidelity of that metric?
Nguyen suggests that this gap may be merely contingent, which is one possible interpretation of Zuckerberg's bet. It is conceivable that there is a way to mathematically describe human values; we simply have not discovered it yet. With more data, smarter algorithms, and bigger machines, it may become possible to compute all human values. This would amount to a partial version of artificial general intelligence, since encoded values are a necessary but not sufficient component of AGI.
Alternatively, it may be that there is some “deep metaphorical reason” for this tension. In this line of argument, values resist being operationalized. For some reason, mathematics can’t be used to describe a deep human value. Values are necessarily more abstract and intangible than we can grasp using mathematics.
This flips the script. Or at least gives a different perspective on deep human values. Perhaps a deep value goes beyond just something you feel for yourself, an artifact of your own consciousness. A deep value is defined by its resistance to mathematization. A value is something that can never be made entirely visible, transferable, scalable, or shareable. Once it can be mathematized, it ceases to be a value. It becomes an algorithm.
When values are transformed into algorithms, we become susceptible to what Nguyen calls value capture. Value capture happens when our own values are displaced by the simplified versions offered by another group or institution. The result is that we outsource our values because it's an easier way to live. Value determination is hard work: we must wrestle with external pressures that entangle with our specific personality and place in the world. But when we are offered a handy algorithm, we can “skip out on the process of value self-determination.”
Sometimes we might want to outsource our values. For instance, my Apple Watch telling me that I've closed all my exercise rings for the day is pretty useful. As long as I can keep the metric of closing the rings separate from my core value of living a life with lots of exercise, I've still got some autonomy. But if closing the rings becomes the value itself, I've essentially relinquished my deeper value of living a good life on my terms. It's a particular kind of loss of control where I stop paying attention to the more ambiguous, less tangible, but deeper value. So I might decide not to go to a movie with a friend for the sole reason that I haven't closed my rings. (Yes, people do this.)
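To make the distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the `Evening` type, the scores, the 0.1 weighting); it is not anyone's actual decision procedure, just one way to show how a proxy metric can quietly displace the value it was meant to serve:

```python
from dataclasses import dataclass

@dataclass
class Evening:
    closes_rings: bool    # the proxy metric: does this option close my rings?
    enriches_life: float  # the fuzzy, hard-to-quantify deeper value (0 to 1)

def captured_choice(options: list[Evening]) -> Evening:
    # Value capture: the proxy metric is the only criterion.
    return max(options, key=lambda o: o.closes_rings)

def autonomous_choice(options: list[Evening]) -> Evening:
    # The metric is one small input; the deeper value still dominates.
    return max(options, key=lambda o: o.enriches_life + 0.1 * o.closes_rings)

solo_workout = Evening(closes_rings=True, enriches_life=0.4)
movie_with_friend = Evening(closes_rings=False, enriches_life=0.9)

print(captured_choice([solo_workout, movie_with_friend]))    # the workout wins
print(autonomous_choice([solo_workout, movie_with_friend]))  # the movie wins
```

The point is not the arithmetic but the shape of the two functions: the captured one is trivially legible and optimizable, while the autonomous one depends on a quantity no sensor can report.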
The allure of the proxy is strong. It's much easier to grapple with than the real thing. Value capture can be seductive. Values become more portable, requiring less justification and explanation. They become clearer, simpler, and more consistent. In Nguyen’s words, "A self with complex and subtle interests is one that is hard to understand. A self that has been re-engineered to value clear and simple metrics is a self that is easy to understand. And it is a self that is optimized for interpersonal coherence." You may achieve value alignment, but not value enlightenment. You become "self-determined but not self-determining."
In today's complex, hyper-connected, and increasingly fraught world, we are more susceptible to value capture than ever before. According to Nguyen, "We can gain a hedonic reward for internalizing simplified values. When we come to value a simplified goal in a non-game activity, we bring the pleasures of value clarity into the real world. Our purposes become clearer, our degree of success becomes more obvious, and our achievements become more readily rankable." With machines offering us easy versions of "who and what we all care about," much of the "existential friction of social life suddenly disappears."
As the market for self-optimization and value alignment grows, we find ourselves at a crossroads. While the demand for easier and more coherent versions of ourselves continues to soar, the supply of institutional algorithms is also on the rise. The question we must ask ourselves is: what are we really getting in exchange for outsourcing our value determination to mathematically thin versions of our true selves?
In this landscape, we risk giving our power away to tech giants like Facebook, whose leaders seek to codify human behavior into a comprehensive algorithm. The danger of value capture is that we lose our ability to assert our unique values and to care differently from what the algorithm tells us to care about.
How will the creation of new values come to pass? Some in the field of AI hope that pure reinforcement learning will lead us to discover emergent human values. Yet this strategy carries its own risks. In replacing an explicit value definition system with an implicit value discovery mechanism, do we not risk merely rediscovering what we already know, with AI playing the role of the ultimate recursive pursuer? By reducing values to a static state, we may end up predicting an individual's values the same way recommendation algorithms foresee our buying preferences today.
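To see why a static state is the natural endpoint, consider how today's recommenders typically work. The sketch below is illustrative only, using a toy ratings matrix and a standard low-rank factorization (nothing here comes from the essay itself): each person is compressed into a small, fixed vector, and every future "preference" is extrapolated from that frozen summary.

```python
import numpy as np

# Toy ratings matrix: rows are people, columns are items.
ratings = np.array([
    [5.0, 4.0, 1.0, 0.0],
    [4.0, 5.0, 0.0, 1.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Low-rank factorization (via SVD): compress each person's history
# into a static two-number embedding that stands in for what they value.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
person_vectors = U[:, :k] * s[:k]  # one fixed vector per person
item_vectors = Vt[:k, :]

# Predicted preferences: the machine's frozen model of each person,
# extrapolated to items they have never rated.
predicted = person_vectors @ item_vectors
print(np.round(predicted, 1))
```

Whatever part of a person's values cannot be expressed in those two numbers simply does not exist for the model, which is exactly the thinness at issue.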
The owners of social machine intelligence will use whatever tools they can to make our values machine-readable, and it remains to be seen whether humans can maintain the enigmatic quality of our values. Can we preserve their impenetrable essence against the relentless drive of technology to decipher and control? The challenge is to keep the ineffable nature of human values: the unnameable, indescribable essence that makes them unique to each individual and resistant to algorithmic capture.
Furthermore, can we resist the temptation of the easy solution of quantification and instead engage in the messy, complex process of value determination? The answer may not be clear, but perhaps it is in the struggle to hold onto our deepest values that we find the beauty and wonder of the human experience, a mystery that no machine will ever unravel.
Autonomy is not intrinsically self-preserving. Metrics and math that make our values easier to live by make it possible to "autonomously damage one's own autonomy". In a world where the largest companies own the strongest AI and are increasingly shaping our social world, we can no longer assume that our values are under our control. We will need to figure out how a mind for our minds can help us with the paradox of knowing what we care about while allowing us to care about things it has no knowledge of.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.