The Paradox of Personalization



After last week’s post, which included an article on How to Personalize and Not Over-personalize, a number of readers commented that they’d like to read more on this idea. We’re also experimenting with a podcast, which will arrive as a mid-week email. The idea is to discuss this week’s subject in more detail, so send your questions and thoughts in by Tuesday (Wednesday for kiwis). Replying to this email will do the trick.

At the heart of personalization lies a natural tension - AI’s strength is predictability, while humanity’s strength is unpredictability. It’s not an easy paradox to resolve.

AI makes predictions. Predictions are valuable - especially when they involve human behavior. In User Friendly, Kuang and Fabricant argue that the most valuable raw material in product design is not glass or steel or plastic - it is human behavior. In Surveillance Capitalism, Zuboff details the perils of capturing human “behavioral surplus,” which is used as the raw material for predictions and monetized by AI-powered platforms. This trade in human futures is how Google and Facebook make their billions.

AI’s promise is personalization - delivering frictionless experiences that make our lives easier. We can offload cognitive, emotional or moral tasks to AI. For this we need AI that can predict our behavior, and AI’s ability to predict human behavior is greater than most people realize. AI can detect signals in our online actions that are beyond our own comprehension, then turn them into patterns that are highly predictive.
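
To make that concrete, here is a minimal sketch in Python - synthetic data, entirely hypothetical feature names, not any platform’s actual pipeline - of the basic mechanics: behavioral signals that are individually meaningless get combined by a model into a confident prediction about a future action.

```python
# A toy sketch of behavioral prediction. All features and data are
# synthetic/hypothetical; the point is only that signals no human would
# find meaningful can combine into a highly predictive pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user behavioral features: scroll speed, dwell time,
# late-night session counts, hesitation before clicks, and so on.
n_users, n_features = 5000, 20
X = rng.normal(size=(n_users, n_features))

# Pretend an unknown combination of these signals drives a future action;
# no single feature is interpretable on its own.
true_weights = rng.normal(size=n_features)
prob = 1 / (1 + np.exp(-(X @ true_weights)))
y = rng.random(n_users) < prob  # e.g., "buys next week"

model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print(f"held-out accuracy: {model.score(X[4000:], y[4000:]):.2f}")

# For any individual, the model emits a probability - a tradable
# prediction of a "human future," in Zuboff's terms.
print(f"P(action) for one user: {model.predict_proba(X[:1])[0, 1]:.2f}")
```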

AI can also influence our behavior at a subconscious level, because our preferences are often formed by associative learning without our conscious awareness. This means that if AI can predict our future preferences, it has the capacity to manipulate them. Because these technologies operate beyond our awareness, on data we can’t conceptualize, the insights are available to the machine but not to us. So when it comes to determining our future selves, AI may have the upper hand.

Here is the fundamental paradox of personalization: AI optimizes so that humans become more predictable, but our very human-ness revolves around our ability to be unpredictable. We even rely on unpredictability to experience connection with others. If everything were predictable, we wouldn’t need to work with each other to build a shared vision or tackle the unplanned.

In other words, AI’s efficiency goal is in conflict with human agency.
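
The dynamic is easy to illustrate with a toy simulation - pure NumPy, made-up parameters, not a model of any real system. A recommender always serves its best guess, engagement reinforces the preference it served, and the entropy of the user’s behavior falls:

```python
# A toy feedback loop: the system serves its best prediction, consumption
# nudges the user's preferences toward what was served, and the user's
# behavior becomes measurably more predictable over time.
import numpy as np

rng = np.random.default_rng(1)

def entropy_bits(p):
    """Shannon entropy of a preference distribution, in bits."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n_items = 10
prefs = np.full(n_items, 1.0 / n_items)  # the user starts open to everything
reinforcement = 0.05                      # made-up strength of the feedback

print(f"step   0: entropy = {entropy_bits(prefs):.2f} bits")
for step in range(1, 201):
    served = int(np.argmax(prefs))        # system serves its best prediction
    if rng.random() < prefs[served]:      # user engages with some probability
        prefs[served] += reinforcement    # engagement reinforces the preference
        prefs /= prefs.sum()              # renormalize to a distribution
    if step % 50 == 0:
        print(f"step {step:3d}: entropy = {entropy_bits(prefs):.2f} bits")

# Entropy falls toward zero: the user becomes easier to predict,
# which is exactly what the optimization rewards.
```

Real systems are vastly more complex, but the gradient points the same way: prediction accuracy is the reward, and a narrower human is an easier prediction.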

In democratic societies, agency is a central value, and it brings a lot of messy inefficiency. Society and its institutions don’t actually optimize. It sounds counterintuitive, but the social value of leaving a wide range of opportunities open for the future generally exceeds the value society could realize by optimizing in the present. Our default in the US is to leave things undetermined. We need to play, to explore, to be unpredictable and to encounter unpredictability.

This doesn’t mean we want to be unpredictable all the time, or that we shouldn’t outsource some of our thinking and actions. It doesn’t mean we can’t use technology to help us reach our personal goals. But it does mean we have a fundamental conflict with any AI whose commercial incentives allow it to profit or self-deal as humans are made more predictable. One of the inequalities AI introduces is the gap between how much we know about ourselves and how much others know about us.

AI’s real advantage is knowing us better than we know ourselves and offering us things, in the moment, that predictably steer us in the directions it has already mapped rather than ones we might discover for ourselves. This is the market in human futures, where machine learning is prioritized over human learning and the real bias is a bias against humans.

This is not the same as old-school advertising or traditional technology. AI is different because it is rule-creating: it learns from data, it creates knowledge that humans struggle to understand, it interferes with our agency in ways we can’t detect, and it may be working toward an objective contrary to ours.

That doesn’t mean it’s not useful and helpful, nor that we can’t use it to better ourselves and our societies. But it does mean that we need to make sure we keep machines biased towards humans - that we bias human learning over machine learning.

The Turing Test is usually thought of as a test of whether a machine can pass for a human. But perhaps we should flip it around and ask: what does it mean for a human to behave in such a way that she passes for a machine? And how do we design to avoid that?

Personalization to a user-of-one incentivizes machines to solve for finer and finer predictions of our future selves. But, paradoxically, the prediction of our future selves reduces our ability to freely find those selves. The younger we are, the more pernicious this effect may be. Perhaps the ultimate protection we can give our kids is the right to figure themselves out before an AI does it for them.


Also this week:

  • Interesting article from A16Z on the difficulties of building AI companies compared with traditional software start-ups. It’s highly relevant to the Paradox of Personalization, which shows up as a diseconomy of scale. IMO, this is another area where the platforms have a huge advantage: they skirt the diseconomy of personalization by collecting “everything.” It also speaks to the importance of product design in AI - AI is less tolerant of sub-optimal product design.
  • AI in job search and how it makes searching for a job “a living hell,” from Vice.
  • An “art piece” built using data about financial brokers, lobbyists, liquor licenses, business statistics, and US government spending to predict white collar crime by zip code. Wall St… 100% risk of white collar fraud, apparently.
  • Washington Post article on Ring and Nest as “normalization of American surveillance.”
  • CBS News on Ring tightening privacy re data sharing with Google and Facebook. Sort of.
  • The Pentagon has adopted a set of AI principles centered on human oversight. Article from Defense One.
  • A summary of key points from this week’s Justice Department workshop on Section 230 from The Verge.
  • The Telegraph profiles workplace surveillance in the UK.
