AI manipulation will force us to rethink privacy

The gaming industry confronts an AI designed specifically to turn players into big spenders.

This is a classic example of modern AI: take easily available toolsets developed by one of the platforms (Amazon, Google, Microsoft), apply them to a problem with plenty of training data available, and deploy them in an application that generates even more granular data, fast, at scale and in real time. Use the AI to predict user behavior, then use the AI to guide users into behaviors that make them even more predictable.

Gaming is perfect because the objective is monetization of users: a defined, measurable goal affected by multiple, subtle behavioral factors that only an AI can capture, aggregate, understand and respond to.

Henry Fong, the CEO of Yodo1, the company behind the game, describes himself as “lazy,” which is why he likes to have AI do the work. In 2018 he decided to teach an AI to moderate a community of millions of users, find the potential whales and then figure out how to get them to stay and spend even more. The AI looks for patterns in spending velocity, the amount of time players spend in the game, how many sessions they play, what guilds they’re in and what they are likely to buy. It then predicts what the player will do if offered certain paths or in-game bundles. After two weeks of training the AI’s accuracy was 87%, and Fong thinks he can get it up to 95%.
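None of Yodo1’s features or model details are public, but the shape of such a system is familiar: a supervised classifier over behavioral telemetry. Here is a minimal sketch under those assumptions - every feature name, weight and label below is synthetic, invented purely for illustration:

```python
# Hypothetical sketch of a "whale prediction" model. Yodo1's actual
# features, model and data are not public; everything here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Behavioral signals of the kind described above (all invented):
spend_velocity = rng.exponential(1.0, n)       # $/day over the last week
session_count = rng.poisson(5, n)              # sessions in the last week
minutes_per_session = rng.gamma(2.0, 10.0, n)  # average session length
in_guild = rng.integers(0, 2, n)               # guild membership flag

X = np.column_stack([spend_velocity, session_count, minutes_per_session, in_guild])

# Synthetic label: did the player buy the offered bundle? Weighted so that
# prior spending dominates, echoing Fong's "once they start spending" point.
logits = (1.5 * spend_velocity + 0.1 * session_count
          + 0.02 * minutes_per_session + 0.5 * in_guild - 3.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In a real pipeline the features would come from live telemetry and the labels from observed purchases; the point is how ordinary the machinery is.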

AI is great at finding things that are non-intuitive to humans, so, perhaps unsurprisingly, the AI found behavioral patterns that ran counter to expert intuition.

"The funny thing is, I always used to think that if you monetise your audience too hard, they'll leave the game. But it's actually the other way around. Once they start spending, they don't leave. They want to stay in the game and preserve their investment, and when they stay in the game, they spend more." - Fong

Also unsurprisingly, the revelation kicked off a debate about whether this is an ethical use of AI. On one side, people are free to spend whatever they want. On the other, designers have a responsibility to users - from understanding whether a player is in a position to spend that much money, to deciding whether it’s OK to prey on addicts who are unable to control their impulses. As Handrahan pointed out in his op-ed, the designers completely neglected a third option: have people stay in the game and not spend beyond what they can afford.

All of this raises a modern privacy question, one we should ask in the Age of AI: in a situation where a powerful intelligence knows more about a user’s future behavior than the user does, what is the user’s right to have their autonomy preserved?

A user has a starting preference (say, to spend no more than $x and no more than y hours). Because of its power to predict the user’s next preference, the AI has a huge advantage - it can present an option that it is pretty sure will adjust the user’s preference away from their starting point. Humans can be good at this too (are you sure you wouldn’t like a second slice of cake?!?), but AI is unique in how it accomplishes the feat. An AI can present information in unexpected and non-intuitive ways; it can process data at speeds and scales that humans can’t; and it can draw on the data of all the users who came before or who exist at the same moment. And it can adjust preferences in rapid succession, much faster than a human could in a similar position of influence.
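To make the dynamic concrete, here is a toy greedy loop. Everything in it is an assumption: the bundle prices, the stand-in acceptance model, and a “sunk cost” term that echoes Fong’s observation that players who start spending keep spending. It is not anyone’s real system - just a sketch of how an optimizer that predicts reactions can walk a player past their stated cap, one plausible offer at a time:

```python
# Toy sketch of preference steering as a greedy optimization loop.
# Prices, probabilities and the "sunk cost" term are all invented.
import random
from dataclasses import dataclass

OFFERS = [4.99, 9.99, 19.99, 49.99]  # hypothetical bundle prices

@dataclass
class PlayerState:
    budget: float      # the player's starting preference: a spend cap in $
    spent: float = 0.0

def predicted_accept_prob(state: PlayerState, price: float) -> float:
    # Stand-in for a learned model. Per Fong's observation, players who
    # have already spent are modeled as more willing to keep spending.
    sunk_cost_pull = 0.1 * state.spent
    affordability = max(0.0, 1.0 - price / (state.budget + sunk_cost_pull))
    return min(1.0, affordability)

def pick_offer(state: PlayerState) -> float:
    # Present whichever offer maximizes predicted expected revenue,
    # with no regard for the player's stated cap.
    return max(OFFERS, key=lambda p: p * predicted_accept_prob(state, p))

random.seed(0)
state = PlayerState(budget=20.0)
for _ in range(10):  # ten offers in rapid succession
    price = pick_offer(state)
    if random.random() < predicted_accept_prob(state, price):
        state.spent += price
print(f"stated cap: ${state.budget:.2f}, actual spend: ${state.spent:.2f}")
```

Run it and the simulated player ends up well past their $20 cap: each small accepted offer makes the next one more acceptable.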

AI forces us to rethink privacy because AI uses our personal data, and the data of others, to create inferences about us which are then used to target us personally. Many of these inferences are not intuitive. Because AI is so pervasive in data collection and aggregation, an AI system can learn about us passively. It can know more about our online preferences than we do. Eye-tracking, mouse movements and keystrokes can hint at our state of mind. This means that our hesitations, doubts and frustrations are no longer private.
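How passive can that inference be? A hypothetical sketch - the signals, weights and thresholds below are all invented - shows that nothing more exotic than ordinary UI events is required:

```python
# Hypothetical sketch of passive inference from interaction telemetry.
# The signals, weights and thresholds are invented; the point is that
# ordinary UI events can be aggregated into a guess about state of mind.
from statistics import mean

def hesitation_score(hovers_ms: list[float], backspaces: int, typed_chars: int) -> float:
    """Crude 0..1 score: long hovers over the buy button and heavy
    editing in a text field both read as doubt."""
    avg_hover = mean(hovers_ms) if hovers_ms else 0.0
    hover_part = min(1.0, avg_hover / 3000.0)  # a 3-second hover saturates
    edit_part = min(1.0, backspaces / max(typed_chars, 1))
    return 0.7 * hover_part + 0.3 * edit_part

# Two 1-2.5 second hovers over "buy" plus 14 backspaces in 40 keystrokes:
print(f"{hesitation_score([1200.0, 2500.0], backspaces=14, typed_chars=40):.2f}")
```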

We’ve been conditioned to think that privacy is about secrecy, choice, consent and control. Rules, disclosures and terms of service articulate a user’s control of inputs: their personal data and how it will be collected, stored and used. AI shifts the paradigm because what matters more are outputs: what inferences are made about us, and how we keep our innermost preferences, weaknesses and thoughts obscure. Privacy is fundamental to keeping our autonomy - our ability to make the decisions that best fit our future selves.

Fong cares about one thing - monetization - and he’s been able to build an AI that is uniquely capable of manipulating some people into spending colossal amounts of money. In his defense, the game is designed with price elasticity analysis built in: a child, or someone the AI predicts will be unable to afford to play, is only offered very high prices - the AI predicting that they will not make a purchase. It’s a great example of AI-enabled “persuasion profiling,” where different users are offered different prices and opportunities based on how much they are likely to value the product. But it risks identifying people’s cognitive errors instead, drawing in people who probably shouldn’t be engaging in the transaction in the first place and who will ultimately experience confusion or regret.
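Here is a sketch of what persuasion-profiled pricing could look like, under the assumptions described above (the price tiers, thresholds and model inputs are invented):

```python
# Hypothetical sketch of persuasion-profiled pricing as described above:
# users predicted unable to pay see deliberately unattractive prices,
# while likely buyers see prices tuned toward their predicted valuation.
# Both inputs are imagined model outputs in 0..1; the tiers are invented.

def price_for(ability_to_pay: float, valuation: float) -> float:
    if ability_to_pay < 0.2:
        return 99.99  # priced to deter, not to sell
    return round(4.99 + 45.0 * valuation, 2)  # scaled toward valuation

print(price_for(ability_to_pay=0.1, valuation=0.9))  # 99.99 (deterrent)
print(price_for(ability_to_pay=0.8, valuation=0.3))  # 18.49
```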

This is a case of an AI narrowly optimized for a single purpose, whatever the consequences. An important mark of ethical AI is that its design is not so singularly focused.


This week, some links to visualizations or explainers that I think are especially good.

  • Google’s graphic novel on machine learning. The explanation of gradient descent is so simple and elegant. Love it!
  • A terrific visualization of Bayes’ theorem (conditional probability) and Markov chains (the math behind Google’s PageRank).
  • Brilliant visual tour of algorithms and their application in a business/value chain, done by StitchFix. This is information architecture at its best.
  • An educational interactive game on AI bias in hiring and why it’s so difficult to deal with.
  • A curious website that can keep you hooked for a while. Every face is generated by a GAN and is not real. It’s fun spotting why it’s fake.
  • A useful visualization of fairness and the mathematics around defining it.

Also, Google Duplex is getting its first trial run outside of the US for New Zealand’s Labour Weekend. Kiwi subscribers - I’d love to hear any stories and thoughts. My views haven’t changed since I wrote this story in Quartz when it first launched. It’s an ideal way to keep us online instead of in the physical world, and that keeps us clicking ads.

I’ve been enjoying the podcast from DeepMind. It’s well produced and adds a lot of color to how DeepMind thinks about AI.
