Apple Card and algorithm anger

How not to manage in the age of machine employees (and partner machine employees).

[Image: abstract image of a smashed apple]

Steve Wozniak’s comment was one of dystopian resignation: “Hard to get to a human for a correction though. It's big tech in 2019.”

Aside from what may be discrimination (which I’ll return to later), there’s a new form of psychological harm that’s made visible by this incident. It’s what I call algorithm anxiety, which rapidly and frictionlessly turns to algorithm anger when people bump up against the combination of an unfair algorithm and people who are not empowered, not informed and not accountable.

Hansson’s customer journey is a perfect example of what happens when machine employees (or in this case, machine contractors) and human employees don’t collaborate.

  • Call company customer service. Approach initially with an open mind, confident - because this is Apple after all - that this will simply be a matter of having a human both explain a “why” and resolve the obvious absurdity.
    Algorithm Anxiety Score: 2/5.
    Algorithm Anger Score: 0/5.
  • Speak to two Apple reps who are unable to do anything. One rep doesn’t “know why, but I swear we’re not discriminating. It’s just the algorithm.”
    Algorithm Anxiety Score: 3/5.
    Algorithm Anger Score: 1/5.
  • Rep explains that she can’t access the real reasoning and suggests calling TransUnion to check.
    Algorithm Anxiety Score: 5/5.
    Algorithm Anger Score: 2/5.
  • Call TransUnion, sign up and pay $25/month for “credit check bullshit shakedown.”
    Algorithm Anxiety Score: 5/5.
    Algorithm Anger Score: 5/5.
  • Try to second-guess every possible twist and turn the algorithm might make - green card versus citizen, higher credit score versus lower credit score, she didn’t smile enough when she applied, she wrote something down wrong? When nothing else makes sense, the only conclusion is ovaries versus testicles, because this is the obvious one to a human.
    Algorithm Anxiety Score: 5/5.
    Algorithm Anger Score: 5/5.

At this point, the Apple Anger score is a 5/5 too, which isn’t a good thing.

Apple’s partner in the Apple Card is Goldman Sachs, so the algorithm is Goldman’s. Apple Card is Goldman’s first credit card. The algorithm has presumably been built from Goldman’s internal customer data as well as external data sources, and it has likely been tested for bias and fairness according to established data science practices. Yet it seems that no one understands it, no one can explain it, and no one looks at its outcomes with an ounce of common sense.

“So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.” - @DHH

It’s like someone mindlessly following directions on Apple Maps and then driving into a river just because the nav said to.

At the end of the process, Hansson takes to Twitter (lucky for him he’s got a ton of followers and the algorithm is on his side) and is able to get Apple to launch an internal investigation. Goldman’s public response? “Our credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation or any other basis prohibited by law.”

Now, New York’s Department of Financial Services has initiated a probe into the credit card practices of Goldman Sachs. This is great news because NY is a state where there are explicit laws that recognize that algorithmic discrimination can occur by proxy. A statement from the Department says, “Any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law.”

Proxy discrimination is something new - and it happens because of how AI works. Powerful AI finds correlations that humans can’t; that’s much of its value. But if an AI is barred from using data because it’s illegal to use - say, race or gender - and that data is predictive of a certain outcome, the AI will naturally find proxies for it without any human knowing. As the proxies become less and less intuitive, the AI can’t disentangle its predictions from the attribute it was never supposed to use. And if the AI can’t, a human most certainly won’t be able to.
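To make that concrete, here’s a minimal sketch in Python - synthetic data and made-up feature names, not anyone’s actual model. The protected attribute is excluded from training, yet the model recovers it through a correlated proxy.

```python
# A minimal, hypothetical sketch of proxy discrimination (synthetic data,
# invented feature names -- not Goldman's actual model). Gender is excluded
# from training, yet a correlated proxy carries its signal into predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, size=n)          # 0 = male, 1 = female (protected)
proxy = gender + rng.normal(0, 0.5, size=n)  # e.g. a spend-category score that
                                             # happens to correlate with gender
income = rng.normal(80, 20, size=n)          # legitimate feature, in $1,000s

# Historical labels encode a biased rule that penalized women directly.
approved = ((income > 70) & (gender == 0)).astype(int)

# Train WITHOUT the protected attribute -- the inputs look "clean".
X = np.column_stack([proxy, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("approval rate, men:  ", pred[gender == 0].mean())
print("approval rate, women:", pred[gender == 1].mean())
# The gap persists: the model rediscovered gender through the proxy.
```

An auditor who only inspects the feature list sees nothing wrong here; the bias lives in the correlations, not the columns.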

Goldman Sachs employees don’t understand how AI works, because if they did they wouldn’t have put themselves in the position of defending a machine employee’s discriminatory behavior by denying the obvious. Just because the inputs exclude protected attributes doesn’t make the outputs non-discriminatory.
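Which means fairness has to be tested on outputs, not asserted from inputs. One common (if blunt) check is the disparate-impact ratio - the approval rate for the protected group divided by the rate for the reference group, with the “four-fifths rule” flagging anything below 0.8. A sketch, reusing `pred` and `gender` from the hypothetical example above:

```python
# Audit the OUTPUTS, grouped by the protected attribute. The attribute must
# be retained for auditing even though it's excluded from training.
# (Reuses `pred` and `gender` from the sketch above.)
def disparate_impact(pred, protected):
    """Approval rate of protected group / approval rate of reference group."""
    return pred[protected == 1].mean() / pred[protected == 0].mean()

ratio = disparate_impact(pred, gender)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Outputs fail the four-fifths rule despite 'clean' inputs.")
```

A ratio like this isn’t proof of illegal discrimination on its own, but it’s exactly the kind of output-level sanity check that appears to have been missing here.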

One interesting question is why this is happening now, with Apple’s/Goldman’s first credit card, and why in this demographic, i.e. (relatively) wealthy Apple users. It’s impossible to know, but my suspicion is that the inferences made by the Goldman algorithm are amplifying historical patterns in female earnings - where income drops in peak childbearing years and fails to keep pace with male income in wealthier demographics. This could go back years: Gen Z, Boomers, The Greats. The fact is that, even if it were scrutable, it’s a proprietary algorithm protected by Goldman’s property rights, so we may never know. We’re seeing it now because Apple Card is based on Apple ID and is therefore individual by design. That has exposed something that may previously have been hidden, because couples often apply jointly for a credit card.

All of this is messy. Goldman takes the stance that “creditworthiness” is a simple measure, conjured from truth in the data, no matter the outcome. Apple has distanced itself, which seems weirdly off-brand, especially on a topic so sensitive and so core to Apple’s values. Inferences, after all, could well be personal data and therefore part of the broader argument about privacy. The machine employee is untouchable, while the human employees who should be accountable just point to the wisdom of the machine. The people on the front line are utterly disempowered, both by the machine and by a structurally flawed organizational design.

What all this says is that we need algorithms to be treated as machine employees. We don’t condone discrimination by human employees just because they grew up learning to discriminate, whatever their prior experience. We don’t defend their behavior on the grounds that they’re only reflecting what the data says. We don’t let them off the hook for explaining themselves because we don’t understand how their brains work. And we certainly shouldn’t leave front-line people to explain to customers things that leaders don’t understand themselves.

I’ll leave the last word to Hansson:

“Apple and Goldman Sachs have both accepted that they have no control over the product they sell. THE ALGORITHM is in charge now! All humans can do is apologize on its behalf, and pray that it has mercy on the next potential victims.”

This week, only one pick: an excellent article from the Washington Post on HireVue. Ever since the science of reading emotions from facial expressions was busted as pseudoscience, I’ve been hot on HireVue (and not in a good way). This article is important, especially towards the end, where it describes the lengths people go to in trying to game (or please) the algorithm. I buy that humans are biased and that there’s a role for AI in recruitment, but I am skeptical that its use in the wild delivers on any of the lofty claims of increasing diversity. Without top-notch deployment and use, it could simply be amplifying bias across hundreds of companies and thousands of applicants. The customers of HireVue need to do better. Don’t just show us how it makes recruitment more efficient and give us platitudes; show us how you train it, how you manage bias, how you make it aspirational, how it fits your stated values, and how it is demonstrably improving recruitment. The HR managers and recruiters we talk to all genuinely want to find people who would be great but who get screened out by traditional systems. For applicants, the new angst is figuring out how to please an AI by having the “right” emotions.
