How not to manage in the age of machine employees (and your partners’ machine employees).
When David Heinemeier Hansson tweeted that Apple Card had offered him twenty times the credit limit it offered his wife, despite her higher credit score, the replies poured in. Steve Wozniak’s was one of dystopian resignation: “Hard to get to a human for a correction though. It's big tech in 2019.”
Aside from what may be discrimination (which I’ll return to later), this incident makes visible a new form of psychological harm. It’s what I call algorithm anxiety, which rapidly and frictionlessly turns into algorithm anger when people run into the combination of an unfair algorithm and employees who are not empowered, not informed, and not accountable.
Hansson’s customer journey is a perfect example of what happens when machine employees (or in this case, machine contractors) and human employees don’t collaborate.
At this point, the Apple Anger score is 5/5, which isn’t a good thing.
Apple’s partner in the Apple Card is Goldman Sachs, so the algorithm is Goldman’s. Apple Card is Goldman’s first credit card. The algorithm has presumably been built from Goldman’s internal customer data as well as external data sources, and it has likely been tested for bias and fairness according to established data science practices. Yet it seems that no one understands it, no one can explain it, and no one looks at its outcomes with an ounce of common sense.
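What does such testing look like? Here is a minimal sketch of one established practice, a demographic parity check on approval decisions using the “four-fifths rule.” Everything in it, the data, the rates, and the 80% threshold, is synthetic and assumed for illustration; it is not Goldman’s actual test suite.

```python
# Hypothetical sketch of a standard pre-launch fairness test:
# a demographic parity ("four-fifths rule") check on approvals.
# All data and rates are synthetic; nothing here is Goldman's.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# A protected attribute, held out of the model but kept for auditing.
group = rng.integers(0, 2, n)

# Simulated approval decisions with a built-in 70% vs 55% gap.
approved = rng.random(n) < np.where(group == 0, 0.70, 0.55)

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()

# Four-fifths rule: flag the model if the lower approval rate is
# under 80% of the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: {rate_a:.2%} vs {rate_b:.2%} (ratio {ratio:.2f})")
print("FLAG: possible disparate impact" if ratio < 0.8 else "passes")
```

The catch is that a check like this is aggregate and silent on mechanism: a model can pass it overall while still leaning on proxies for protected attributes, and it explains nothing about any individual decision, which is exactly where Hansson’s support calls went nowhere.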
“So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.” - @DHH
It’s like someone mindlessly following directions on Apple Maps and then driving into a river just because the nav said to.
At the end of the process, Hansson takes to Twitter (lucky for him he’s got a ton of followers and the algorithm is on his side) and is able to get Apple to launch an internal investigation. Goldman’s public response? “Our credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation or any other basis prohibited by law.”
Now, New York’s Department of Financial Services has initiated a probe into Goldman Sachs’s credit card practices. This is great news because New York has explicit laws recognizing that algorithmic discrimination can occur by proxy. A statement from the Department says, “Any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law.”
Proxy discrimination is something new, and it happens because of how AI works. Powerful AI finds correlations that humans can’t; this is part of its value. But if an AI isn’t allowed to use certain data because it’s illegal to use, say race or gender, and that data is predictive of an outcome, then the AI will naturally find proxies for it without a human knowing. And as the proxies it finds become less and less intuitive, the AI will be unable to disentangle its predictions from the protected attributes those proxies stand in for. If the AI can’t, then a human most certainly won’t be able to.
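To make that mechanism concrete, here is a minimal synthetic sketch. Every feature and number in it is invented (it is a toy, not any real credit model): a linear model fit with gender excluded still reproduces a gender gap, because an innocuous-looking feature that correlates with gender reconstructs it.

```python
# Synthetic sketch of proxy discrimination. All features and
# numbers are invented; this is not any real credit model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: never shown to the model.
gender = rng.integers(0, 2, n)

# An innocuous-looking feature (say, a spending-category score)
# that happens to correlate with gender. This is the proxy.
proxy = gender + rng.normal(0, 0.5, n)

# Historical credit limits that encode a gap the model will learn.
limit = 20_000 - 10_000 * gender + rng.normal(0, 1_000, n)

# Fit a linear model on the proxy alone. Gender is excluded,
# exactly as a "we don't use gender" policy requires.
X = np.column_stack([np.ones(n), proxy])
coef, *_ = np.linalg.lstsq(X, limit, rcond=None)
pred = X @ coef

# Predictions still differ sharply by gender: the proxy has
# partially reconstructed the attribute the model never saw.
print(f"mean predicted limit, group 0: {pred[gender == 0].mean():,.0f}")
print(f"mean predicted limit, group 1: {pred[gender == 1].mean():,.0f}")
```

In this toy setup, one noisy proxy recovers roughly half of the historical gap on its own; the tighter the correlation, or the more proxies available, the more of the gap comes through, with no protected attribute anywhere in the inputs.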
Goldman Sachs’s employees don’t understand how AI works, because if they did they wouldn’t have put themselves in the position of defending a machine employee’s discriminatory behavior by denying the obvious. Just because the inputs don’t include protected classes doesn’t make the outputs non-discriminatory.
One interesting question is why this is happening now, with Apple’s/GS’s first credit card, and why in this demographic, i.e. (relatively) wealthy Apple users. It’s impossible to know, but my suspicion is that the inferences made by the GS algorithm are amplifying historical patterns in female income, where income drops in peak childbearing years and fails to keep pace with male income in wealthier demographics. This could go back years: Gen Z, Boomers, the Greats. The fact is that, even if it were scrutable, it’s a proprietary algorithm protected by Goldman’s property rights, so we may never know. We’re seeing it now because Apple Card is based on Apple ID and so is individual by design. This has exposed something that may previously have been hidden, because couples often apply jointly for a credit card.
All of this is messy. Goldman takes the stance that “creditworthiness” is a simple measure, conjured from truth in the data, no matter the outcome. Apple has distanced itself, which seems weirdly off-brand on a topic so sensitive and so core to Apple’s values. Inferences, after all, could well be personal data, and therefore part of the broader argument about privacy. The machine employee is untouchable, while the human employees who should be accountable just point to the wisdom of the machine. The people on the front line are utterly disempowered, both by the machine and by a structurally flawed organizational design.
What all this says is that we need algorithms to be treated as machine employees. We don’t condone discrimination by human employees just because they grew up learning to discriminate, whatever their prior experience. We don’t defend their behavior on the grounds that they are only reflecting what the data says. We don’t let them off the hook for explaining themselves because we don’t understand how their brains work. And we certainly shouldn’t leave front-line people to explain to customers things that leaders don’t understand themselves.
I’ll leave the last word to Hansson:
Apple and Goldman Sachs have both accepted that they have no control over the product they sell. THE ALGORITHM is in charge now! All humans can do is apologize on its behalf, and pray that it has mercy on the next potential victims.
This week, only one pick: an excellent article from the Washington Post on HireVue. Ever since the science of reading emotion from facial expressions was busted as pseudoscience, I’ve been hot on HireVue (and not in a good way). This article is important, especially towards the end, where it describes the lengths people go to in order to game (or please) the algorithm. I buy that humans are biased and that there’s a role for AI in recruitment, but I’m skeptical that its use in the wild delivers on any of the lofty claims of increasing diversity. Without top-notch deployment and use, it could simply be amplifying bias across hundreds of companies and thousands of applicants.

HireVue’s customers need to do better. Don’t just show us how it makes recruitment more efficient and give us platitudes: show us how you train it, how you manage bias, how you make it aspirational, how it fits your stated values, and how it is demonstrably improving recruitment. The HR managers and recruiters we talk to all genuinely want to find people who would be great but who get screened out by traditional systems. For applicants, the new angst is figuring out how to please an AI by having the “right” emotions.