Machine employees in government

What does it mean if AI is doing the job of government employees?


In 2016, as Intelligentsia Research, we coined the term “machine employees” to describe AI in the workplace. Our goal was to differentiate traditional IT technologies from modern AI, in which deep learning and other machine learning techniques are deployed with the specific intent of taking on a decision-making role.

The concept of machine employees is important because, while they can substitute for humans in processes where people once evaluated, decided and acted, they aren’t held to the same standards as people. They can’t be easily questioned, sued or taxed. In short, machine employees aren’t accountable, but the humans that employ them should be. A human should be able to explain, justify and override a poorly performing machine employee.

But everywhere you look, it seems there are more and more instances of machine employees that are poorly designed and deployed, with government services a particular concern. A recent essay in the Columbia Law Review by Kate Crawford and Jason Schultz from AI Now outlines cases where AI systems used in government services have denied people their constitutional rights. They argue that, much like other private actors who perform core government functions, developers of AI systems that directly influence government decisions should be treated as state actors. This would mean that a “government machine employee” would be subject to the United States Bill of Rights, which prohibits federal and state governments from violating certain rights and freedoms. That risk is especially acute when services are provided to groups of people who are already at a disadvantage.

Here is the key question: are AI vendors and their systems merely tools that government employees use, or does the AI perform the functions itself? Are these systems the latest tech tool for human use, or is there something fundamentally different about them? If the intent of a machine employee is to replace a human employee, or to substitute for a significant portion of their decision-making capability, then our intuitions tell us it’s the latter.

There are horror stories about some of these government AI systems. In Arkansas, cerebral palsy patients and other disabled people have had their benefits cut in half, with no human able to explain how the algorithm works. In Texas, teachers were subjected to inscrutable employment evaluations; the AI vendor fought so hard to keep its source code secret that, even in court, “only one expert was allowed to review the system, on only one laptop and only with a pen and paper.” And in DC, a criminal risk assessment tool constrained sentencing choices for juveniles rated as “high risk,” displaying only options for treatment in a psychiatric hospital or a secure detention facility, drastically altering the course of children’s lives. Perhaps the most egregious case is Michigan’s use of AI for “robo-determination” of unemployment benefit fraud. The system adjudicated 22,000 fraud cases with a 93% error rate, and 20,000 people were subject to highest-in-the-nation quadruple penalties of tens of thousands of dollars per person.

In public services, the bottom line is always to cut costs and increase efficiency. But the “most expensive” populations are often the ones that require the most support because they are economically, politically or socially marginalized. When decisions about them are made by inscrutable, biased machine employees, deployed by people who have not been trained or who are themselves poorly supported, the potential for harm is high.

In all these situations, human employees were unable to answer even the most basic questions about the behavior of the systems, much less change the course of the outcome for individuals.

One advantage of government (that is, public services and public accountability) is that we actually know this stuff now, thanks to the courts. But these are one-off cases, and there is no systematic way to protect against similar harms being inflicted on others. In fact, government procurement processes can make this worse: AI systems are increasingly adopted from state to state through software contractor migration. They can be trained on historical data from one state that isn’t applicable to another, without any consideration of the differences in populations. Patterns of bias can proliferate and can even stem back to the intentions of a single employee.

If good design (including explainability, accountability to humans and human-in-the-loop protections) and regulation fail, we will need something in the middle. The idea that developers of AI systems are actually state actors, “government machine employees,” is potentially an important way to bridge the current AI accountability gap.


Also this week:

  • A must-read piece from the NYT on Clearview.ai, facial recognition and its use in surveillance. Now everyone’s face is searchable, using images taken from Facebook, Twitter, YouTube and Venmo against those companies’ stated policies. This isn’t only about surveillance and privacy; it is also a test of whether big tech can self-regulate and stop the practices powering surveillance.
  • Terrific piece in the Boston Review from Annette Zimmermann, Elena Di Rosa and Hochan Kim on how technology can’t fix algorithmic injustice. This is totally worthy of your time.
  • Interesting reporting from The Telegraph (registration required) on Google’s bias-busting team, who get together to swear, curse and make racist and sexist comments as an in-house way of teaching their AI not to respond to racist and sexist remarks. I particularly liked this quote from the SF correspondent: “Trying to boil the prejudice out of this gigantic data stream feels like standing in the middle of a raging river trying to catch refuse in a net. I'm glad for the people who swear at Google, but I wonder how effective they can really be without some deeper, more fundamental realignment.”
  • A useful and intense resource on the current state of AI ethics from the Berkman Klein Center for Internet and Society at Harvard University.
