Microsoft publishes the science of human-machine collaboration

Microsoft researchers recently published one of the most exciting advances in AI this year, IMO. I’m not referring to the company’s announcements regarding huge new NLP neural networks using semi-supervised learning, or the company’s new supercomputer, or the new tools for fairness in AI, although these are all impressive and useful. The new research is in human-machine collaboration.

Human-machine collaboration has been a hot area for years but, outside of social robots, it’s been more the domain of non-technical people than technical people. There simply hasn’t been much in the way of a scientific methodology for engineers to take a hybrid approach. While machine learning engineers focus on the frontier of mathematical and computational capabilities, in the absence of methods for human-machine teamwork, technology has generally marched on in isolation from the human factors. So while we can talk about “augmentation” of human skills, the reality is often different: shitty automation, sub-optimal task design, algorithmic aversion, hidden bias.

New research changes this. Now we have the math to train models in hybrid human-machine systems, taking into account what it means to consult a human.

Here it is (or one sample of it):
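What follows is a minimal sketch in assumed notation, not necessarily the paper’s exact formulation: the model $f_\theta$ answers on its own unless it queries the human (with probability $q(x)$), in which case the team pays the human’s loss plus a consultation cost $c$:

$$
\min_{\theta}\;\mathbb{E}_{(x,y)}\Big[\big(1-q(x)\big)\,\ell\big(f_\theta(x),y\big)\;+\;q(x)\,\big(\ell\big(h(x),y\big)+c\big)\Big]
$$

Here $h(x)$ is the human’s answer and $\ell$ is the task loss; the predictor and the decision about when to consult the human are trained together.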

Got that? Great.

In a new paper called “Learning to Complement Humans,” Eric Horvitz, director of research at Microsoft, along with fellow researchers from Microsoft and Harvard, explains how they have formalized the math for designing AI in which humans and machines work better together. It’s this mathematical formalization that makes the paper so important: now there’s no excuse for AI practitioners not to design AI systems that leverage the best of human and machine capabilities at the same time.

The methods presented are aimed at optimizing the expected value of human-machine teamwork by responding to the shortcomings of ML systems, as well as the capabilities and blind spots of humans.

The standard way of training models that are used to complement human decision making is to train them in isolation: have the model produce the highest accuracy on its own before putting it in front of a human. This new approach is different: the model is trained in such a way that it is forced to consider the distinct abilities of humans and machines. Training takes into account the “cost” of consulting a human and uses well-established AI techniques such as backpropagation to encode the unique skill of a human alongside the capability of the machine.
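As a rough illustration of what that can look like in code, here is a hedged sketch of joint training for a binary task: a model with a prediction head and a “query the human” head, trained end-to-end against recorded human answers with a fixed consultation cost. All the names here (`JointModel`, `team_loss`, `HUMAN_COST`) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

HUMAN_COST = 0.1  # assumed cost, in loss units, of consulting the human

class JointModel(nn.Module):
    """Illustrative model with a prediction head and a 'query the human' head."""
    def __init__(self, n_features):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.predict = nn.Linear(32, 1)  # the machine's own answer (logit)
        self.query = nn.Linear(32, 1)    # probability of deferring to the human

    def forward(self, x):
        z = self.backbone(x)
        return self.predict(z).squeeze(-1), torch.sigmoid(self.query(z)).squeeze(-1)

def team_loss(pred_logit, q, y, human_pred):
    """Expected loss of the team: the machine's loss when it answers itself,
    the human's loss plus the consultation cost when it defers."""
    machine_loss = nn.functional.binary_cross_entropy_with_logits(
        pred_logit, y, reduction="none")
    human_loss = nn.functional.binary_cross_entropy(
        human_pred.clamp(1e-6, 1 - 1e-6), y, reduction="none")
    return ((1 - q) * machine_loss + q * (human_loss + HUMAN_COST)).mean()

# One gradient step on a toy batch; human_pred stands in for logged human answers.
model = JointModel(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,)).float()
human_pred = torch.rand(64)

loss = team_loss(*model(x), y, human_pred)
loss.backward()
opt.step()
```

Because the query head is differentiable, backpropagation shapes both what the model learns and when it chooses to hand off.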

Think of it as a formal way to teach a machine to make the trade-off between what it needs to learn to do itself in order to be accurate at a task, and what it should leave to a human who is inherently better at that part.

This is a totally different mindset as well as a different technical approach:

  • It optimizes the combined performance of the human-machine system, increasing accuracy on the task overall.
  • Joint training allows for smaller models and helps a model focus its limited predictive ability on the most important regions of the feature space, while relying on humans’ top abilities in the places where the AI can afford to be less accurate.
  • It opens up a new design opportunity: tuning for asymmetric loss, where, for example, the impact of a false negative is much greater than the impact of a false positive.

The researchers tested their joint training approach on two problems: identifying galaxies using citizen science and diagnosing metastatic breast cancer in pathology slides.

For the stargazing application, joint models that optimize for complementarity uniformly outperformed fixed models, by anywhere between 10 and 73%. For the cancer task, improvements were up to 20%.

But the math allows the researchers to go further and explore which factors are most influential. Because a smaller-capacity model has more potential bias (it represents less complex hypotheses and can’t fit the “truth” as well), there has to be a tighter fit between the training objective and team performance. In theory, it’s possible to just throw more data at the problem and build more complex models, but in practice this increases the risk of overfitting, so having simpler, better-performing models is more useful. This approach helps with tuning for that trade-off and hints at new ways to value human expertise in training and developing AI.

The value of complementary training is especially high when there is an asymmetry between error costs. In cancer diagnosis, for example, a false negative is much worse than a false positive: missing a cancer that’s present is far worse than subjecting someone to unnecessary intervention and anxiety. This technique allowed the researchers to show that a combined system is particularly valuable when these costs are asymmetric: the gap between the joint and fixed models grew as the asymmetry grew. This finding should have a big impact on algorithmic design in medical systems.
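To make the asymmetry concrete, here is a small, hypothetical cost-weighted loss for a binary diagnosis task; the costs `C_FN` and `C_FP` are illustrative numbers, not values from the paper. Swapping something like this in for the plain cross-entropy in the joint objective pushes the model to defer to the human exactly where a miss would be most expensive.

```python
import torch

C_FN, C_FP = 10.0, 1.0  # assumed relative costs: a miss hurts 10x a false alarm

def asymmetric_loss(prob, y):
    """Cost-weighted cross-entropy: missing a positive (y=1) costs C_FN,
    a false alarm on a negative (y=0) costs C_FP."""
    prob = prob.clamp(1e-6, 1 - 1e-6)
    return -(C_FN * y * torch.log(prob) + C_FP * (1 - y) * torch.log(1 - prob))

prob = torch.tensor([0.2, 0.9])  # model's predicted probability of cancer
y = torch.tensor([1.0, 0.0])     # ground truth: one cancer, one healthy
print(asymmetric_loss(prob, y))  # the missed positive dominates the loss
```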

Humans and machines make different kinds of mistakes. This research was able to identify and quantify this effect because there was a very clear structure to the human error. In one experiment, a large portion of the human errors was concentrated in a small portion of the instances, identifiable using only two features. The joint model could then prioritize this region. While this came at the expense of lower accuracy in a different region, that was a region where the human had almost perfect accuracy, so the overall effect was still an improvement.
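One hedged way to picture that mechanism in code: weight the model’s per-instance training loss by whether the human errs on similar instances, so the model spends its limited capacity where the human struggles. The helper below, with its crude disagreement-based weights, is purely illustrative and not the paper’s method.

```python
import torch
import torch.nn as nn

def human_error_weights(human_pred, y, floor=0.1):
    """Per-instance weight: near 1 where the recorded human answer was wrong,
    a small floor where the human was right (illustrative heuristic)."""
    wrong = (human_pred.round() != y).float()
    return wrong * (1.0 - floor) + floor

def weighted_model_loss(model_logit, y, human_pred):
    per_instance = nn.functional.binary_cross_entropy_with_logits(
        model_logit, y, reduction="none")
    # Concentrate the model's capacity on the region where human error
    # clusters; tolerate lower accuracy where the human is nearly perfect.
    return (human_error_weights(human_pred, y) * per_instance).mean()

# Toy usage: the human is wrong on the 2nd and 4th instances, so those
# instances dominate the model's training signal.
logit = torch.zeros(4)
y = torch.tensor([1.0, 1.0, 0.0, 0.0])
human_pred = torch.tensor([0.9, 0.2, 0.1, 0.8])
print(weighted_model_loss(logit, y, human_pred))
```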

The distribution of errors incurred by the joint model shifts to complement the strengths and weaknesses of humans.

There is now a mathematical way to train AI to take in a human’s knowledge and skill as the AI learns. This means that AI practitioners can optimize human-machine team performance even when interactions between humans and machines extend beyond querying people for answers, say in settings with complex, interleaved interactions that include different levels of human initiative and machine autonomy.

The researchers see opportunities for studying additional aspects of human-machine complementarity across different settings.

This is a huge step scientifically, but it’s also an important one psychologically. Now human-centered AI can be expressed in a way that machines truly understand. That should have everyone excited.
