There is no technology miracle

"All the data in the world are only as useful as the institutions and leaders that govern its use." - Shoshana Zuboff

Recent headlines about the role of AI in the fight against COVID show the mismatch between people’s expectations and what’s possible. Those who think AI is overhyped are finding plenty of places to point this out, while the hype merchants push isolated solutions that are often divorced from the overall system in which the algorithm operates.

It’s just techies doing techie things because they don’t know what else to do.

- Bruce Schneier, Berkman Klein Center for Internet & Society

Neither extreme is productive, but both show how AI, robotics and autonomous technologies are often misunderstood.

A core misunderstanding is how AI gets its start. AI runs on data, and if that data doesn’t exist, there’s no way for AI to begin to understand the world. This means that the current crisis is also a crisis for AI.

You cannot “big data” your way out of a “no data” situation. Period.

- Jason Bay, Government Digital Services, Singapore

The world’s data has been completely disrupted by coronavirus. The impact on the accuracy of AI, particularly AI that relies on supervised learning and AI used in predictive applications such as demand forecasting, must surely be significant.

Another mistake is to assume that machines will be a welcome substitute for humans. The reality is more complex. Autonomous machines are welcomed when a job is unsafe for a human, or when they genuinely add to a human’s ability to do their job. Autonomous, learning robots still lack the broad base of skills and dexterity needed to replace humans in unpredictable environments. What we’re seeing is much more akin to how robots work in disaster response than to a new acceptance of autonomous machines in normal society.

This hints at the limits of their use: no, we won’t see a huge increase in drone-delivered lattes, because people will instead worry about the same drone taking their temperature from afar. And I’m skeptical about this being the break-out moment for autonomous delivery: it’s hard to see how theft and vandalism of such delivery vehicles could be prevented in a world where unemployment could, at least for a time, exceed 32%.

Many hospitals and essential services are using robots in place of people for specific tasks, but jobs are not being automated. A recent survey of the role of robots in the pandemic response, conducted by researchers at Texas A&M, shows that robots either perform tasks that a person can’t do or can’t do safely, or take on tasks that free up responders to handle the increased workload.

The majority of robots being used in hospitals treating COVID-19 patients have not replaced health care professionals. These robots are teleoperated, enabling the health care workers to apply their expertise and compassion to sick and isolated patients remotely.

- Dr Robin Murphy

AI operates inside a system: a chain of tasks or a series of human-machine handovers. So while AI can help identify a potential drug or vaccine candidate, it can’t do much to speed up other parts of the process. Maybe AI can help diagnose COVID, but some of these innovations are inconsistent with medical ethics, causing entrepreneurs to withdraw MVP-level products as hastily as they launched them.

Within 48 hours, Carnegie Mellon forced the lab to take down the online test, which could have run afoul of FDA guidelines and been misinterpreted by people regardless of the disclaimer. "It's a perfectly valid concern, and my whole team had not thought of that ethical side of things."

- Rita Singh, Carnegie Mellon, per Business Insider.

Many such innovations have only served to highlight that the core logic of agile development—move fast, experiment, see what works, see what breaks—is completely inappropriate in medicine. Even in a pandemic, there is the right emergency response and the wrong one.

But perhaps the biggest ding on AI right now is that it can’t deliver a miracle. Tech can help with tracking, but it can’t do the hard, physical, on-the-ground work of contact tracing, which is what we actually need. Without that human contact tracing, even Google and Apple’s joint exposure-notification technology could leave us with something that doesn’t work.

The US could become a wild west of incompatible contact tracing apps that vary from state to state and city to city, managed by companies with no public health experience that rake in cash via government contracts while providing the people who use them with a false sense of security.

- Buzzfeed

In all this, though, there is hope. I have taken great comfort from Shoshana Zuboff’s recent comments. As the author of The Age of Surveillance Capitalism and perhaps the greatest critic of Google’s and Facebook’s data strategies, she has appeared recently in forums such as Lavin Live and Unrig Summit, sounding surprisingly upbeat given everything going on.

Zuboff says she has “nothing but optimism.” By this, I infer that she’s not only referring to how Google’s and Facebook’s ad revenue will take a pounding, exposing them to the vulnerabilities of the digital marketplace; I think she’s talking about something bigger. Instead of ignorance or apathy, she sees a groundswell of democratic engagement. Now when she asks, “will the digital future be compatible with democracy?” she doesn’t hear crickets; she hears a whole lot of people asking the same question.

Does fighting COVID-19 mean that we are on a forced march to COVID-1984?

- Shoshana Zuboff.

Extension of surveillance is perhaps the ultimate concern, but so is continuing to propagate myths of AI, including the myth of humans being less valuable in an age of AI.

Humans judge humans differently than they judge machines. We judge people based on their intentions, while we judge machines on consequences. This means that people get rewarded for taking risks while machines get punished for making mistakes. And because AI is probabilistic, with false positives and false negatives, AI operating on its own simply can’t win right now.

What’s fine on Spotify—a song that the algorithm predicted you’d like but didn’t, or not being recommended a song you would have liked—is a real problem in medicine. A tracking app does harm when it doesn’t work, either driving unreasonable alarm or creating a false sense of security. Only humans, applying reason and judgment, can balance the consequences of those errors and make any such app useful.
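To make that concrete, here is a rough back-of-the-envelope sketch in Python. The sensitivity, specificity and prevalence figures are purely illustrative assumptions, not estimates for any real contact-tracing app, but they show how quickly false alarms dominate when genuine exposures are rare.

```python
# Illustrative only: these numbers are assumptions chosen to show the
# base-rate effect, not measurements of any real contact-tracing app.

def alert_precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that an alert corresponds to a genuine exposure."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical app: catches 95% of real exposures, wrongly flags 5% of
# non-exposures, and only 1% of the contacts it evaluates were truly exposed.
p = alert_precision(sensitivity=0.95, specificity=0.95, prevalence=0.01)
print(f"Share of alerts that reflect a real exposure: {p:.0%}")  # roughly 16%
```

Under these assumptions, more than four out of five alerts would be false alarms. Deciding what to do with that kind of error pattern is exactly the judgment call that has to stay with people.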

The end result is an app that doesn't work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

- Bruce Schneier, Berkman Klein Center for Internet & Society

People want humans to lead, to be accountable, to make decisions, to be just, ethical and fair, and, most importantly, to be present; they want machines to serve these goals.

We won’t have a technology miracle: not only is one not possible, but the technology isn’t trustworthy without the trustworthiness of the people and systems that surround it.

What people want is a human miracle. And if AI can help with that, great.

There is only one source of miracle—human ingenuity and creativity.
- Shoshana Zuboff.

Also this week:

  • We decided to change it up a bit and repurposed our human-centered AI design into couple-centered design for COVID. You can download the free resources here and join our Facebook group for more.
  • Ethical AI in recruitment and other HR processes is more important than ever. We’ve launched a service specifically to help HR professionals evaluate the underlying AI, assess risk and vet vendors. More details here. If you know someone who would be interested, please share.
  • Listen to Shoshana and others on BBC The Real Story: Governments are deploying new technologies to fight coronavirus. But at what cost? An excellent summary on the discussion around mass surveillance. “A heady cocktail of ideas.”
  • Latest on tech ethics from Data and Society. “The keyword inextricably bound up with discussions of these problems has been ethics. It is a concept around which power is contested: who gets to decide what ethics is will determine much about what kinds of interventions technology can make in all of our lives, including who benefits, who is protected, and who is made vulnerable.”
  • An excellent summary editorial on contact tracing apps from Nature.
  • LinkedIn released new AI tools to help prepare for job interviews, including an automated tool that gives feedback on pacing and sensitive words. It can be accessed immediately after applying for a job or from the LinkedIn Jobs homepage.
  • Clarifying “algorithm audits” and “algorithmic impact assessments.” This new report helps break down the approaches.
  • Hospitals turn to an AI tool to predict which COVID-19 patients will become critically ill, without knowing whether it works: the tool hasn’t been validated, and hospitals are reaching wildly different conclusions about how to apply it. Via Stat News.
