We need to modernize AI regulation

Regulation needs to be proactive. Here are two ways that can happen.


AI regulation is on its way. Sundar Pichai, CEO of Google, generated a lot of buzz recently with an op-ed he wrote for The Financial Times calling for greater regulation, while arguing that it should take “a relatively light touch that recognizes the need to avoid a one-size-fits-all approach.” Tony Blair, the UK’s former PM, is on the record as saying that big tech - and by implication AI - is a utility and should be regulated accordingly. The US government has called for public input (due March 13) on the regulation of AI, where, right now at least, the focus is on staying in the lead without impinging on the rights and values of US citizens.

Regulation is going to get stuck. Because of how AI actually works, accuracy and fairness are in competition. An AI will be racist or sexist or unfair simply as a natural by-product of optimizing for accuracy or profit. It won’t deliver fairness or trust or non-discrimination on its own. What makes AI regulation so difficult is that, although in theory many existing laws could work just fine for AI, in practice there’s a missing piece.
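
To see why, consider a toy sketch (all numbers invented): when a lender’s historical data is skewed against one group, the single decision threshold that maximizes accuracy also produces unequal approval rates, with no mal-intent anywhere in the code.

```python
import random

random.seed(0)

def make_applicant(group):
    # Historical scores are skewed: group B's scores run lower for reasons
    # that have nothing to do with creditworthiness.
    score = random.gauss(0.6 if group == "A" else 0.5, 0.15)
    repaid = score > 0.55  # the "ground truth" labels inherit the skew
    return group, score, repaid

data = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

def accuracy(threshold):
    return sum((s > threshold) == r for _, s, r in data) / len(data)

# Optimize for accuracy alone, exactly as a profit-driven pipeline would.
best = max((t / 100 for t in range(100)), key=accuracy)

for g in ("A", "B"):
    scores = [s for grp, s, _ in data if grp == g]
    print(f"group {g}: approval rate {sum(s > best for s in scores) / len(scores):.0%}")
# The accuracy-optimal threshold approves group A roughly 63% of the time
# and group B roughly 37% - unfairness emerges from optimization alone.
```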

AI outpaces humans. It takes time for people to see that damage has been done, and it’s not possible to see harm as it’s happening. Reactive regulation will be ineffective because it leaves regulators without any technical way to regulate. All they have are organizational and human responses - oversight committees and corporate processes.

Regulation needs to be proactive. Here are two ways that can happen.

Real-time monitoring

We need humans to be able to understand what machines are doing, even if they can’t match their speed and scale, or understand the deep complexity of the algorithms, data and models. The financial industry has largely solved this problem: FINRA, the industry’s self-regulatory agency, has direct access to highly granular trading data, with sensors placed directly into data feeds.

In a recent Brookings article, the authors of The Ethical Algorithm explain how the financial industry uses technology to monitor technology. The solution is to monitor selectively: placing sensors where the most important algorithmic decisions are made and knowing what to look for. This same idea could be applied to big tech AI - the science is understood, the measures are known and the techniques are available. Monitoring could theoretically be run completely from the outside, perhaps using composite systems or only at specified times. It could be built off to the side and never deployed in production. It could, in short, be non-intrusive.

“If there really is gender bias in the credit limits granted for Apple’s new credit card (as has been alleged anecdotally), it could be discovered by regulators in a controlled, confidential, and automated experiment with black-box access to the underlying model. If there is racial bias in Google search results or housing ads on Facebook, regulator-side algorithms making carefully designed queries to those platforms could conceivably discover and measure it on a sustained basis.”

- Michael Kearns and Aaron Roth, The Ethical Algorithm
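
Here is a minimal sketch of what such regulator-side probing could look like, assuming black-box query access. Everything here is hypothetical: `query_model` stands in for a platform’s real scoring endpoint, and the gender penalty is injected only so the audit has something to find.

```python
import random

random.seed(1)

def query_model(applicant):
    # Hypothetical stand-in for the platform's real scoring endpoint.
    base = applicant["income"] * 0.2 + applicant["credit_score"] * 10
    penalty = 0.85 if applicant["gender"] == "F" else 1.0  # injected for the demo
    return base * penalty

def paired_audit(n=1000):
    # Matched-pair testing: identical applicants where only the protected
    # attribute differs, so any gap is attributable to that attribute.
    gaps = []
    for _ in range(n):
        profile = {"income": random.uniform(30_000, 150_000),
                   "credit_score": random.uniform(550, 820)}
        limit_m = query_model({**profile, "gender": "M"})
        limit_f = query_model({**profile, "gender": "F"})
        gaps.append((limit_m - limit_f) / limit_m)
    return sum(gaps) / len(gaps)

print(f"mean credit-limit gap, M vs F: {paired_audit():.1%}")
```

The design choice matters: the regulator never sees the model’s internals or training data, only its answers to carefully constructed queries, which is what keeps this kind of monitoring non-intrusive.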

Fairness by design

The behavior of an AI system relies to some extent on its post-design experience. In AI design, more effort has to go in upfront to define intent, anticipate consequences and uncover sources of bias. Fairness has to be designed in from the beginning.

Today, the problem is that fairness is an afterthought, if it’s considered at all. Fairness slows down development. Most engineers and project managers have no experience of AI design and don’t think about fairness. Many companies avoid fairness design and testing because of the legal liability of discovering that their AI is, in fact, unfair: discovering discrimination leaves the company in a worse position than not knowing.

Privacy-by-design is the precedent for fairness-by-design. The idea dates from the 1970s and was incorporated into the data protection directives of the 1990s. Privacy-by-design principles are fundamental to GDPR and, although in need of a refurb for the age of AI, they have had a meaningful impact in advancing better regulation.

In AI we need fairness-by-design in regulation because it’s probably one of the best ways to have people think about the unique requirements of AI design. A mindset where speed and MVP (minimum viable product) matter more than fairness and MVD (minimum viable data) is a tough mindset to break. The market rewards the former and there are no guard rails to argue for the latter. No mal-intent is required for AI to be mean.
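
One way to make fairness-by-design concrete: treat fairness like any other test that must pass before a model ships. This is a sketch, not a standard - the demographic-parity metric and the 0.05 tolerance are illustrative choices, not regulatory numbers.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    shares = [hits / total for hits, total in rates.values()]
    return max(shares) - min(shares)

def test_model_fairness(predictions, groups, tolerance=0.05):
    # Fails the build, exactly like any other broken test, if the model's
    # positive rates diverge too much across groups.
    gap = demographic_parity_gap(predictions, groups)
    assert gap <= tolerance, f"parity gap {gap:.2f} exceeds {tolerance}"

# Example: 1 = approved, 0 = denied, alongside each applicant's group.
preds  = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
test_model_fairness(preds, groups)  # raises if the gap is too wide
```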

Consider this example from a small company called Porkbun, based in Portland, Oregon, whose business is domain names. I bet they have no idea just how awful their “name spinner” AI generator is for women and black girls.

“Man” is passable, if a bit odd and not all that useful. “Woman” is pretty terrible. “Black girl” is really dreadful.

[Screenshots: the name spinner’s suggestions for “man”, “woman” and “black girl”]

I don’t know the data source but, based on Safiya Umoja Noble’s work, it’s probably just the regular old internet. Here’s the thing: the designers of this widget probably have no idea how the internet is an algorithm of oppression for black women.

We need regulation because we need people to start thinking more about what goes into AI: the people, the data, the design. In much the same way as food-safety regulation covers employees’ hygiene (“employees must wash their hands before returning to work”), we need regulations in AI that focus on the quality of the inputs.
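
In that spirit, an inputs check might run before any training does. A sketch, with a made-up 10% representation floor standing in for whatever a real rule would specify:

```python
from collections import Counter

def audit_representation(records, group_key="group", floor=0.10):
    # Before training, verify that each group the system will affect is
    # actually present in the data at more than a token level.
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: (n / total, n / total >= floor) for g, n in counts.items()}

records = [{"group": "A"}] * 800 + [{"group": "B"}] * 150 + [{"group": "C"}] * 50
for group, (share, ok) in audit_representation(records).items():
    print(f"group {group}: {share:.0%} of data {'OK' if ok else 'UNDER-REPRESENTED'}")
```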

We need to take a proactive rather than a reactive approach. Machines are too fast, operate at too great a scale, and do too much damage before anyone notices. Don’t believe the story that AI regulation is just a tweak on what we have in telecoms or utilities or media. We know the underlying science, and it tells us how to use intentional design principles and real-time monitoring in market-oriented ways, without revealing proprietary intellectual property or stifling innovation.


Also this week:

  • From Sonder Scheme, an important update for companies using or creating emotional AI: designing for context. Scientists who study emotional expressions in context are beginning to understand how much more varied, nuanced and context-dependent emotional expression is. AI science needs to follow.
  • Clearview AI - as Dave and I discussed on this week’s podcast - is working on cameras now. Its app was also disabled on Apple’s App Store for breaking rules on distribution. And in another case of different rules for the rich and powerful, The New York Times found that Clearview AI was all the rage amongst investors and others the company approached for investment. As we mentioned, this company’s leaders seem determined to game the system until they find the threshold for regulation.
  • Quite the essay: The Prodigal Tech Bro, from Maria Farrell in The Conversationalist. “Allowing people who share responsibility for our tech dystopia to keep control of the narrative means we never get to the bottom of how and why we got here, and we artificially narrow the possibilities for where we go next. And centering people who were insiders before and claim to be leading the outsiders now doesn’t help the overall case for tech accountability. It just reinforces the industry’s toxic dynamic that some people are worth more than others, that power is its own justification.”
  • An article from the WSJ on how employers are now tracking happiness. If you aren’t a subscriber, try this link to WSJ’s Twitter post.
  • A link to an NBC article detailing how Google tracked a man’s bike ride past a burglarized home. That made him a suspect. This is how “nothing to hide, nothing to fear” breaks down.
  • Predicting the coronavirus outbreak: How AI connects the dots to warn about disease threats. Interesting insights into how modeling public health is different in the age of AI. From The Conversation.
  • An interesting article from The Telegraph which reports on research into how surveillance changes how we think. If we know we are being watched, we are harder on ourselves. Weird.

If you’re inclined (I will be): US Government public submission on AI regulation, including bias and American values (due 3/13)

https://www.regulations.gov/document?D=OMB_FRDOC_0001-0261

The Memorandum calls on agencies, when considering regulations or policies related to AI applications, to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property. The draft Memorandum is available at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.
