Is Regulatory Capture by Big AI Inevitable?

The Biden Administration's Executive Order has stirred up discussion about regulatory capture in AI. For good reason.

Abstract image of person in a cage

President Biden's recent Executive Order on AI marks a significant moment for the industry. It captures a host of narratives surrounding AI and, in doing so, has sparked reactions from tech and venture leaders. The Executive Order is not a law but a strategic political gesture: it doesn't have the teeth of law, has no enforcement mechanism, and does not match its broad aims with the necessary resources.

Why has it stirred up so much discussion about the risks of regulatory capture, particularly regarding innovation and safety? 

Regulatory capture is a phenomenon where regulatory agencies, established to act in the public interest, instead advance the commercial or special interests of the industry or sector they are supposed to regulate. This happens when the regulators, over time, become more aligned with the interests of the industry players due to various factors like lobbying, the promise of future employment in the industry, or the industry's greater access to information. Regulatory capture is a very human process. There’s no deep magic to it—it’s a result of people being people—a fundamentally social phenomenon. 

Obviously, the point of AI regulation is to give the public a voice on AI safety and use, but it's less obvious how regulation might impact innovation in AI. For the watchdogs of regulatory capture, there are legitimate red flags. Nothing opens the door to regulatory capture better than a host of public servants who suddenly have to do a lot more with not much at all—their resource constraints create the perfect entry point for industry to be “helpful.” This is particularly concerning in the realm of AI because of the nature of the technology, which is difficult to understand, and because of its ubiquity as a general purpose technology in the process of diffusing through the entire economy.

Regulatory capture can stifle innovation by acting as a type of negative feedback, adding constraints and inertia to the system and making it difficult for outsiders to break in with innovative solutions. Large incumbent companies don't necessarily stifle innovation—think of Apple's iPhone, Amazon's cloud services, or Boeing's Dreamliner—all paradigm-changing innovations hatched inside big companies. But just because Big AI can innovate doesn't mean it should be given license to dominate AI innovation.

The risk of regulatory capture hinges on whether economic drivers can be clearly separated from safety considerations. In the AI industry, this requires a deliberate division between the dominant business models, which rely on scaling computation, data, and expertise, and the potential risks associated with AI advancements. Transparency, accountability, and limits on the concentration of power are central to the regulation of AI.

It's an open question how this separation will be possible in practical terms because AI safety may depend on scale in competing ways—will larger, centrally managed models be easier to align with human values than smaller, diversified models? And, if so, does safety rely on large capital expenditures, providing a safety incentive for industry concentration?

Larger models are trained on more extensive datasets, which can enable them to learn from a broader range of examples. This comprehensive learning can potentially lead to more accurate, nuanced, and context-aware responses, thereby reducing the chance of errors or harmful outputs. With increased capacity, larger AI models can handle more complex tasks and forms of understanding. They can better capture the subtleties and nuances of human language and behavior, which may contribute to safer interactions by preventing misunderstandings.
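To make the economics of scale concrete, here is a minimal sketch of a Chinchilla-style scaling law, the kind of empirical relationship that ties capability to model size and data. The function name and the constants are illustrative assumptions, not values fitted to any real training run.

```python
# A minimal sketch of a Chinchilla-style scaling law relating loss to model
# and data size. The constants below are illustrative assumptions, not values
# fitted to any particular paper or training run.

def expected_loss(n_params: float, n_tokens: float) -> float:
    """Approximate pretraining loss as a function of parameters and tokens."""
    E = 1.7                   # irreducible loss (assumed)
    A, alpha = 400.0, 0.34    # model-size term (assumed)
    B, beta = 410.0, 0.28     # data-size term (assumed)
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up parameters and tokens keeps lowering loss, but with diminishing
# returns -- the economic logic that favors ever-larger, capital-intensive models.
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params, {20 * n:.0e} tokens: loss ~ {expected_loss(n, 20 * n):.3f}")
```

If capability and any capability-linked notion of safety follow a curve like this, then the path to both runs through capital expenditure, which is the crux of the concentration argument.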

Larger models may better resist adversarial attacks because they have more parameters to 'defend' themselves with: they may have seen more examples of what a malicious input looks like and be better at generalizing from those examples to resist manipulation.
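For readers who haven't met an adversarial attack in code, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy classifier. The model, the input, and the epsilon value are all assumptions chosen for illustration; the point is only that a small, targeted nudge to an input can change a model's behavior.

```python
# A minimal FGSM sketch: perturb an input in the direction that most
# increases the loss. Model, data, and epsilon are toy assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # a "clean" input
y = torch.tensor([1])                       # its true label

# Gradient of the loss with respect to the input, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# Nudge the input by a small step in the sign of that gradient.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```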

Larger models tend to generalize better from training data to unseen data. This means they can make better predictions and decisions in new situations, potentially reducing the risk of unsafe outcomes when encountering unfamiliar scenarios. Some larger models incorporate mechanisms for self-monitoring and self-regulation, allowing them to recognize when they are unsure about something or when their output might be risky or harmful. At least in the medium term, if we want some version of machine metacognition, we might need larger models.

Finally, in the domain of reinforcement learning, larger models might be able to simulate more scenarios and learn from a wider variety of experiences, leading to safer policy development before deployment in the real world.

In short, if scaling up models makes them more generally capable, then safety needs to be considered a general capability. In this scenario, economics and safety are scientifically intertwined. This is perhaps the fundamental conundrum: if the only safe AI is the largest AI, then concentration in Big AI is inevitable, which makes regulatory capture all but inevitable.

Conversely, it’s possible that alignment is more attainable with smaller (and in the current market, less concentrated) AI.

Smaller AI models are inherently less complex, which makes them easier to understand and debug. The internal workings of these models are more transparent, allowing developers and researchers to more effectively track how decisions are made. This transparency is crucial for alignment, as it enables the identification and correction of misaligned behaviors or biases in the model. 
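As a concrete illustration of that transparency, here is a minimal sketch, assuming a small linear classifier and an off-the-shelf dataset: every decision is a weighted sum of named features, so a suspect weight is something you can read, and correct, directly.

```python
# A minimal sketch of why small models are inspectable: a linear classifier's
# behavior is just a weighted sum of named features. Dataset and model choice
# are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# The five features the model leans on most heavily, with their learned weights.
# A surprising or unwanted weight here is directly visible, and fixable.
ranked = sorted(zip(data.feature_names, model.coef_[0]),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.3f}")
```

Nothing comparable exists for a model with hundreds of billions of parameters, which is why interpretability research for large models is its own field.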

Smaller models often require less data for training and can be more focused on specific tasks or domains. This limited, focused data usage contributes to safer and more predictable outputs.

As AI models increase in size and complexity, they become more prone to emergent behaviors—unexpected results or actions that were not explicitly programmed or intended by the developers. These behaviors can be challenging to predict and control, potentially leading to safety concerns. Smaller models, with their simpler structures, are less likely to exhibit such emergent behaviors. Their outputs are more consistent and aligned with their training.

Success hinges on regulators' deep understanding of the system, enabling them to carve it at its natural joints. The critical junctures are where it's possible to distinguish between economic incentives and safety concerns. The challenge of regulating AI lies in the intricacies of compute, data, models, and talent—only those deeply immersed in the field for years, entangled in the scientific nuances, can discern the interwoven anatomy of economics and safety. This is in stark contrast to industries like airlines, telecoms, or healthcare, where the critical joints, though firmly bound by institutional ligaments, are discernible and amenable to the blades of strategic intervention.

Regulation is a tricky business. Theoretically sound solutions often unravel in the face of real-world complexity, where policies are maneuvered around, human behavior is unpredictable, and incentives fall short of their mark. Regulating AI is unlike anything we've regulated before. It will require regimes that are both top-down and bottom-up. It will have to account for the emergent properties of the human-machine system, and it will somehow have to enable humans to tip the scales in favor of humans and to deal with the asymmetry in speed and scale afforded by AI.

This EO reflects that people are anxious about AI. And it's a multifaceted worry: AI harming humans versus helping them, failing democracies versus unchecked capitalism, trust versus power that can't be trusted. The Executive Order stands as a declaration—a desire to reclaim the narrative on the future of "intelligence," wresting back the pen that Silicon Valley co-opted. That raises the big question: can the pen be returned at all? And what happens when the pen is permanently replaced by an agent of Big AI?
