AI Systems are Discovering a Shared Model of the World. It's called "reality."

A recent paper entitled The Platonic Representation Hypothesis supports the idea that the widespread use of singular algorithms may actually be simplifying the world. Large-scale AI models, such as GPT-4, are all starting to learn the same representations of human digitized reality.

Research paper: The Platonic Representation Hypothesis

Key Points:

  • Global Complexity and Simplification: David Krakauer of the Santa Fe Institute argues that globalization and the use of singular algorithms are simplifying rather than increasing global complexity, resulting in less robust and interesting systems.
  • Platonic Representation Hypothesis: AI models like GPT-4 are converging towards a shared model of reality, akin to the shadows on the wall in Plato’s cave, driven by factors such as multitask scaling, computational capacity, and simplicity bias.
  • Multitask Scaling Hypothesis: As AI models are exposed to more tasks and data, the space of possible representations narrows, pushing them towards a common solution.
  • Capacity Hypothesis: Larger AI models, with greater computational capacity, are more likely to find and converge to optimal representations of tasks compared to smaller models.
  • Simplicity Bias Hypothesis: AI models inherently favor simpler solutions, which strengthens as they scale up, leading to convergence on simple, common representations.
  • AI’s Expanding Applications: The convergence of AI models’ representations enhances their ability to reason about and interact with the world, improving performance in language tasks and visual structure understanding.
  • Limitations and Pitfalls: AI models are constrained by their training data, potentially limiting their ability to represent unrecorded aspects of reality, raising concerns about over-reliance on AI’s models of reality.

At a recent conference on increasing global complexity that we attended, David Krakauer, head of the Santa Fe Institute, kicked off the opening keynote panel. The moderator asked why the world is becoming more complex. In a typically contrarian manner, Krakauer responded, "Well, I'm not sure that it is." He argued that globalization and the widespread use of singular algorithms might actually be simplifying the world. This loss of complexity, he suggested, makes our systems less robust, less heterogeneous, and less interesting.

A recent paper entitled The Platonic Representation Hypothesis supports this idea. Large-scale AI models, such as GPT-4, are all starting to learn the same representations of human digitized reality.

In the allegory of Plato's cave, prisoners chained in a cave see only shadows of objects cast on the wall, mistaking these shadows for reality itself. I've noticed this allegory coming up a lot recently in the AI discourse and wondered why. I think it's because it offers a useful way to suggest that AI systems, like the prisoners, are beginning to construct representations of reality based on the limited "shadows" of data they are trained on.

The paper argues that as AI models are trained on increasingly large and diverse datasets, their representations are converging towards a shared, underlying model of reality. The researchers propose that this convergence is driven by several factors. First, as models are trained on more tasks and data, the space of possible representations that can solve all these tasks becomes increasingly constrained, leading to convergence. Second, larger models have a greater capacity to find these common solutions. Third, the inductive biases of neural networks, particularly simplicity bias, further guide them towards similar representations.

The Multitask Scaling Hypothesis: As AI models are trained on an increasing number of tasks and data, the space of possible representations that can solve all these tasks becomes smaller. In other words, there are fewer ways to represent the world that are compatible with a large and diverse set of tasks and data. This pressure towards a common solution increases as models are scaled to more tasks.
The Capacity Hypothesis: Larger AI models, with their greater computational capacity and flexibility, are better able to find and converge to the optimal representation that solves a given set of tasks. While smaller models might find a diversity of sub-optimal solutions, larger models are more likely to converge to the same, optimal representation.
The Simplicity Bias Hypothesis: AI models, particularly neural networks, have an inherent bias towards simple solutions. Given a choice between a complex and a simple representation that both solve a task, these models will tend to converge to the simpler one. As models are scaled up, this bias becomes stronger, further driving convergence to a common, simple representation.
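
To make "convergence" concrete, the paper measures how similarly different models represent the same inputs. Below is a minimal sketch of one such measurement, a mutual k-nearest-neighbor overlap between two sets of embeddings. It is in the spirit of the paper's alignment metric, but the function names, parameters, and toy data are illustrative assumptions, not the authors' code.

```python
# A minimal sketch: quantify how aligned two models' representations are by
# checking whether the same inputs have the same nearest neighbors in each
# embedding space. Illustrative only; not the paper's implementation.
import numpy as np

def knn_indices(X: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest neighbors (by cosine similarity) for each row of X."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)           # exclude each point from its own neighbor set
    return np.argsort(-sims, axis=1)[:, :k]   # top-k most similar rows

def mutual_knn_alignment(A: np.ndarray, B: np.ndarray, k: int = 10) -> float:
    """Average overlap between the k-NN sets computed in representation A and in B."""
    nn_a, nn_b = knn_indices(A, k), knn_indices(B, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Toy usage: embeddings of the same 100 inputs from two hypothetical models.
rng = np.random.default_rng(0)
emb_model_a = rng.normal(size=(100, 512))   # e.g., a vision encoder's features
emb_model_b = rng.normal(size=(100, 768))   # e.g., a language encoder's features
print(mutual_knn_alignment(emb_model_a, emb_model_b))  # low (~k/N) for unrelated embeddings
```

The hypothesis predicts that as models scale, this kind of score, computed over real paired data rather than random vectors, should rise across architectures and even across modalities.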

This growing ability of AI to model reality could extend the applications of AI, enabling systems that can reason about and interact with the world in increasingly sophisticated ways. Research shows that jointly training language and vision models enhances their performance on language tasks, demonstrating cross-modal synergy. Additionally, language models trained solely on text data can develop a rich understanding of visual structures, as effective visual representations can be generated from code produced by these models.

AI could become a powerful tool for understanding complex systems, from biology to economics, by uncovering the underlying structures and dynamics of these systems from vast amounts of data.

However, the researchers also acknowledge several limitations and potential pitfalls. Models are ultimately constrained by the data they are trained on, and may struggle to represent aspects of reality that are not well-captured in this data. There is an implicit assumption here—that digitized human knowledge is the limit of "reality."

As AI becomes "ready" to represent the world to us, will this limit our own understanding and exploration? If we come to rely on AI's models of reality, we may become like the prisoners in Plato's cave, mistaking the AI's representations for reality itself.

On the other hand, AI could also be seen as a tool to extend our understanding, like a telescope that extends our experience. By revealing patterns and connections that are beyond our immediate perception, AI could help us break free from our own "caves" of limited understanding. Maybe, but a genuine unification of reality will require more from foundation models than the simplification they can cause.

One indication of representational convergence is the rising number of systems built on top of pre-trained foundation models. These models are becoming standard backbones across a growing spectrum of tasks. Their versatility across numerous applications implies a level of universality in the way they represent data.
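
As a rough illustration of that backbone pattern, the sketch below keeps a pretrained encoder frozen and trains only a small task-specific head on its features. The extract_features function is a hypothetical stand-in for whatever foundation model encoder a system uses; it is not a real API.

```python
# A minimal sketch of the "frozen foundation model backbone + lightweight task head"
# pattern. The backbone is assumed and faked here; only the linear probe is trained.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(inputs: list[str]) -> np.ndarray:
    """Placeholder for a frozen pretrained encoder returning one vector per input."""
    rng = np.random.default_rng(42)
    return rng.normal(size=(len(inputs), 768))  # pretend 768-dimensional backbone features

# Downstream task data (toy): the backbone is reused as-is, so only the probe needs labels.
train_inputs = [f"example input {i}" for i in range(200)]
train_labels = np.random.randint(0, 2, size=200)

probe = LogisticRegression(max_iter=1000).fit(extract_features(train_inputs), train_labels)
print(probe.score(extract_features(train_inputs), train_labels))  # training accuracy of the probe
```

If representations really are converging, the same frozen features should serve many such heads, which is exactly what the proliferation of systems built on shared backbones suggests.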

There are questions and lessons here for spatial reasoning and robotics.

The Platonic Representation Hypothesis has interesting implications for the ability of language models to perform spatial reasoning tasks. While the paper primarily focuses on the convergence of representations across different AI models and modalities, it also suggests that language models, through their exposure to vast amounts of text data, may be learning representations that capture spatial and physical properties of the world.

One of the key lessons here is that language models, despite being trained solely on text, can develop an understanding of spatial concepts and relationships. This is because language often encodes spatial information, such as the relative positions of objects, their sizes, and their interactions. As language models are trained on larger and more diverse datasets, they may begin to extract and internalize these spatial concepts, forming representations that mirror the physical world.

This has implications for tasks such as natural language navigation, where an AI system must understand and reason about spatial relationships described in text. A language model that has learned a robust representation of space and physics may be better equipped to handle such tasks, even without explicit spatial training.
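
As a toy illustration of how one might informally probe this, the sketch below compares sentence embeddings of spatially related descriptions. It assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; the sentences and the interpretation are illustrative, not an experiment from the paper.

```python
# A crude probe: do text-only embeddings place equivalent spatial descriptions
# closer together than contradictory ones? Illustrative sketch, not a benchmark.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose text encoder

sentences = [
    "The lamp is on the desk.",
    "The desk is beneath the lamp.",        # same spatial arrangement, reworded
    "The lamp is far away from the desk.",  # different spatial arrangement
]
emb = model.encode(sentences, convert_to_tensor=True)

print(util.cos_sim(emb[0], emb[1]))  # one would hope for higher similarity here...
print(util.cos_sim(emb[0], emb[2]))  # ...than here, if spatial relations are captured
```

Probes like this are suggestive at best; grounded evaluation, for example against paired visual scenes, is a stronger test of whether the spatial representation is real or merely lexical.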

However, language models may struggle with spatial concepts that are poorly represented in their training data, or that require a more grounded understanding of the physical world. There is also a risk of language models learning spatial biases or misconceptions present in the text they are trained on. Most of us might not realize that we have spatial reasoning biases alongside the more familiar cognitive biases. For example, people tend to perceive distances between familiar locations as shorter than distances between less familiar ones: frequent travelers between Salt Lake City (SLC) and San Francisco (SFO) might judge that route to be shorter than an equally long route between Baltimore and New York City (NYC) if they are less familiar with the latter.

Another lesson is the potential for synergy between language models and other modalities, such as vision. If language models can develop spatial representations that align with those learned by vision models, it could enable more seamless integration and transfer of knowledge between these modalities. A language model that "speaks the same spatial language" as a vision model could, for example, more effectively guide a robot's navigation based on visual input.

As language models scale up, their spatial representations may converge towards a shared, foundational model of space and physics. This could lead to more consistent and reliable spatial reasoning across different language models and applications.

Language models may indeed be learning to capture and reason about spatial concepts through exposure to language. As these models continue to scale up and converge, they may become increasingly capable spatial reasoners.
