Collective Intelligence

As problems become more complex and interconnected, collective intelligence must evolve to keep pace, likely requiring AI systems as indispensable partners.


Key Points:

  • Generative AI can aid human exploration of unintuitive solution spaces, but must be designed to facilitate cooperative social problem solving.
  • Enhanced individual intelligence through AI does not guarantee collective wisdom—in fact, it may hinder it through polarization and loyalty to individual perspectives.
  • AI could help groups synthesize diverse views and map dynamics, acting as a neutral facilitator and simulator, but care must be taken not to stifle human agency.
  • Privacy is a concern as individual control diminishes—group benefits may impose individual costs. Concentrating power in opaque AI systems makes true collective intelligence difficult to realize.
  • Balance is critical between structure and self-organization, memory and exploration, digital acceleration and tangible action. Diversity must be embraced without losing the ability to synthesize.
  • As AI approaches human reasoning, "rooted intelligence" like nuanced understanding of contexts, cultures, and relationships becomes irreplaceably human. Only humans are accountable.
  • Social solutions require designing systems that foster meaningful connections and human accountability. AI's limitations in applying knowledge must be considered.
  • Trust and empathy become vital in discovering diverse perspectives. The goal is enabling groups to both challenge and belong through a mind for our minds.

Problems are growing more complex, demanding greater collective intelligence. The internet has interconnected us, fostering networks of influence and amplification. We grapple with machines and AI systems, seeking ways to harness their power for enhanced cognition and productivity.

In Big Mind, Geoff Mulgan argues that a comprehensive theory of collective intelligence—one which includes machines—must tackle the multidimensionality of our choices. As choices encompass increasingly diverse perspectives, varying influence, potential conflicts, and timing considerations, our approach to problem-solving must evolve in lock-step with the agency of machines and the complexity of global problems. In fact, it’s likely that machines are indispensable to solving our biggest problems, so where do we put them to work first?

AI excels at low-dimensional choices, and Mulgan suggests it can tackle more complex choices as it advances. Generative AI already aids humans in exploring vast, unintuitive solution spaces in novel and, frankly, alien ways. But now it must be taught to share and to teach humans to do the same. Our large and expensive brains evolved for social problem-solving, so we will have to design our AI to account for our paradoxical approach to it—sometimes cooperative, sometimes competitive. By providing multiple interpretations and perspectives, AI could facilitate group problem-solving, but we will have to be imaginative in our designs.

Increased individual intelligence doesn't guarantee enhanced collective intelligence. AI could inadvertently foster dynamics that hinder collective wisdom, as evidenced by algorithm-driven polarization and filter bubbles. We must ask: How can we ensure AI doesn't make us individually smarter, yet collectively stupid?


For a start, AI assistants shouldn't be excessively loyal. A personal AI, like ChatGPT, might amplify our biases and scale up flawed ideas, promoting information cascades in groups. It could be overly protective, sheltering individuals from feedback and criticism in self-serving ways. Groups of individuals who are all being individually optimized by their own overly loyal AI will likely fail to see value in the collective.

AI can assist in synthesizing diverse perspectives in complex group decisions. By facilitating thesis-antithesis debates and mapping power dynamics, AI can help humans avoid overlooking crucial factors. It can act as a neutral decider and future simulator, empowering the group without stifling it.
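
To make the facilitation idea concrete, here is a minimal sketch of a thesis-antithesis-synthesis loop. The `llm` helper is a hypothetical stand-in for whatever text-generation API a group might use; nothing here comes from Mulgan, it is simply one illustrative way such a facilitator could be structured.

```python
# A hypothetical AI facilitator: restate each member's view fairly, force the
# strongest objections into the open, then synthesize. Illustrative only.

def llm(prompt: str) -> str:
    """Stand-in for a call to any text-generation model API."""
    raise NotImplementedError("plug in a real model call here")

def facilitate(question: str, perspectives: dict[str, str]) -> str:
    views = "\n".join(f"- {who}: {view}" for who, view in perspectives.items())

    # Thesis: a neutral restatement of every position before any is judged.
    thesis = llm(f"Question: {question}\nRestate each view neutrally:\n{views}")

    # Antithesis: the strongest objection to each view, so no member's AI
    # simply amplifies its owner's bias.
    antithesis = llm(f"State the strongest objection to each view:\n{thesis}")

    # Synthesis: map genuine agreements, open conflicts, overlooked factors.
    return llm(
        "From the views and objections below, map agreements, open "
        f"conflicts, and overlooked factors:\n{thesis}\n\n{antithesis}"
    )
```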

If AI presents the "correct" decision and we accept it, do we maintain our roles as active decision-makers or devolve into machine-enabled social loafers? On the other hand, if AI is too didactic and its design overly complete, might our meta-agency drive us to reject optimal solutions simply because we can? Mulgan posits that the most effective designs incorporate a degree of incompleteness or self-effacement, as humans excel amidst imperfection, noise, and randomness. Error and imperfection play a critical role in progress.

Human evolution has conditioned us to navigate challenges together. While we may desire easy decisions, we flourish through desirable difficulty. Will AI emulate the way we treat captive tigers—offering cognitively enriching toys that conceal essential nourishment? Tackling tough challenges together is life-affirming. This fact about the human collective is perhaps the most optimistic source of hope. In the absence (hopefully) of an AI-enabled oppressive state, we will always strive to work together.

AI can help address biases both individually and in groups, but privacy concerns arise in group settings. Individual control over AI's insights diminishes, turning privacy choices into surveillance decisions. Group benefits can indeed come at individual costs, reflecting the inherent trade-offs in society. While statistical discrimination may benefit many, it can harm a few. The crux lies in AI's potential to concentrate power among the few, negating benefits to the majority. If AI remains opaque, non-public, inscrutable, and unaccountable, it's challenging to envision how it can genuinely enhance collective intelligence without merely serving as a tool for a technocratic elite. Cynically, one might wonder if true collective wisdom can ever be attained under such conditions.

In the dynamics of collective intelligence, balance is key. Too much value placed on memory may impede exploration, while a singular focus on coherence might stop us from challenging the status quo. Digital creativity is effortless and at our fingertips in seconds, but we must not forget the importance of tangible, real-world action in our learning process. AI, in its dual role, both constrains and liberates—exposing biases and supporting equitable reasoning, yet possibly surveilling conversations and muting certain voices. The challenge lies in designing AI that embraces diversity and inclusion without collapsing under the weight of multitudes. Decision makers in diverse organizations know this—it is difficult to strike the balance between gaining multiple perspectives and sliding into indecision through an inability to synthesize disparate values.

Recent research in collective intelligence offers insights on the interplay between networks, diversity, and decision-making. Networked, diverse groups excel at gathering information, but they may falter when it comes time to decide. Self-organization appears attractive, but beneath its façade, humans need structure to land a multidimensional decision. Can AI guide us in higher conceptual spaces? If wisdom is the goal, AI's perspective may be as valuable as anyone else's.

A prevalent pattern in human cognition is our inclination toward binary thinking—either/or, us/them. Enhanced intelligence emerges when individuals and groups embrace multiple or competing perspectives simultaneously, a concept long acknowledged by philosophers, scientists, and the creative arts. Dialectical reasoning, however, can be challenging and may feel unfamiliar. Often, we need to construct conditions that compel us to entertain alternatives and uncertainty. The scientific method, arguably our most significant collective invention, does just that: it demands that alternative views meet rigorous standards and enlists peers to do the hard work of radical challenge.

AI could potentially help us develop an improved version, a "scientific method V2," less susceptible to manipulations like p-hacking and detrimental incentives such as publication bias. AI's capacity for rapid acceleration and expansion might fundamentally transform the way we evaluate evidence. Is there a potential design paradigm for AI that helps us reconceptualize collective fact?
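
To see why mechanical safeguards matter, consider a toy illustration of the p-hacking problem the essay names: test enough hypotheses on pure noise and "significant" findings appear by chance alone. The correction shown here (Bonferroni) is just one example of a check a "scientific method V2" could enforce automatically; the simulation is illustrative, not drawn from Mulgan.

```python
# Toy demonstration of p-hacking: under a true null hypothesis, p-values are
# uniform on [0, 1], so roughly 5% of tests look "significant" at alpha = 0.05.
import random

random.seed(0)
TESTS, ALPHA = 100, 0.05

p_values = [random.random() for _ in range(TESTS)]  # pure noise, no real effect

naive = sum(p < ALPHA for p in p_values)
bonferroni = sum(p < ALPHA / TESTS for p in p_values)  # corrected threshold

print(f"'Significant' findings from noise alone: {naive} of {TESTS}")
print(f"After a Bonferroni correction: {bonferroni} of {TESTS}")
```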

As AI approaches human-like reasoning, what unique human intelligence will we cherish? Mulgan highlights that the dazzling surge in machine intelligence has predominantly been context-free. In this brave new world, which human qualities will emerge as irreplaceable? Day by day, AI erodes our protections on what constitutes "uniquely human." Imagination, creativity, empathy, reasoning: these are all capacities measurably developing in large, multi-modal models, capacities that only a few years ago were considered the sole domain of humans for the foreseeable future. We now have a base level of human-like artificial intelligence that people are excited to integrate into everything they do. Collectives need an equivalent.

Even with its vast textual and visual knowledge, AI may still struggle to apply that information to the situation you grapple with right now, or to the person who stands in front of you today. Part of your knowledge is what a decision will mean for that person, how it may change the nature of your relationship and unique connection. Your knowledge becomes intertwined with your judgment—judgment not given to a machine. How can we use AI to better navigate the tension between offering a different perspective and risking being thrown out of the group?

Mulgan terms this context-sensitive understanding "rooted intelligence," encompassing the nuances of individuals, cultures, histories, and meanings. Rooted intelligence becomes enmeshed with human connection. It becomes a kind of contract for how we choose to treat each other, our obligations, and our personal reputations. Rooted intelligence is uniquely human intelligence. Only humans can integrate, synthesize, balance, and judge. Only humans are accountable.

To foster collective intelligence, we must design systems that enable meaningful human connections, accounting for the limitations of machine predictions. Trust and empathy become vital in discovering one another's perspectives. Our most pressing problems have social solutions. Can a mind for our minds help us resolve social paradoxes and enable us to both challenge and belong?
