Key Points:
- Our early predictions of AI transforming work through language, reasoning, and perception advances are coming to fruition with large language models.
- But the interplay between human and AI capabilities remains fluid and unpredictable. Tasks, not full jobs, are increasingly delegated to AI as humans adapt.
- As cognitive tasks are outsourced to AI, human skills become central: determining which are needed and how they can complement AI's strengths.
- However, opaque boundaries between AI and human expertise risk amplifying automation bias if capabilities are overestimated.
- Workers are already independently adopting AI tools like ChatGPT for productivity gains, especially novice users. But using AI well requires nuanced mastery.
- Productivity metrics alone miss the point. Issues like demand saturation and distribution of gains matter hugely.
- Humans and AI form a complex adaptive system. Realizing benefits depends on continuous innovation and investment, not just automation.
- Multiple human roles exist in shaping AI's trajectory, like entrepreneurs devising novel applications. But benefits require humans to actively participate, not be pushed to the sidelines.
- The future remains unwritten. With vision, AI integration can unlock new possibilities that amplify shared potential. But only if human ingenuity remains integral, not minimized.
Back in 2016, we published pioneering research on machine employees, predicting AI's impact on work. We foresaw that breakthroughs in language, reasoning, and perception would bring a new era of human-machine collaboration.
Today, with large language models' impressive capabilities, that future has arrived. But clarity about automation opportunities remains elusive. It is still devilishly difficult to predict how technology will impact work. Humans still outperform machines in ambiguous and unpredictable situations, but the frontiers of creativity and reasoning are highly fluid. Instead of automating full occupations, specific tasks are outsourced to AI. And humans adapt in unexpected and sometimes weird ways.
Our predictions proved prescient, yet the nuances still surprise us. AI and human capabilities progress in a fluid, reciprocal manner. As machines absorb discrete tasks, we see new opportunities to combine complementary strengths. Traditional AI enabled data-driven decision making, while generative AI will enable a huge increase in AI-driven combinatorial ideation. Traditional AI led to debates focused on automation versus augmentation. But in the era of generative AI, skills become central: which capabilities are needed, why, and to what level, given the goals of the system.
As machines prove capable at cognitive tasks, it paradoxically becomes more difficult for humans to discern the machine's capabilities. The ever-changing, context-dependent, opaque boundaries between AI capabilities and uniquely human skills present a new risk: amplification of automation bias. Garbage creativity and fake advisories can cause harm at scale, across an entire company, before anyone notices.
Workers are simultaneously adopting and adapting generative AI, often without their managers knowing they have a whole new army of machine employees, ungoverned by company policy or brand norms. Many use tools like ChatGPT secretly, whether to gain an edge or to avoid reprisal. It's easy to see why: these tools are just so darn useful. Early research quantifies astonishing productivity gains, especially for less experienced staff. Novices improve up to 40% more than veterans when aided by AI.
However, mastering when and how to apply generative technology remains pivotal yet perplexing. People don't apply generative AI tools uniformly. Users adapt their application of large language models based on the task's complexity, their own skills, and their evolving comprehension. For instance, a financial analyst may rely heavily on an LLM's summary of a quarterly earnings report for a company she knows well, but treat its output with greater skepticism when assessing an unfamiliar one.
Just handing your employees a shiny new AI assistant won't automatically make them more productive. The real skill is knowing how and when to apply these tools effectively to different tasks. If someone uses the wrong strategy with their AI partner, their performance actually drops below what it would be if they didn't use the tool at all. And more perplexingly still, a little bit of training makes this outcome more likely. When AI is tightly embedded in workflows, its capabilities are restricted and predictable. But adopting untethered AI tools requires tolerance for some degree of experimentation and missteps. Rather than viewing errors as failures, leaders should frame them as opportunities for growth while setting appropriate limits. What types of mistakes are manageable versus high-risk? How much trial and error leads to mastery versus frustration?
We don't yet know enough about what it takes to master these tools, so we don't know enough about how to design them. The technology is moving so fast that researchers are still unraveling what optimal collaboration looks like as tasks and tools evolve.
And here's the real problem: an obsession with productivity misses the larger promise of generative AI. As Nobel laureate Robert Solow famously quipped, you can see the computer age everywhere but in the productivity numbers. Generative AI affects both the supply of and demand for creative output, and it is possible that demand for some human skills will become saturated by AI, collapsing the price for those skills and causing significant displacement. Responsible AI adoption includes a choice about labor versus capital, about people versus machines, about how the benefits of technology are shared. Distribution of gains matters as much as their magnitude.
Machines and humans form a complex adaptive system, one in which people need to be ready to seize emergent opportunities. Past innovations boosted productivity because entrepreneurs envisioned new possibilities, not just direct automation. But realizing this potential depends on innovation and investment in a distributed, self-organizing system.
In complex systems, the concept of emergence matters. The smartphone, for example, emerged from combining existing technologies and transforming the rules of the digital economy; people created an entirely new product category. Everyday use of high-capability generative AI will similarly spur unforeseen innovations if we cultivate the right conditions. What are the right conditions? We can't know for certain, but the biggest innovations will most likely happen at the edges of the network, at the lowest levels of the system: with workers in the field, coders in the weeds, scientists embedded in webs of expertise.
Humans play multiple roles in shaping how AI systems evolve. Entrepreneurs identify valuable interventions, like using generative design in manufacturing or legal services. Learners dive into unfamiliar terrain, gaining skills to pioneer AI applications in their fields. The future of work will be a story of humans actively participating, catalyzing progress rather than passively reacting. But this only happens if humans stay involved. Overvaluing automation is the path to stagnation. Hence our ongoing obsession with work and machines.