A crucial tension exists between designing automated systems, such as self-driving cars, to require no human intervention and the need for human expertise when AI fails unexpectedly.
Key Points:
- Generative AI enables non-experts to produce specialist-level work, such as programming, seemingly threatening the value of expertise.
- However, human strengths remain vital when AI falls short, presenting an inherent paradox between developing and requiring expertise.
- AI separates prediction from judgment, struggles to adapt, increases decision complexity, and alters how we learn, amplifying the ultimate need for human ingenuity.
- Examples across aviation, medicine, and driving reveal a paradoxical risk: AI can degrade expertise while making it more indispensable than ever when the technology stumbles.
- Updating notions of intelligence in the AI age means valuing emotional aptitudes like ethics and hope alongside analytical skills.
In late 2018, protein folding researcher Mohammed AlQuraishi experienced an emotional rollercoaster. He was awestruck yet disheartened when DeepMind's AlphaFold outperformed human scientists at predicting protein structures. Despite appreciating this breakthrough, AlQuraishi couldn't escape a nagging feeling—was machine knowledge eclipsing human ingenuity?
He observed a shift in what defined a revered scientist. Traditional mastery of discovery and "Eureka!" moments was being superseded by skill in manipulating data to extract machine-generated insights. Expertise now required fluency in both core disciplines and AI techniques to uncover new knowledge.
This transformation prompts an uneasy question: What happens when developing expertise depends on AI that could simultaneously hinder its advancement? Aviation reveals these tensions between human mastery and automated systems.
In 2008, Captain Kevin Sullivan's flight from Singapore violently plummeted when its flight computer malfunctioned. Although passengers were injured, Sullivan averted disaster by instinctively intervening to fly the aircraft manually. His extensive piloting experience equipped him to act when automation failed.
But such expertise is increasingly rare as pilots rely on sophisticated fly-by-wire systems. Regulations prohibit hand flying at cruising altitudes, depriving pilots of critical tactile experience. Scripted training focuses on expected scenarios, yet unpredictable anomalies bring down planes. While technology enables safety improvements, over-dependence leaves humans unprepared to act in the gaps.
This paradox intensifies as society cedes more decisions to AI. How do we balance near-complete automation with the undeniable need for human expertise when systems falter unexpectedly? Can expertise survive in a world designed to eliminate the need for it?
The days of prizing expertise in rare, high-pressure situations appear numbered. Take the high-stakes integration of self-driving cars. Successfully adopting autonomous vehicles demands designing systems where human intervention is never necessary.
Now consider how generative AI is empowering non-experts. Large language models help novices build websites as fast as expert coders. When technology enables amateurs to perform specialist work, why pay a premium for expertise?
But it's not so simple. Novice coders still hit roadblocks that require expert assistance. And automation frees experts to focus on big-picture goals like addressing bias in data or meeting user needs. As the nature of programming evolves, different human strengths may be valued. And what we value will be a choice.
The fluid tension between developing expertise and depending on AI that may hinder its development poses an uneasy paradox. On one hand, increased automation could devalue human mastery. On the other, as AI threatens expertise, it amplifies its ultimate necessity.
For example, AI separates outcome prediction from contextual judgment, a distinctly human skill (not to mention a human accountability). AI struggles to adapt when environments shift in unforeseeable ways. It adds complexity to decision processes and alters what is expected of humans when it fails. And it fundamentally changes learning itself.
As AI permeates medical diagnosis, doctors may deskill in detecting subtle symptoms, relying instead on algorithmic triage. Yet when the technology stumbles on atypical cases, human expertise becomes critical. AI cannot replicate the tacit knowledge physicians glean from years of encounters, improvisation, and navigating critical uncertainty. Herein lies the paradox: as AI gains ground, human judgment grows ever more indispensable.
Or consider how GPS navigation shapes drivers' spatial reasoning. Studies show that reliance on algorithmic routing erodes intuitive orientation abilities over time. But what happens when systems glitch and spatial skills, not map apps, suddenly matter? The perils of our collective loss of expertise emerge. Yet driving still demands human ingenuity to handle unpredictability, so mastery retains its ultimate value even as technology undermines how it develops in the first place.
This paradoxical tension between developing expertise in the AI age and the way AI shapes expertise recurs across contexts. Aviation, medicine, driving, law—AI propels breakthroughs yet risks diminishing hard-won human proficiency. Like any paradox, resolving it requires shifting it into a different frame altogether—the collective expertise of humans, teams, and machines.
Collective skill lies in harmonizing strengths—harnessing AI's expansive power while cultivating distinctly human gifts like resourcefulness, ethics and accountability. If framed as a collaboration, humans and AI can advance expertise together rather than in competition. Not that this will be easy, fast, or intuitive.
It demands updating notions of intelligence as technology shifts the goalposts for us. And collective progress depends on valuing both analytical and emotional aptitudes. Wisdom entails judgment, not just computation. Our expertise was never confined to raw smarts—it encompasses hope, courage and conscience. With vision, we can guide technology as an elevating force. The future, as always, reflects the choices we make today.