AI is changing the traditional apprenticeship model, altering how we learn and develop skills across industries. This creates a tension between short-term performance gains and long-term skill development.
Dr. Matt Beane, an assistant professor at UC Santa Barbara and author of "The Skill Code," has studied this change. His research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. Much of Beane's work has involved observing the relationship between junior and senior surgeons in the operating theater. "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship," he told us.
"In robotic surgery, the junior surgeon now sets up a robot, attaches it to a patient then heads to a control console to sit there for four hours and watch the senior surgeon operate." This scenario, repeated in hospitals worldwide, epitomizes a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices. See one, do one, teach one, is becoming See one, and if-you're-lucky do one, and not-on-your-life teach one, Beane writes.
Beane argues that three key elements are essential for developing expertise: challenge, complexity, and connection. "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?" Struggle matters in the learning process: it doesn't just build skills, it builds trust and respect with others, a critical part of how the entire system of human expertise works.
As AI makes information more accessible, some experts, like David Blake, CEO of Degreed, argue that the challenge isn't just acquiring knowledge but intentionally focusing our learning efforts. Blake's research suggests the half-life of skills has collapsed from 17 years to just 2.5 years, which drives the need for a more deliberate approach to skill development even as AI tools make information more readily available. "Now that information and education are abundant, accessible, and free, curiosity can lead to distraction. What correlates most with positive learning outcomes in adult lifelong learners now is intentionality," he told us.
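To get a rough sense of what that collapse implies, consider the standard half-life formula (our back-of-the-envelope illustration; Blake's exact model isn't specified). With half-life $h$, the share of a skill's relevance remaining after $t$ years is $R(t) = 0.5^{t/h}$:

$$
R(5) = 0.5^{5/17} \approx 0.82 \qquad \text{versus} \qquad R(5) = 0.5^{5/2.5} = 0.25
$$

Five years out, roughly 82% of a skill's value would remain under the old half-life, but only 25% under the new one.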
To learn effectively, one must accept some degree of inefficiency, risk, and temporary decline in performance. With AI, individuals can achieve significant gains, making it hard to ignore its benefits. However, this often comes with a reduction in the learning rate and overall capability of the system. How we balance these tradeoffs—at school, university, and at work—is a new kind of challenge.
A recent study conducted in a high school setting offers evidence of this tension. The research, led by Hamsa Bastani and colleagues, found that while access to GPT-4 significantly improved students' performance on practice problems (by 48% for a basic GPT interface and 127% for a tailored tutoring interface), it actually harmed learning outcomes once that assistance was removed: students who had relied on the basic interface went on to perform worse on their own.
This finding aligns with Beane's observations about the potential pitfalls of AI in skill development. Using AI straight out of the box reduces the struggle in the work, which eliminates the challenge. That ease also means fewer encounters with the incidental parts of the work you would otherwise learn through inefficiency, diminishing the complexity. And it reduces both the opportunity and the necessity to connect with other people.
This tension between performance and learning creates perverse incentives across various industries. In medicine, for instance, a surgeon might achieve better immediate results using AI-assisted tools, but at the cost of developing the deep expertise needed to handle complex, unforeseen situations. In software development, junior programmers might rely heavily on AI coding assistants to boost their productivity, but miss out on developing a fundamental understanding of programming principles.
Beane highlights this systemic issue: "There's a patient under anesthesia, and every minute increases both the hospital's costs and the patient's stroke risk. In this situation, the pressure to be efficient means novices aren't allowed to practice on the patient." This pressure for efficiency and safety can overshadow the need for skill development, creating a system where individual performance improves at the expense of collective, long-term expertise.
The implications of this shift extend far beyond individual careers. As Beane points out, "In this situation, you don't just maintain your skill level. Instead, you can gradually lose skills even as your results improve, because you're relying on AI rather than developing your own expertise."
This de-skilling at a systemic level could lead to a workforce that's highly efficient with AI assistance but lacks the deep expertise to innovate, handle complex problems, or even effectively oversee the AI tools it relies on.

Yet the integration of AI into skill development isn't inherently detrimental. The study by Bastani et al. found that when AI tools were designed with specific safeguards to encourage learning, such as providing incremental hints rather than full solutions, the negative effects on learning were largely mitigated.
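To make the idea of such a safeguard concrete, here is a minimal sketch of an incremental-hint scaffold. It is our illustration of the general pattern, not the actual interface from Bastani et al.; the class name, hints, and thresholds are all hypothetical.

```python
class HintScaffold:
    """Hypothetical safeguard: release one incremental hint per failed attempt,
    and only show a worked solution after every hint has been exhausted.
    Illustrative only, not the design used in the Bastani et al. study."""

    def __init__(self, problem: str, hints: list[str], solution: str):
        self.problem = problem
        self.hints = hints          # ordered from gentle nudge to near-complete setup
        self.solution = solution
        self.hints_given = 0

    def respond(self, attempt_was_correct: bool) -> str:
        """Return feedback for the learner's latest attempt."""
        if attempt_was_correct:
            return "Correct. Now try explaining the method in your own words."
        if self.hints_given < len(self.hints):
            hint = self.hints[self.hints_given]
            self.hints_given += 1
            return f"Not quite. Hint {self.hints_given}: {hint}"
        # The full solution appears only after the learner has struggled through every hint.
        return f"Worked solution: {self.solution}"


if __name__ == "__main__":
    scaffold = HintScaffold(
        problem="Solve 3x + 7 = 22",
        hints=["Isolate the term containing x.", "Subtract 7 from both sides.", "Divide both sides by 3."],
        solution="3x = 15, so x = 5",
    )
    print(scaffold.respond(attempt_was_correct=False))  # Not quite. Hint 1: Isolate the term containing x.
```

The point of the design is that effort precedes disclosure: the tool preserves some of the struggle that, in Beane's terms, builds challenge and complexity back into the work.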
Beane suggests that the solution lies in reimagining apprenticeship for the AI age. This might involve:
- Designing AI tools that preserve struggle: "We must be cautious when deploying generative AI to ensure humans continue to learn critical skills," Beane emphasizes. This could mean creating AI assistants that challenge users rather than simply providing answers.
- Creating "impossible challenges": Beane encourages using AI to push learners beyond their current abilities. By setting his graduate students tasks they couldn't complete without AI assistance, he hardwired the necessary struggle into the assignment. "I assumed they didn't really know how to code. And I knew, because I did it myself, that it took two days to code a web app to run Monte Carlo simulations for predicting project durations using statistical simulation." This pushes students to their limits and shows how AI can enhance learning by enabling complex tasks they couldn't manage alone (a minimal sketch of that kind of simulation follows this list).
- Facilitating human connections: "How could I get connected with a mentor in real time that I would have never possibly even been matched with before, by AI itself? We can do that now," Beane proposes, pointing to potential positive applications of AI in reshaping mentorship.
- Systemic redesign: Organizations and educational institutions need to rethink their processes to ensure that AI enhances rather than replaces human skill development. This might involve creating new metrics for evaluating skill progression that go beyond mere task completion.
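For readers curious what Beane's "impossible challenge" might involve, here is a minimal sketch of a Monte Carlo project-duration estimate. The task names, estimates, and choice of triangular distributions are our assumptions for illustration; his actual assignment also required wrapping the simulation in a web app.

```python
import random

# Illustrative (optimistic, most likely, pessimistic) estimates in days; these
# tasks and numbers are hypothetical, not taken from Beane's assignment.
TASK_ESTIMATES = {
    "requirements": (2, 4, 8),
    "design":       (3, 5, 10),
    "build":        (8, 12, 20),
    "testing":      (4, 6, 12),
}

def simulate_total_duration(n_trials: int = 10_000) -> list[float]:
    """Sample the total project duration by drawing each task from a triangular distribution."""
    totals = []
    for _ in range(n_trials):
        total = sum(
            random.triangular(low, high, mode)  # random.triangular takes (low, high, mode)
            for low, mode, high in TASK_ESTIMATES.values()
        )
        totals.append(total)
    return totals

if __name__ == "__main__":
    durations = sorted(simulate_total_duration())
    median = durations[len(durations) // 2]
    p90 = durations[int(len(durations) * 0.9)]
    print(f"Median estimate: {median:.1f} days; 90% of runs finish within {p90:.1f} days")
```

Even a stripped-down version like this leaves plenty of genuine struggle for a novice, which is exactly the point: the AI helps them reach a result they couldn't produce alone, while the difficulty of the task keeps the learning intact.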
Beane notes, "We don't seem to be asking the question: what are we giving up for a quick saccharin hit of productivity? This is a serious concern." Innovators need to build systems that harness AI's capabilities while still developing human skills, ensuring that the pursuit of efficiency doesn't come at the cost of mastery and lasting expertise.