Learning Is Human Work
We are moving faster — but maybe in the wrong direction. In the age of Artificial Intelligence, as technologies promise greater efficiency and precision, we are quietly dismantling the way humans have always learned: by working alongside those with more experience, practicing, making mistakes and trying again. “You don’t become an actor by watching movies,” warns Matt Beane, a leading expert on workplace learning and author of The Skill Code (published in Italy by Egea). A faculty member at UC Santa Barbara, Beane argues that we are not just losing skills — we are losing the ability to learn itself. And he offers a sharp reminder: “The future belongs not to those who can work fastest alongside AI, but to those who can learn fastest with each other.” His call is urgent: we must redesign our technologies and institutions before efficiency erodes what makes human intelligence thrive.
Professor Beane, what sparked your decision to write The Skill Code? Was there a specific moment, story or experience that made you realize we're losing something fundamental in how we learn?
The turning point came during my fieldwork in robotic surgery operating rooms. I watched Kristen, a talented surgical resident, struggle helplessly as her attending surgeon operated a thousand-pound robot from fifteen feet away. She was relegated to watching, essentially becoming a spectator in her own training. Then I met Beth, another resident in the same program who was thriving. The difference wasn't talent or background; it was that Beth had figured out how to learn despite the system, not because of it.
That contrast haunted me. Here was cutting-edge technology that promised better outcomes for patients, but it was quietly dismantling one of humanity's oldest and most effective learning mechanisms: the expert-novice bond. I realized we were facing a massive, largely invisible crisis in skill development that would affect every profession touched by intelligent technology.
Your book begins with a vivid scene — a tinsmith and his apprentice — and later moves through robotic surgery rooms and e-commerce warehouses. What do these seemingly unrelated worlds have in common?
They all depend on the same fundamental learning architecture that humans have relied on for millennia: novices working alongside experts, gradually taking on more complex challenges in a relationship built on trust, respect and care. Whether you're learning to shape metal, perform surgery or optimize warehouse operations, the core pattern is identical.
What's fascinating — and alarming — is how intelligent technologies disrupt this pattern in remarkably similar ways across completely different domains. The robot in surgery, the AI in law firms, the algorithms in warehouses — they all create the same problem: they make experts so efficient that novices get pushed to the sidelines. The tinsmith's apprentice gets hands-on practice; the surgical resident watches from across the room.
You identify three essential "building blocks" of skill development: challenge, complexity and connection. Which of these do you think is most at risk in today's workplaces?
Connection is definitely the most vulnerable, and that's what makes the current situation so dangerous. Challenge and complexity can sometimes be engineered back into work, but connection — the human bond between expert and novice — is incredibly fragile and hard to rebuild once it's broken. When an expert can accomplish their work faster and more efficiently with AI assistance, the natural incentive is to do exactly that. Why slow down to involve a struggling novice when the algorithm never makes mistakes and works at superhuman speed? The expert's productivity soars, but the novice becomes invisible. Without that connection, there's no one to provide the scaffolding that makes challenge and complexity productive rather than overwhelming.
How exactly are intelligent technologies disrupting the transmission of skills between experts and novices, often in subtle, unnoticed ways?
The disruption is so subtle because it doesn't feel like a loss — it feels like pure gain. A senior lawyer reviews documents ten times faster with AI assistance. A surgeon operates with unprecedented precision using a robot. A banker analyzes markets with algorithmic tools that junior staff could never match. But here's what we miss: in the old system, that junior lawyer gained expertise by helping with document review. The surgical resident learned by handling increasingly complex parts of operations. The junior banker developed judgment by working through market analysis alongside their mentor. When intelligent technology makes the expert self-sufficient, these learning opportunities evaporate. The cruel irony is that everyone involved — experts, organizations, even the novices themselves — often sees this as progress. The work gets done faster and better, costs go down and efficiency metrics improve. But we're systematically eliminating the learning pathway that created those experts in the first place.
Is there a particular example from your field research that captures this disruption clearly? I'm thinking of the story of Kristen, the surgical resident.
Kristen's story perfectly captures this hidden tragedy. She's brilliant, hardworking, from a top medical school — everything you'd want in a surgeon. But when she encounters robotic surgery, the technology makes her attending so capable that there's literally no room for her to learn. She spends four-hour procedures watching from the sidelines, maybe getting fifteen minutes of low-stakes cutting time while her mentor barks corrections across the room. When Kristen finally operates independently, the results are devastating: what should take three hours takes seven, patients lose ten times more blood and everyone in the OR is tense. As her chief of surgery told me with brutal honesty: "These guys can't do it. They haven't had any experience doing it. They watched it happen. Watching a movie doesn't make you an actor."
That quote has stayed with me because it captures the fundamental delusion we're living under: that observation equals learning, that efficiency equals progress, that technology inherently makes us better.
One of the most compelling parts of the book is your discussion of "shadow learners" — people who manage to learn despite institutional barriers. What can we learn from these deviant figures?
Shadow learners are our canaries in the coal mine: they show us both the extent of the problem and the path forward. They've figured out how to restore challenge, complexity and connection in environments that systematically eliminate these elements. Take Beth, the surgical resident who thrived. She didn't accept the formal training pathway. Instead, she cut anatomy labs to spend time in actual operating rooms, landed research roles that gave her hands-on robot experience and spent hundreds of hours analyzing surgical videos when she should have been sleeping. By the time she entered formal residency, she looked competent enough that attendings trusted her with real responsibility. What shadow learners teach us is that the three Cs — challenge, complexity, connection — are more fundamental than any particular institutional arrangement. When formal systems fail, determined individuals will find underground ways to access these essential elements of learning. Their tactics give us a blueprint for designing better systems.
In a way, Beth — the surgical trainee who thrived by breaking the rules — is a heroic figure, but also a warning. Should we really depend on exceptions to fix systemic training failures?
Absolutely not; that's exactly the trap we need to avoid. Beth's success is inspiring, but it's also profoundly unjust. She succeeded through a combination of exceptional determination, rule-breaking that could have ended her career and, frankly, luck. Only one in eight residents in her program managed similar success. What about the other seven? Shadow learning solutions are "semi-ethical hacks that wouldn't scale," as I put it in the book. Beth's tactics strained the bounds of propriety, required enormous personal risk and operated in isolation from official channels. Imagine if she could have been open about her learning strategy, if attendings could have properly guided her rule-breaking, if institutions could have learned from her innovations.
The real solution isn't to celebrate individual heroics — it's to systematically redesign our institutions and technologies to support the kind of learning that shadow learners fight so hard to achieve. We need to democratize access to effective learning, not depend on a few exceptional individuals to overcome systemic failures.
You point out that most training investments still go into formal education rather than into cultivating expert-novice relationships. What should companies do differently to reverse this trend?
Organizations need to flip their entire perspective: stop treating learning as an expense to be minimized and start treating it as an investment to be maximized. Right now, most companies see novice involvement as inefficiency: why have a junior person slow down the expert when AI can help them work faster? The answer is to start measuring and rewarding skill transmission alongside productivity. Imagine if experts' performance reviews included how effectively they developed novices. If project timelines built in learning objectives. If technology implementations were evaluated not just on efficiency gains, but on their impact on capability building. Practically, this means creating what I call "learning-rich" work arrangements: pairing experts with novices on challenging projects, designing AI tools that enhance rather than replace human collaboration, and building career advancement systems that recognize mentoring excellence. Some companies are already experimenting with "reverse mentoring" programs, where junior employees teach seniors about new technologies while learning domain expertise in return.
Have you come across any inspiring examples of AI or robotics being used not to replace skills, but to enhance their development?
Yes! One of my favorite examples comes from the contrast between bomb disposal robots and surgical robots. Both are sophisticated technologies, but they've evolved in completely different directions for skill development. Bomb disposal robots remain deliberately "clunky": they require human skill, judgment and experience to operate effectively. A novice can't just jump in and defuse bombs; they need extensive mentoring from experts who work alongside them, building capability gradually. The technology amplifies human skill rather than replacing it. In my current research, we're developing AI systems that help surgical residents learn faster by intelligently curating and organizing surgical videos, allowing experts to give assignments and feedback in new ways. Instead of replacing the expert-novice bond, the AI strengthens it by providing new channels for challenge, complexity and connection. The key insight is that we can design technology to require and develop human skill rather than eliminate it. But this requires intentional choices about how we build and deploy these systems.