The Competence Crisis That Wasn't
Why we insist on mourning skills we never wanted to keep
In 1907, lamplighters in New York walked off the job, leaving 25,000 gas lamps dark. In Belgium, their counterparts took a more direct approach: they smashed the new electric bulbs. Picture grown men in the dark, swinging at glass, protecting the sacred art of making fire happen from a technology that made fire irrelevant.
They knew, with the certainty of existential dread dressed up as principle, that humanity was about to lose something essential.
The lamplighters were right about the loss. They were just wrong about what was being lost.
This is the pattern we can’t stop running. Every cohort produces confident predictions that the current wave of mechanization will cause a measurable, catastrophic collapse of skill. Every era watches those predictions mostly fail to come true. And yet every age makes identical forecasts about the next technology anyway.
The question isn’t whether AI will make us stupid. It’s why we need to believe it will. What does our persistent need to invent a competence crisis reveal about us?
The GPS alarm peaked around 2015, when studies began showing that heavy navigation app users performed worse on spatial memory tasks. Headlines proliferated: GPS was literally shrinking our brains.
The research was real. The framing was hysteria.
A meta-analysis of 23 studies failed to find the smoking gun. GPS users show reduced environmental knowledge, but they still find their destinations just fine. People hadn’t lost the ability to get places. They’d stopped maintaining mental maps of routes their phones remembered better.
Call it reallocation, not atrophy. “Atrophy” implies decay. What the research showed was efficiency. The brain didn’t deteriorate; it delegated. It closed the branch office and let headquarters handle the work remotely.
But “your brain is efficiently reallocating cognitive resources” doesn’t sell. It doesn’t let anyone mourn. And we desperately need to mourn.
The calculator terror of the 1970s ran the same playbook. Educators were certain catastrophe loomed. Stephen Willoughby, then president of the National Council of Teachers of Mathematics, warned that students would “forget how to think on their own or solve simple problems without their pocket calculator.” Teachers described the devices as “devilish little desk-goblins.”
Devilish little desk-goblins. That isn’t concern about numeracy. That’s exorcism language. That’s adults confronting small machines that could do in seconds what they’d spent years learning to do slowly, and reaching for the vocabulary of spiritual warfare.
What actually happened contradicted the panic entirely. A meta-analysis spanning decades found “no evidence” that calculators eroded mathematical thinking. Students using them performed as well or better on conceptual problems, even when the devices were taken away. The tools hadn’t replaced thinking. They’d changed what thinking was for.
The desk-goblins turned out to be benign. The adults calling them devilish were mourning their own irrelevance, terrified that a pocket chip could invalidate a lifetime of effort.
Consider who profits from the competence-collapse narrative. It is a remarkably versatile script: tech companies read it as advertisement, schools as a mission statement, commentators as job security.
Tech companies, obviously. Every warning that AI will make workers obsolete doubles as a marketing flex for AI’s power. Fear sells subscription seats on the Titanic.
Meanwhile, Microsoft researchers publish studies warning that AI tools “can and do result in the deterioration of cognitive faculties that ought to be preserved.” Which faculties, exactly? Preserved according to whom? One imagines the committee meetings. Executives in glass rooms debating which cognitive capacities require preservation for human flourishing. Then breaking for lunch to plan how to sell the same tools to everyone who’ll pay.
The Victorians who worried train travel would cause women’s uteruses to fly out brought similar confidence to similarly unfounded claims. They knew which physical capacities required preservation. History remembers them accordingly.
Educational institutions benefit too. If technology threatens to make expertise irrelevant, schools become the last defenders of essential human capacities. These capacities, conveniently, require expensive tuition to transmit. The discourse flatters institutions as guardians of cognition rather than credentialing factories running a relevance crisis of their own.
None of this means the concerns are fake. It means they are useful, and usefulness is exactly what should make us suspicious: are we examining a real phenomenon, or performing a ritual that serves other purposes?
The logical conclusion arrives in 2034. The Cognitive Preservation Act passes Congress. Citizens are required to complete eight hours of manual calculation per month, verified by blockchain. GPS-free navigation zones span entire downtowns; tourists wander lost, proudly, certificates of authentic wayfinding clutched in sweaty hands. “Unassisted Thinking” becomes a luxury brand. Wellness retreats offer “heritage cognition” packages where the wealthy solve long division by candlelight, then journal about the experience without autocorrect.
This sounds like satire because it is. But follow the current discourse far enough and you arrive at the same destination, delivered with complete bureaucratic sincerity. Every unease, given enough time, produces its own bureaucracy.
We are the era now. We are the ones watching tools absorb the things we learned to do, reading studies about cognitive offloading while offloading the reading to summary apps, worrying about attention spans in eight-second intervals between notifications.
We offload cognition. We’ve been doing it since we realized we could write things down instead of remembering them. The extended mind isn’t a theory. It’s a description of how we’ve always worked: a cognitive system that includes notebooks, smartphones, and AI assistants as functional components of thought itself.
The mind is a city, not a building. The self doesn’t end at the skull any more than a city ends at its original walls. It sprawls, incorporates, annexes. The question isn’t whether tools affect cognition. Of course they do. Writing changed how we remember. Photography changed how we perceive. The question is whether “cognition that now includes machines” is still yours.
When you think through an AI assistant, who is thinking? The extended mind thesis isn’t abstraction; it’s asking whether the self has edges, and if so, whether those edges are dissolving like shorelines into rising water. Not suddenly, but in ways you only notice when you look for where they used to be.
The honest answer requires acknowledging that sometimes the warnings are correct.
Commercial pilots provide the counterexample nobody wants to think about. In surveys, 77 percent of airline pilots report that their manual flying abilities have deteriorated due to automation. Only 7 percent believe their capabilities have improved.
The FAA found the data concerning enough to issue guidance urging airlines to let pilots hand-fly more often. Guidance. Not mandates.
When Air France Flight 447 fell from the sky in 2009, part of the disaster stemmed from pilots who had spent too little time manually controlling aircraft at high altitude. Their capacity for flying a plane in crisis didn’t reallocate. It degraded. And unlike forgetting how to fold a paper map, this degradation kills people.
So the pattern isn’t absolute. Talents that require ongoing physical practice and split-second embodied judgment can genuinely atrophy when automated systems take over. The question becomes: which proficiencies matter enough to protect, and who decides?
The cartographer offers a different answer. In 1990, cartography meant drafting tables, specialized inks, physical mastery of projection math. By 2024, cartographers work in Geographic Information Systems. They code. They integrate LiDAR data. They create interactive digital maps that would have been science fiction to their drafting-table predecessors.
The profession didn’t die. It underwent the “reinstatement effect”: digitization displaces certain tasks while creating new ones where humans have comparative advantage. Research suggests about half of all employment growth between 1980 and 2015 occurred in occupations where job titles or core tasks had fundamentally changed.
The mechanism is straightforward: skills don’t vanish because humans lose the capacity to hold them. They migrate to wherever they’re still useful. Whether the migration nets out as gain or loss is unanswerable unless you already know what you value and why.
There is something touching about our species-wide inability to learn this pattern. Every cohort arrives at technological change convinced that this time the warnings are real. This time the tools will hollow us out.
And then our children grow up neither hollow nor diminished, but different. They lack faculties we valued, possess capabilities we can’t name, and look at our anxieties with the polite incomprehension of people who have never known a world without the tools we fear.
But here’s what we don’t say: it’s not their mastery we’re worried about. It’s ours.
Each era’s competence alarm performs a function. It names a loss, articulates a fear of obsolescence, and allows us to mourn versions of ourselves that mechanization is making unnecessary. The lamplighter smashing electric bulbs wasn’t wrong that something was ending. He was wrong about what that something was.
What’s ending is a particular configuration of human value. We’re not watching skills die; we’re watching definitions of competence molt, leaving behind shapes that no longer fit who we’ve become.
The new configuration may be valuable, but it won’t be familiar, and humans are creatures who mistake the familiar for the essential.
The AI apprehension will follow the same arc. The average human in 2035 will navigate capably using tools that would alarm us today. And in 2045, a new technology will arrive, and the 2035 humans will warn that this time, finally, the warnings are real.
They probably won’t be. But we’ll keep making them, because the warnings aren’t really about abilities.
The lamplighters knew how to make light. That knowledge didn’t disappear when the profession did. It just stopped mattering in the same way. That’s what we’re actually afraid of.
Not that we’ll forget how to think. But that thinking (as we’ve understood it, as we’ve practiced it, as we’ve built our identities around doing it) will stop mattering in the same way.
We mourn imaginary capacity losses to avoid facing the real one. We warn about what our children won’t remember because we can’t admit what we’re afraid of: that our particular way of thinking has simply ceased to be a requirement.
The competence crisis that wasn’t keeps recurring because the identity crisis that is never gets named. This isn’t a story about cognitive decay. It’s a story about value.
And unlike capacity atrophy, this isn’t a question a meta-analysis can settle. No study will tell you whether to grieve it or get over it.