The Sorting
Some People Will Think. Everyone Else Will Perform Thinking. The Sort Is Happening Now.
In 2026, the University of Edinburgh will graduate its first cohort who never wrote an undergraduate essay without AI assistance. Not because they cheated; by second year, nobody was calling it cheating anymore. By third year, the university stopped pretending to care. By finals, the conversation shifted from “how do we detect this” to “how do we assess what we actually value” before dying quietly because nobody could agree on what that was.
Those graduates enter a job market with the same problem. They produce flawless reports, compelling presentations, sophisticated analysis of whatever you put in front of them. What they can’t do is think without the autocomplete. Not because they’re stupid, but because they never built the cognitive infrastructure that turns writing from performance into discovery. They never learned that writing is the tool you use to find out what you think, because the algorithm was always there to tell them what comes next.
This isn’t a crisis. It’s a reveal. The crisis was building the entire knowledge economy on performances we couldn’t distinguish from actual competence, then acting surprised when the performance became mechanically reproducible. We mistook the dress rehearsal for the play.
The systems that credential thinking gave up on measuring it.
The tell appeared in 2023. Teachers mixing student essays with ChatGPT outputs found their rubrics couldn’t distinguish them. This raised a question: what were the rubrics measuring, if not thought? The answer was compliance with arbitrary style conventions, a proxy for thought that was never the real thing. Universities quietly abandoned take-home exams, an accidental admission that those exams had only ever tested pattern matching. The Associated Press was running 3,000 AI-generated earnings reports each quarter because nobody could defend the cost of a human journalist converting numbers into sentences. This wasn’t an automation success story. It was an autopsy of what business journalism had become.
Detection tools flagged non-native English speakers at catastrophic rates because the algorithm mistook “simple syntax” for “artificial,” revealing that our measures of authentic writing were always proxies for class and access. We built an entire credentialing system on the premise that we could identify thinking by its surface features, then discovered the surface features were the only thing we’d actually been measuring all along.
We spent two decades optimizing education for rubric compliance. Then we fed those rubrics into training sets. Then we acted shocked, shocked! when the models learned to hit every criterion without generating a single original thought. The machines didn’t break the system. They exposed it as cargo cult assessment: elaborate rituals that measured not competence, but who could most convincingly perform the rituals.
What comes next isn’t a crisis either. It’s a market adjustment.
The job application process has always been performance theater. The cover letter is fiction, the resume is strategic omission, the interview is rehearsed authenticity. But there was a theory underneath: the performance, however artificial, demonstrated something. Competence. Work ethic. Cultural fit. The ability to bullshit convincingly, which, for most knowledge work, was the job description.
AI collapses the bullshit premium. Every applicant now has access to the perfect performance. The cover letter that hits every keyword. The interview prep that anticipates every behavioral question. The take-home assignment that demonstrates exactly the level of sophistication the rubric requires, no more and no less.
Hiring managers are discovering they can’t tell who can do the job because the audition was never a test of ability. It was a test of performance: who could act competent enough to get hired, then learn the actual job later. This is how most white-collar work has always functioned; we just had the decency not to say it out loud.
Some companies will pivot to “authentic assessment”: live problem-solving, work trials, proof of execution. These will select for people who can think under pressure without the algorithm. They’ll also select for people with the class background that taught them how to perform confidence in high-stakes scenarios, which maps almost perfectly onto existing privilege. The meritocracy will discover it was always a performance anyway, just one that required expensive coaching.
Other companies won’t pivot. They’ll accept the performance economy. Hire the algorithm-augmented candidates because the job is algorithm-augmented anyway. Why pay for authentic thinking when the work product is indistinguishable from what the AI produces and ships faster? This creates a bifurcation: jobs where thinking matters versus jobs where thinking was always just expensive performance. Guess which category includes most knowledge work?
The class dynamics are clarifying with almost comedic precision.
Elite private schools are quietly shifting pedagogies. Not away from technology, but toward “authentic cognition development.” Think handwritten essays. Socratic seminars. Slow reading of primary texts. Oral defenses. Assessments that force thinking to become visible, not just verifiable. They market it as “preparing students for AI collaboration.” This is technically true, in the same way teaching aristocrats Latin in 1750 prepared them to collaborate with the emerging merchant class. What they’re actually doing is training their students in the one skill the algorithm can’t replicate: thinking from scratch, under constraint, without a net.
Meanwhile, public education is being optimized for efficiency. AI tutoring. Algorithmic grading. Adaptive learning platforms that customize content but standardize outcomes. The pitch is equity, personalized education at scale, finally democratizing access to quality instruction. What’s actually happening is training an entire generation to be extremely good at working with AI systems, which means extremely good at performing competence the algorithm can verify, which means excellent preparation for jobs where human thinking became optional.
This is the 2027 version of tracking, except we can’t call it tracking because that sounds like we’re doing something morally objectionable, so instead we call it “differentiated instruction optimized for individual learning outcomes.” Not “college prep versus vocational.” Not even “learns to think versus learns to perform thinking.” Just personalized pathways that happen to correlate suspiciously well with existing socioeconomic strata.
The kids graduating from Phillips Exeter in 2028 will construct arguments from first principles without touching a keyboard. The kids graduating from underfunded public schools the same year will produce better-formatted essays in a quarter of the time. Both will be optimized for their future roles. One group will make decisions. The other will execute them. We’re just not calling it that because education is supposed to be the great equalizer, and it’s rude to point out that we just built the most efficient stratification mechanism in a century and marketed it as personalization.
If this sounds familiar, it’s because we’ve seen this movie before. Victorian England gave working-class children enough literacy to read factory instructions but not enough to read philosophy. We’re just doing it with better technology and more therapeutic language about meeting learners where they are.
The cognitive development question sits at the center of this, patient and unavoidable: what happens when a generation never develops the neural infrastructure for sustained abstract reasoning without autocomplete?
Writing is cognitive scaffolding. The struggle of converting thought to language, getting stuck, working through that stuck place, discovering what you actually think in the process: that’s not just practice. It’s how the brain builds new connections. The struggle is the mechanism. That moment when you don’t know what comes next and must figure it out without prediction is when actual cognitive development happens.
Remove the struggle, replace it with autocomplete, and you might be doing something analogous to what happens with children who never crawl, who go straight from sitting to walking in a baby walker and sometimes develop motor planning difficulties later. The walker gets them upright faster. It just turns out that crawling was building neural maps they’d need for more complex movement, and you can’t go back and install that infrastructure after the developmental window closes.
We’re running the experiment right now. The results won’t be measurable until 2028 at the earliest, when the first fully AI-integrated cohort hits the job market and we discover what they can’t do. Not what they don’t know: what cognitive operations they never developed the capacity for because the algorithm was always cheaper than the struggle.
The optimistic scenario: they’ll be fine, just different. Augmented thinking replaces independent thinking. Human-AI collaboration becomes the new baseline. This is probably what calculators did to mental arithmetic. We survived that.
The darker scenario: we’re discovering cognitive capacities have developmental windows. Miss the window, and the infrastructure is never built. This generation, trained on autocomplete, will be exceptional at refinement and terrible at generation. Brilliant at optimization, but unable to define what should be optimized. Fluent in the performance of thought, but incapable of thinking through something genuinely difficult without the algorithm.
We won’t know which scenario we got until it’s too late to run the experiment differently. But we’ll know by 2030, which is somehow both soon enough to be concerning and far enough away that nobody with budget authority considers it their problem.
The power dynamics map themselves. OpenAI sells the tools that make corporate communication indistinguishable from automation. Their executives publish thought leadership that’s indistinguishable from what GPT-4 produces; one begins to suspect this is not accidental. They fund educational initiatives teaching “AI literacy” that amounts to teaching students to be better prompters, which is either preparing them for the future or preparing them to be obsolete in a decade, depending on whether you think prompt engineering has a longer shelf life than typing pool management did.
The venture capital firms funding AI writing tools also own stakes in educational platforms, assessment companies, and corporate training programs. They profit from the automation. They profit from the confusion about what the automation is replacing. They profit from selling solutions to the problems the automation creates. It’s not a conspiracy, it’s just aligned incentives. But aligned incentives at scale create systems that look designed even when nobody designed them.
The people automating thought are the same people credentialing it. They profit from the ambiguity, from maintaining just enough plausible deniability about whether the thing they’re certifying still exists. Like a magician who profits from everyone pretending not to see how the trick works.
The narrative over the next two years will be that AI democratizes access to communication skills. That it levels the playing field. That it makes professional-quality writing available to everyone regardless of background or training. All of which is technically true in the same way that fast food democratized access to calories.
What won’t be in the narrative: the professional-quality writing was always just performance. Making the performance free doesn’t democratize anything; it just reveals that we built entire industries around a skill we couldn’t define well enough to distinguish from its mechanical reproduction. The people who already had power will still have it. They’ll just need different credentials to justify it. “Strategic thinking.” “Executive presence.” “Vision.” The same vague qualities we’ve always used to justify hierarchy, just with new language because the old language became too cheap.
This is already built. Not theoretical. The infrastructure for the performance economy is deployed and operational.
Students are graduating who can’t write without AI because they’ve never had to. Hiring managers are drowning in applications that hit every keyword because the algorithm optimized them. Universities have pivoted to assessment methods that tacitly admit the old ones measured nothing. Companies are restructuring roles around “AI collaboration,” which means restructuring roles around “humans do the parts that are still expensive to automate and we’re working on that.”
The choices determining who thinks and who performs thinking are being made right now in school board meetings about adaptive learning platforms, in venture capital pitch decks about AI tutoring, in university committees about assessment reform, in corporate strategy sessions about workforce transformation. Nobody’s calling it tracking. Nobody’s admitting we’re building a two-tier system. The language is all about equity and efficiency and preparing students for the future of work.
But watch what the people with power are doing with their own children. Watch which schools are proudly adopting AI tutoring and which schools are quietly maintaining handwritten essays. Watch which jobs are being redesigned around AI augmentation and which jobs still require “strategic thinking” that mysteriously resists definition. Watch what gets expensive and what gets cheap.
Thinking is becoming a luxury good. Performance is becoming free.
The transition is nearly invisible because the performance keeps improving. Essays get more sophisticated. Reports more polished. Communication more effective. What’s harder to measure is whether anyone is thinking, or if we’ve simply built a society that no longer requires it for most roles.
The tell isn’t that AI writes like humans. The tell is that humans were already writing like machines, and we built entire industries on the pretense that this was thinking. Now the machines do the performance better than we can, and we’re about to find out if we remember how to do anything else.
We have eighteen months, maybe, before this becomes conventional wisdom. Right now, while it’s still being built, the sorting is happening, and most people don’t know they’re being sorted. By the time the pattern is visible to everyone, we’ll call it inevitable, because by then it will be. The question isn’t whether we’ll stop it; we won’t. The question is whether you’ll notice which side of the sort you landed on before the wall goes up.