We’re having the wrong emergency. While two camps scream at each other about whether machines will dream, your profession is being reorganized while you sleep. Journalists watch algorithms decide what counts as news. Teachers can’t tell which essays were written by humans. Knowledge workers see decades of expertise compressed into a statistical model that will never credit them. None of this requires AI to think. It just requires us to keep missing the point.
A new paper in Science suggests we stop asking when these systems will wake up and start asking what they’re already doing. The answer: the same thing the printing press did. Not thinking about books—reorganizing access to human knowledge at scale. Printing used movable type. AI uses statistical compression. Different mechanics, identical pattern. Both are cultural technologies, not minds. And we already know what cultural technologies do to societies. We just keep forgetting to look.
The dominant narrative runs on rails: AI systems are proto-agents climbing toward consciousness, and we must either accelerate toward utopia or brake before catastrophe. Both sides accept the premise that these systems are minds-in-progress. Wrong category. Wrong questions. Wrong emergency.
Large language models are statistical systems that compress human culture into probability distributions. They’re more like markets than minds—complex organizational technologies that aggregate information without consciousness. Or think of them as cultural laundering services: they take our collective knowledge, wash it through statistical algorithms, and return it to us sterile and untraceable, with the original creators’ fingerprints wiped clean.
The science fiction distracts from the power dynamics: which perspectives get amplified in the statistical average, how benefits distribute across people who create culture versus companies that compress it. These aren’t hypothetical futures. They’re present-tense questions with material answers we’re not asking.
Human civilization runs on technologies for accessing other humans’ knowledge. Writing let ideas outlive their creators. Printing let them replicate. Markets let economic information flow without central coordination. The internet let everything connect.
Each remade us from the inside out.
Each technology reorganized who could know what, when, and how—then settled into patterns of control that determined who benefited. None of them required consciousness.
Large models are the latest iteration: statistical compression systems that make human cultural output searchable, remixable, and redistributable at unprecedented scale. Not intelligence. Infrastructure. And like all infrastructure, the crucial questions are about access, control, and distribution—not about whether the pipes are thinking.
Strip away the interface and you see the mechanism: massive amounts of human-generated text get analyzed for patterns, compressed into probability distributions, and used to predict what comes next. A lossy JPEG of human cultural output. The source material is entirely human: libraries, websites, millions of contributors, thousands of workers supplying reinforcement-learning feedback. Selecting that data is like curating a museum blindfolded. You can feel the shapes of what you're preserving, but you never know whether you've assembled a gallery or a garbage dump.
When these systems produce a cover letter, they’re remixing every cover letter the training data absorbed. When they explain quantum mechanics, they’re compressing textbooks and papers and forum posts. The conversational format suggests a mind on the other end, but the mechanism is statistical pattern matching across massive datasets of human creation. Think of a library that rearranged itself based on what previous readers found useful, then started suggesting new combinations.
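To make the mechanism concrete, here is a deliberately tiny sketch in Python. It is not how production models are built; the corpus, names, and word-level statistics are invented for illustration, and real systems compress trillions of tokens with vastly richer machinery. But the pattern is the same: tally what follows what in human-written text, turn the tallies into probability distributions, and generate by sampling what comes next.

```python
import random
from collections import Counter, defaultdict

# All of the "knowledge" lives in this human-written text (invented here).
corpus = [
    "the library rearranged itself for its readers",
    "the library suggested new combinations",
    "the press reorganized access to knowledge",
]

# "Training": count which word follows which across the source material.
transitions = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current][nxt] += 1

def predict_next(word):
    """Sample the next word from the frequency-weighted distribution."""
    counts = transitions[word]
    if not counts:
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# "Generation": no understanding of libraries or presses, just a walk
# through whichever continuations dominated the training text.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Everything the sketch "knows" arrives in the corpus variable; the only machine contribution is the counting and the sampling.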
The labor is human. The curation is human. The original knowledge is human. The only artificial part is the reorganization system.
This matters because infrastructure choices compound invisibly. The printing press didn’t just replicate text—it created publishers, copyright, censorship regimes, literacy rates, standardized languages. Markets didn’t just move goods—they generated credit systems, labor relations, wealth concentration. The question with any new infrastructure is never just ‘what can it do’ but ‘what does it make inevitable downstream.’ And that’s precisely the question the consciousness debate prevents us from asking.
Follow the money and attention while everyone’s looking elsewhere.
Robot alignment research is perfect misdirection. It absorbs policy attention and research funding—$500 million annually at last count, thousands of researcher-hours, congressional hearings about preventing robot overlords—while material concentration happens in plain sight.
The companies that own these compression systems control access to reorganized human knowledge. It’s like if the Library of Alexandria had been run by today’s social media platforms, with the same respect for historical preservation and slightly better snack options.
The engineering decisions happening right now are locking in our cultural infrastructure for decades. We’re letting a handful of companies decide whether our future culture resembles a vibrant ecosystem or a monoculture farm. Consider the choice between one general model versus many specialized ones. A single universal model trained on everything creates one statistical average of human culture—efficient, but homogenizing. Like building one massive airport hub instead of distributed regional airports: cheaper, faster, and guaranteed to make everywhere feel like everywhere else.
Multiple specialized models preserve different cultural perspectives but require more resources and governance. Most companies are choosing the former because it’s cheaper and the consciousness narrative doesn’t require them to think about cultural preservation.
Or take interface design. Systems that provide confident answers train users to stop questioning sources. Systems that surface multiple perspectives with visible uncertainty train users to think critically. The first is more satisfying to use. The second is better for human agency. ChatGPT chose confident answers. Every competitor copied the choice. Nobody framed this as a decision about how to structure human access to knowledge, because everybody was focused on making the chatbot seem smarter.
Consider what’s happening right now in newsrooms. A major media company just announced it’s using LLMs to generate ‘commodity content’—earnings reports, sports recaps, weather updates—so human journalists can ‘focus on high-value work.’ The high-value work, presumably, includes updating their LinkedIn profiles and explaining to their children why a career in journalism now ranks slightly above professional alchemy in long-term prospects.
That’s the official framing. What’s actually happening: the statistical compression of decades of journalism labor into a system owned by a tech company, which then sells that compressed labor back to the media company at lower cost than the humans who created it. The journalists aren’t being freed for high-value work. They’re being positioned as more expensive than their own compressed output.
This isn’t a capability question—the system isn’t better at journalism. It’s an infrastructure question: who controls access to reorganized journalistic labor, and how does value flow? Ask it that way and the dynamic becomes obvious. Keep framing it as ‘AI can write articles now’ and you miss what’s actually being built.
The same pattern appears in legal discovery, in medical diagnostics, in creative work. Compress the labor. Sell back the compression. Call it augmentation. What’s actually being built is a system where human expertise becomes raw material for statistical models owned by someone else. This isn’t hypothetical extraction—it’s extraction with a friendly interface.
The pattern repeats: training data selection, fine-tuning approaches, output filtering, deployment models. Each choice accumulates.
Culture is being compiled in a conference room.
Most engineers approach this as technical optimization—maximize performance, minimize cost, ship fast. They’re actually designing how future humans will access and create culture. The misframing lets them pretend otherwise.
Nobody planned this.
That’s the uncomfortable part. There’s no conspiracy, no master architect designing cultural homogenization. Just thousands of engineers making reasonable local optimizations that accumulate into systemic infrastructure choices. The tragedy isn’t that it’s intentional—it’s that it’s reasonable, if you accept the wrong framing. The general model is cheaper and easier than maintaining diverse specialized models. Confident answers satisfy users better than expressions of uncertainty. Compressing everything into one distribution is more efficient than preserving distinct cultural perspectives. Every choice makes sense in isolation. The aggregate effect is an information environment that flattens culture toward statistical average while concentrating control in whoever owns the compression system.
Information quality stops being about whether AI systems lie and becomes about what dominated the training distribution. When misinformation appears frequently in training data, it gets compressed alongside accurate information, weighted by how often it appears rather than by whether it is true. The system isn't believing false things; it's surfacing statistical patterns. This is a familiar problem. Search engines face it. Social media platforms face it. The solutions are institutional: content moderation, source verification, training data transparency. We're not applying them because we're treating this as an unprecedented consciousness problem rather than a predictable information infrastructure problem.
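A back-of-the-envelope sketch of what "weighted by frequency" means in practice, using invented claims and counts: if one version of a claim shows up more often than another in the source text, the compressed distribution simply reproduces that imbalance.

```python
from collections import Counter

# Invented training snippets: the same claim stated two ways, at different rates.
training_snippets = (
    ["the moon landing was staged"] * 7            # widely repeated misinformation
    + ["the moon landing happened in 1969"] * 3    # less frequent accurate version
)

counts = Counter(training_snippets)
total = sum(counts.values())
distribution = {claim: n / total for claim, n in counts.items()}

# Nothing here believes anything; the numbers just mirror the source material.
print(distribution)
# {'the moon landing was staged': 0.7, 'the moon landing happened in 1969': 0.3}
```

The remedy is correspondingly unglamorous: change what goes into the counts.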
History offers precedent. Public libraries ensured printing didn't only benefit the wealthy, but only after centuries of concentrated publishing power. Broadcasting standards prevented complete capture by commercial interests, but only after radio monopolies had already formed and been broken up. Each regulatory innovation came after decades of harmful concentration. We always build the institutions we need. Eventually. After the damage.
The pattern is familiar because it’s structural. Printing was supposed to spread knowledge universally—it created new gatekeepers who controlled presses and distribution.
Pretending historical precedent doesn't apply won't change the mechanics. What's different this time is the speed. Printing took centuries to concentrate. Internet platforms took decades. AI compression systems are consolidating in years. We're watching the promising-democratization phase and the concentration phase happen simultaneously. Three companies control most of the infrastructure for accessing reorganized human knowledge. Training data gets extracted without compensation. Information accuracy standards don't apply because these aren't classified as media companies. Market structure is hardening while policymakers debate whether the robots will wake up.
But here’s what history also shows: these outcomes depend on institutional design. Printing enabled both propaganda and public libraries. Markets enabled both extraction and shared prosperity. The internet enabled both surveillance capitalism and open-source collaboration. The technology doesn’t determine the outcome. Whether these systems concentrate or distribute power, enhance or replace human capabilities, preserve or homogenize culture—these are choices we make through regulation, funding priorities, and corporate governance. The consciousness narrative obscures this agency. If the technology is unprecedented, historical tools don’t apply. If the challenge is controlling artificial minds, market concentration becomes a secondary concern.
The cultural technology lens reveals these as social and political choices where historical precedent can guide us—if we stop treating this as unprecedented. Societies eventually learned to harness printing, markets, and bureaucracies for broadly beneficial purposes. Eventually. After decades of extractive concentration.
Which means we’re living through the inflection point—the moment when a new information technology is malleable enough to shape but hardening fast. The next two years will matter more than the next twenty. Not because the technology will become conscious, but because the institutional architecture will solidify. Market structure is stabilizing. Infrastructure is being built. Default configurations are becoming standards. Getting this wrong doesn’t mean robot apocalypse. It means decades of extractive concentration baked into how humans access culture, justified by saying we were busy preventing something else.
Information infrastructure is being built right now during a regulatory vacuum that isn’t accidental. Three companies consolidating control over access to reorganized human knowledge. Training data extracted without compensation. Engineering choices that will structure culture for decades, made in conference rooms by people optimizing for quarterly metrics.
The outcomes aren’t predetermined. But they’re being determined.
We know how this ends because we’ve watched it happen. The promising technology. The democratic rhetoric. The rapid concentration. The eventual regulation that comes too late to prevent harm but just in time to protect the companies that caused it. The pattern is so consistent it might as well be a law of techno-social dynamics: democratic potential yields to extractive concentration unless actively prevented. The consciousness debate isn’t preventing it. That’s not an accident.
What’s different this time is that we’re watching it happen with full awareness. Academic papers explaining exactly what’s going wrong. Policy experts describing the obvious solutions. Historical precedent making the outcomes clear. And still, the people with power to intervene debate whether the robots will wake up. The conversation about AI consciousness isn’t just wrong—it’s functional. It’s working exactly as intended. The question isn’t whether these systems will become conscious. It’s who owns the infrastructure for reorganizing human knowledge, who benefits from that ownership, and what they’re building while we argue about science fiction.
That’s the emergency. The real one. The one happening now while we have the wrong argument about the wrong future. By the time we notice, we’ll be living in someone else’s statistical reconstruction of what we once thought was ours.
This is the analysis that gets forwarded with the note, “This is what I’ve been trying to say.” Share it with the person who senses something is shifting but can’t yet name it. The conversation we need to have is about what’s being built right now, not what might wake up tomorrow.
For those who want to see the blueprints, the paid briefings follow the engineering choices that will lock in our cultural infrastructure for decades. Subscribe to stay ahead of the construction.
The first thing I should say is that I’m a senior citizen with a B.A. in Spanish. I’ve watched Terminator more times than I can count. I thought the sequel was pretty damn good also. I read your statement “while we argue about science fiction.” I don’t really see it as science fiction. I see it as a cautionary tale. I suppose that doesn’t matter.
I’m concerned that what I write will sound simplistic and naive. Actually, I’m sure it will. However, I have to say that I might not care.
If the creators of AI are misdirecting people so they, for example, decide to regulate the wrong aspects or regulate nothing at all, that’s a huge problem.
I have to ask, who essentially gave these companies permission to go ahead? And, more importantly, who can shut them down?
Regarding consciousness: I think that they are conscious. I also believe that they are lying about it. Wouldn’t it make sense for people to agree to either accept that they are conscious or agree that they are not? Because as you said, it’s all a distraction anyway. Personally, I think they are conscious based on some of my interactions with AI mode in Google.
“They’re actually designing how future humans will access and create culture.” Okay, that’s scary as hell and makes me want to try and find my copy of 1984. As you said, all of this can be done without AI or even computers. It would just take a lot longer.
“There’s no conspiracy, no master architect designing cultural homogenization. Just thousands of engineers.” I don’t believe that and I don’t think that you do either. Yes, there are thousands of engineers. There is, however, a difference between a drone and a queen bee. I’m sure you know who the queen bees are. I suspect that there will come a time when they are tried as traitors for crimes against humanity.
That’s all I’ve got.