The Inevitability Reflex
What Trillion-Dollar Bets on Uncertain Outcomes Reveal About Corporate Psychology
The Question Nobody’s Asking
By the time you finish this paragraph, another million dollars will have poured into AI ventures. The surge from $3 billion in 2022 to $25.2 billion in 2023 wasn’t growth. It was combustion. Venture capitalists now control over 70% of this river of capital, each hunting for the next OpenAI like it’s the last escape pod on a burning ship.
Goldman Sachs’ head of global equity research watched this unfold with the weary recognition of someone who’d seen it before. Jim Covello, whose early career included a front-row seat to the dot-com crash, issued a warning in July 2024 that should have landed harder than it did: “Overbuilding things the world doesn’t have use for, or is not ready for, typically ends badly.”
The interesting part isn’t the warning. It’s that nobody paused.
Goldman Sachs presumably continued participating in AI deals while its own analyst publicly questioned the thesis. Microsoft's reported $10 billion bet on OpenAI, placed more than a year before the warning, stayed firmly on the table. By 2025, companies weren't asking whether AI would matter. They were asking only about timing and implementation.
Consider the venture capitalist who privately admits to his partners that he can’t identify a single defensible AI business model, yet publicly declares “the question is not whether but how fast.” That gap between private doubt and public certainty isn’t hypocrisy. It’s the new operating system.
This shift from conditional to declarative, from “whether” to “how,” happened so smoothly that questioning the premise now feels naive rather than prudent. That transition is worth examining. Not because the answer is obvious, but because we stopped asking the question.
The Exponential Curve as Cognitive Trap
We’ve seen these curves before: those beautiful, terrifying hockey sticks that promise infinite growth in a finite world. Physics, markets, and history all have strong opinions about this. Yet here we are again, treating 740% growth as confirmation rather than the mathematical equivalent of a fever chart.
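The 740% figure isn't rhetorical flourish; it follows directly from the funding totals cited in the opening, assuming both totals measure the same slice of AI investment:

\[ \frac{\$25.2\text{B} - \$3\text{B}}{\$3\text{B}} \approx 7.4 \approx 740\%. \]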
Humans can’t not chase exponential curves, even when pattern recognition tells us how this ends.
The comparison to the late 1990s isn’t subtle. The dot-com boom produced genuine technological change and spectacular overinvestment, both true simultaneously. Pets.com became a punchline while Amazon survived and dominated. The technology matured. The valuations didn’t. They never do.
Covello drew explicit parallels to “technologies that don’t pan out in the end; virtual reality, the metaverse, and blockchain are prime examples.” But the parallel that matters isn’t whether this AI cycle will crash. It’s why we treat each cycle as though the previous ones never happened, as though this time will be different because the technology is different.
The pattern isn’t the technology. The pattern is our response to the technology.
We’re optimizing for something, but it’s not learning from experience.
The Productivity Paradox as Epistemic Crisis
The economic projections carry the confidence of a weather forecast in a hurricane. The IMF estimates AI could add anywhere from 1.3% to 4% to global GDP. That’s a range so wide it’s less a projection and more a confession of ignorance. Yet these numbers circulate like gospel, as if mathematics somehow validates our collective delusion.
We even have a named paradox for the gap between technological enthusiasm and measurable reality: the “productivity paradox.” Robert Solow observed in 1987 that “you can see the computer age everywhere but in the productivity statistics.” Computing capacity increased 100-fold in the 1970s-1980s. Labor productivity growth actually slowed from 3% annually in the 1960s to 1% in the 1980s.
We have an established term for “the gains we assume exist can’t yet be measured,” and we treat this as reassuring rather than alarming.
The standard explanation treats productivity like a distant relative who’s always running late: sure to arrive eventually, if we just keep the party going. Organizations need time to integrate new tools, restructure workflows, and train workers. The benefits will materialize. We just need to keep investing through the long, expensive silence between announcement and arrival.
That narrative constructs certainty from uncertainty. It takes “we don’t see the gains yet” and turns it into “we don’t see the gains yet, but they’re coming, trust the process.”
The productivity paradox becomes a permission structure for continued investment despite absent returns.
This matters because the current burst of AI investment is happening during the paradox, not after it. We’re making hundred-billion-dollar capital allocation decisions based on productivity models that openly acknowledge their own unreliability. The IMF projects 0.5% annual GDP growth from AI between 2025 and 2030, then hedges with simulation models showing outcomes ranging across that 1.3-4% spectrum depending on “underlying productivity assumptions.”
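For what it's worth, those two sets of numbers are at least arithmetically consistent if the 1.3-4% band is read as a cumulative effect on the level of GDP rather than an annual rate (an assumption on my part about how the figures are meant to be read): 0.5% a year compounded over the six years from 2025 to 2030 gives

\[ (1.005)^{6} - 1 \approx 3.0\%, \]

which lands inside the band. The consistency is cold comfort; the width of the band is the point.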
We’re building infrastructure for gains we assume will arrive, like cargo cultists constructing runways for planes that haven’t been scheduled.
The Democratization Myth
The numbers reveal something uncomfortable. AI is simultaneously becoming more accessible and more concentrated, and we’ve decided to call this “democratization.”
Training costs for standard AI models fell dramatically. An image classifier that cost $1,000 to train in 2017 dropped to approximately $10 by 2019: a 100-fold reduction. Inference costs for GPT-3.5-level systems plummeted more than 280-fold from November 2022 to October 2024. This gets framed as accessibility. Small startups and individual developers can now deploy sophisticated AI with minimal infrastructure.
But frontier model costs tell the opposite story. GPT-3 cost $4.6 million to train in 2020. By 2023, estimates for GPT-4 and Google’s Gemini Ultra reached $78-191 million. The most advanced systems aren’t getting cheaper. They’re getting exponentially more expensive, roughly tripling in cost each generation.
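As a rough consistency check on that last claim, and assuming something on the order of three model generations separate GPT-3 from the 2023 frontier systems (an assumption, not a figure from the cost estimates themselves): tripling per generation compounds to

\[ 3^{3} = 27\times, \]

which sits inside the observed range of

\[ \frac{\$78\text{M}}{\$4.6\text{M}} \approx 17\times \quad\text{to}\quad \frac{\$191\text{M}}{\$4.6\text{M}} \approx 41\times. \]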
This creates a digital feudalism operating in plain sight while we call it progress. You can use the tools the tech lords built. You can access their models through APIs like peasants drawing water from the castle well. You can deploy AI capabilities without the armies of PhD researchers required to build the keep. What you can’t do is build your own castle. The cost barrier ensures that frontier AI development concentrates among a few firms with hundred-million-dollar research budgets.
We’ve redefined “democratization”: instead of “distributed capability to build competing power,” it now means “rental access to centralized power.” When did that definitional shift happen? And who benefited from it going unexamined?
The bifurcation isn’t a side effect. It’s the wealth concentration mechanism operating at scale.
Cheap inference costs ensure broad adoption and dependency. Expensive frontier development ensures concentration of capability and capture of value. We’re building the infrastructure for this digital feudalism and calling it accessibility.
The Automation Asymmetry
AI adoption spread rapidly across sectors. In 2017, 20% of companies reported using AI in at least one business area. By 2022, that figure reached 50%. Applications extended beyond the tech giants into manufacturing, marketing, healthcare, and strategic planning.
The specific use cases reveal something about whose productivity we’re optimizing for. We’re deploying AI to create personalized music playlists and optimize retail inventory while climate models remain comparatively underfunded. We’re automating medical diagnosis, legal research, strategic decision-making, and creative writing: tasks that humans often find most meaningful. Meanwhile, physically demanding, low-wage work remains largely untouched by automation.
This isn’t random.
It’s revealed preference about what “productivity” means and whose work we value.
The standard framing treats AI as both substitute and complement for human labor. Doctors use AI to analyze patient histories, then apply clinical judgment for diagnosis. Marketers use AI to analyze consumer data, then craft targeted campaigns. In these cases, AI augments rather than replaces.
But that framing obscures a deeper question. Why are we automating work that gives life meaning while leaving menial work to humans? The technological constraint is real: it’s harder to build a robot that can clean a hospital room than one that can read an X-ray. But the allocation of research funding isn’t neutral. We’re solving for automating cognitive work because that’s where the value capture concentrates, not because that’s what produces human flourishing.
Studies suggest 12-14% of workers may need to transition to new occupations by 2030. The standard policy response recommends “retraining programs” for displaced workers. Retraining them for what, exactly? If AI can analyze medical images, write articles, and perform legal research, what work remains that’s both automatable enough to require retraining and meaningful enough to be worth doing?
The White House analysis warned AI could “meaningfully increase aggregate income inequality” by substituting for middle-class occupations while complementing higher-paying ones. The IMF concluded that “in most scenarios, AI will likely worsen overall inequality.” Advanced economies show more than double the “AI preparedness” of low-income countries.
“AI preparedness” measures the institutional capacity to destabilize employment structures faster. The willingness to bet social stability on productivity gains that haven’t materialized in economic statistics. That’s not a neutral metric. It’s a status game disguised as readiness assessment.
This isn’t a bug in the system. It’s what happens when optimization targets productivity divorced from distribution. When we automate meaningful work because that’s where ROI concentrates. When we measure preparedness by speed of upheaval, not quality of outcomes.
We designed this result.
Now we’re acting surprised by it.
The Status Game of Inevitability
The language shifted like tectonic plates: slowly enough that no one noticed the earthquake until the ground had already moved. Five years ago, executives asked whether AI would matter. Today, they only ask how quickly they can implement it. Being wrong about “whether” became career suicide. Being bullish on AI became the modern equivalent of carrying a crucifix to ward off vampires: less about efficacy than about signaling you belong to the right tribe.
This is status contagion operating through institutional FOMO. Nobody wants to be Blockbuster laughing at Netflix, Nokia dismissing the iPhone, or any of the other cautionary tales about incumbents who missed the shift. The pressure isn’t just economic. It’s reputational. Questioning the premise feels naive.
But who benefits from “whether” being off the table? Every time someone says “the question is not whether but how,” they’re foreclosing debate about whether we should be doing this at all. That’s not neutral framing. It’s the exercise of power disguised as recognition of reality.
The inevitability narrative constructs the future it claims to predict. Once everyone assumes AI change is certain, the capital flows accordingly, the hiring follows, the infrastructure gets built, and the assumption becomes self-fulfilling. Not because it was inevitable, but because we treated it as inevitable and made it so.
This matters because inevitability is a permission structure. If change is coming regardless of our choices, then the only rational response is to position yourself advantageously within it. You can’t be blamed for adaptation to forces beyond your control. The outcomes aren’t your responsibility. You’re just responding to reality.
Except the change isn’t inevitable. It’s chosen.
Collectively, incrementally, through millions of investment decisions made by people who believe they’re responding to inevitability rather than creating it. The gap between those two framings is where agency disappears.
What This Reveals About Us
We’re making the largest technology capital allocation in decades based on economic models whose projected outcomes span a threefold range, during a documented productivity paradox, while openly acknowledging that previous technology hype cycles ended in crashes. And we’ve decided this constitutes sufficient certainty to proceed.
That behavior pattern reveals something about modern capital allocation and institutional risk tolerance. We’re more comfortable with massive bets on unproven futures than with the discomfort of questioning whether the bet makes sense.
The momentum carries more weight than the fundamentals. Being part of the herd offers more protection than being right.
The bifurcation between cheap access and expensive development isn’t a temporary transition phase. It’s the new equilibrium. We’ve built a system where AI capability concentrates among a few firms while dependency spreads broadly. We call this progress because the alternatives are unpalatable: slowing down (falling behind) or admitting we’re constructing technological feudalism (sounding conspiratorial).
Most people reading this piece work somewhere in the optimization pipeline. Most of us contribute to building this system through daily work decisions that feel small and reasonable in isolation. The product manager prioritizing features that increase engagement. The engineer optimizing algorithms that reduce costs. The investor allocating capital in pursuit of productivity gains.
Each decision makes sense locally. The emergent pattern makes sense to almost nobody.
We’re collectively building futures that individually most of us didn’t choose and wouldn’t design. We treat this as inevitable rather than constructed. Questioning the trajectory feels naive while accelerating it feels sophisticated.
What does this reveal about us?
When faced with exponential curves and massive uncertainty, humans default to momentum over reflection, status games over strategy, and certainty cosplay over honest acknowledgment of what we don’t know. We’ve shown that inevitability narratives override agency even when the inevitability is obviously constructed.
And we’ve demonstrated that we can watch this happening, recognize the pattern from previous cycles, know intellectually how it works, and still participate anyway. The social and economic costs of being the one who opts out exceed the philosophical appeal of being the one who was right.
The trillion-dollar betting pool isn’t impressive because of the capital involved. It’s impressive because it’s a collective hallucination. Everyone placing bets has watched previous pools collapse, recognizes the pattern, and sees the warning signs. And still decides that being part of the consensus carries less risk than being right alone. We’re not betting on AI anymore. We’re betting that enough people will believe in the same future to make it real, regardless of whether that future makes any sense at all.
That’s the reveal. Not what AI will do to productivity or employment or inequality. What our response to AI reveals about how we construct certainty from uncertainty, how status games override strategy, and how inevitability narratives make us comfortable with outcomes we wouldn’t consciously choose.
The question isn’t whether AI will reshape work and economics. The question is what it means that we stopped asking whether we should reshape work and economics, and why that shift happened so smoothly that noticing it now feels like pointing out that the emperor has been naked for years while everyone complimented his tailoring.