The Year the Hype Cycle Broke
What prediction machinery optimized for hope reveals about us
In January, Tesla deleted its 2016 blog post. The one that claimed every Tesla had “all the hardware necessary” for full self-driving. Nobody made a fuss. Everybody noticed. This is how collective disillusionment arrives in 2025: not as dramatic revelation but as quiet archive maintenance, digital housekeeping that rearranges the furniture to hide bloodstains. In the archived corners of the internet, some memories are more disposable than others.
The deletion is almost perfect as farce. Not because Tesla tried to memory-hole an inconvenient forecast. That’s standard corporate behavior. The farce is that the promise itself was always theater. Elon Musk projected one million autonomous Tesla robotaxis on roads by 2020. He claimed “true” Full Self-Driving would arrive by the end of 2019. Then 2020. Then 2021, 2022, 2023, 2024, and 2025. By 2024, year six of the promise, the projection had become the product itself. In January 2025, he admitted that Hardware 3 (installed in vehicles sold between 2019 and 2023) requires physical upgrades to support the capabilities that buyers were promised would arrive via software update.
This pattern isn’t forecasting failure. It’s machinery for mobilizing capital and attention. Once you see it that way, 2025 makes more sense. Not as the year projections failed, but as the year we watched ourselves overpromise and underdeliver in real time, with full knowledge, and kept going anyway.
The Machinery’s Purpose
Capital requires fiction to function. A VC fund that hedges all bets makes no bets. A company that awaits proof before investing falls behind. The projection doesn’t need to be accurate. It needs to be credible enough to justify allocation decisions that would otherwise seem irrational.
Consider Waymo. By November 2025, the fleet reached 2,500 robotaxis performing 250,000 paid rides weekly across Phoenix, San Francisco, Los Angeles, and Austin. This is, by any reasonable measure, a triumph. But triumph, like progress, has become a relative term in 2025. We measure success against the bar we set, not the one we cleared.
Yet the vehicles navigate within carefully mapped zones, only started taking freeways this year, and operate at what the industry classifies as Level 4 automation. Capable in constrained circumstances. Utterly lost outside them. No Level 5 vehicle (the kind that can drive anywhere, anytime, in any conditions) exists anywhere on Earth.
Meanwhile, Tesla’s “Robotaxi” service in Austin runs with supervisors seated inside, fingers hovering over override controls. This gap between Level 4 and Level 5 isn’t a matter of incremental progress. It’s the gap between impressive capability and the thing we’ve been promising for a decade.
We keep projecting Level 5 autonomy when nothing on the road comes anywhere near its definition. What are we really claiming?
We’re selling the narrative that justifies the capital. And in that sense, the promises worked perfectly.
The same pattern emerged with AI agents, except this time the gap between capability and declaration wasn’t a bug. It became the entire product category.
The Year of the Agent That Wasn’t
“The age of agentic AI has arrived,” Forbes declared in response to Jensen Huang’s proclamation that 2025 would be “the year of AI agents.” IBM surveys showed 99% of developers were exploring or developing agents. The infrastructure was ready. The models were sophisticated. The future had arrived.
It hadn’t. What arrived instead was something industry analysts began calling “agent-washing.” Products branded “agentic” that were closer to traditional automation wrapped in conversational interfaces.
When researchers at HuggingFace studied actual agent performance, they recommended human supervision over autonomy. When enterprise customers deployed agents for multi-step tasks, 70% failed. The math was brutal: even with 90% success per step, a five-step task succeeds only 59% of the time.
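That compounding is easy to verify. A minimal sketch, assuming each step succeeds independently (the 90% figure and the five steps are the illustrative numbers from the paragraph above, not data from any particular benchmark):

```python
def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that every step of a sequential task succeeds,
    assuming steps are independent and each succeeds with p_step."""
    return p_step ** n_steps

print(f"{chain_success(0.90, 5):.0%}")   # ~59% -- the figure cited above
print(f"{chain_success(0.95, 10):.0%}")  # ~60% -- better steps still erode over longer chains
```

Longer chains only make it worse: even at 95% per step, a ten-step task completes barely three times out of five.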
IBM’s Marina Danilevsky was blunt: “I’m still struggling to truly believe that this is all that different from just orchestration. You’ve renamed orchestration, but now it’s called agents, because that’s the cool word.”
Picture the meetings where this happened. “We can’t call it orchestration. Nobody gets promoted for competent orchestration. But agents? Agents are autonomous! Agents make decisions! Agents justify roadmaps and conference keynotes and another funding round.” Somewhere, a product manager renamed a cron job and updated their LinkedIn.
The rebranding was perfect. Orchestration sounds like IT infrastructure: competent, boring, unglamorous. Agents sound like science fiction. Same code. Different branding. Suddenly everyone’s exploring agents because that’s what you do when 99% of developers are exploring agents. The circularity is the point.
By autumn, the verdict was unavoidable: agents work when task predictability exceeds 90% and decision logic remains simple. They fail at anything requiring genuine reasoning or adaptation to novelty. SiliconANGLE summarized what many were thinking: “2025 won’t be ‘The Year of the Agent.’” True autonomous operation remained “rare, and in many cases undesirable.”
But the Year of the Agent declaration had already done its job. It coordinated investment, research priorities, conference programming, and product roadmaps. The assertion created the market it described, even if the market didn’t deliver what was described.
And then performance metrics revealed something stranger still: the gap between what people believed was happening and what was actually happening had become unbridgeable. Even for experts.
The Perception Gap
The numbers should have been the salvation. If agents disappointed and robotaxis stayed local, at least generative AI would reshape how we work, delivering measurable productivity gains even if the autonomous future remained stubbornly five years away.
The numbers told a stranger story.
The St. Louis Federal Reserve found workers were 33% more efficient during hours they used generative AI. Customer service studies showed 15% gains. Industries heavily exposed to AI saw output grow 27% between 2018 and 2024, compared to 7% in the six years prior. The transformation was real.
Then a randomized controlled trial by METR in July found that experienced open-source developers using AI tools were 19% slower at completing real programming tasks. Not slightly slower. Not “approximately the same with different tradeoffs.” Nineteen percent slower. The punchline: those same developers believed they had been 20% faster.
Not “thought they might have been” or “estimated.” Believed. With confidence.
This is the gap worth examining. How do we know if we’re effective? We track outputs, measure velocity, compare to yesterday’s throughput, all of which AI can make feel faster (more code written, more tickets closed, more perceived motion) while the actual delivery timeline stretches. The feeling of efficiency becomes the metric. The metric becomes the reality. Until someone runs a controlled trial and discovers we’ve been measuring theater.
The developers weren’t measuring velocity. They were measuring motion. Like stepping on a scale that tells you what you want to hear while your clothes fit differently.
The Stack Overflow 2025 Developer Survey found only 16.3% of developers said AI made them more efficient "to a great extent." The largest group (41.4%) reported little or no effect. Total factor productivity impact in 2025: 0.01 percentage points. A rounding error. The 33% hourly gains translated to 1.1% at the economy-wide level.
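The dilution is worth spelling out. A back-of-envelope sketch, assuming the gain applies only to the share of total work hours where the tools are actually used (the ~3% share below is an illustrative assumption chosen to reconcile the two cited figures, not a number from the studies themselves):

```python
# Back-of-envelope: aggregate gain ≈ per-hour gain × share of hours using the tool.
per_hour_gain = 0.33     # 33% more efficient during AI-assisted hours (cited above)
share_of_hours = 0.033   # assumed fraction of all work hours where AI is actually used
aggregate_gain = per_hour_gain * share_of_hours
print(f"{aggregate_gain:.1%}")  # ~1.1%
```

A large gain on a small slice of the workweek is a small gain on the workweek.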
The pattern revealed itself: generative AI made certain workers faster at certain tasks within certain contexts while doing nothing measurable for most people doing most things. The benefits clustered among lower-skilled workers doing routine tasks and evaporated for experts working on complex problems. One population got more efficient. The other got more distracted.
Penn Wharton economists noted the historical parallel: gains from electricity took forty years to materialize because companies had to rebuild factories. The technology wasn’t the bottleneck. Organizational restructuring was.
But the perception gap reveals something electricity didn’t create. We believe we’re faster while actually being slower. The feeling of transformation substitutes for measurable change.
Performance theater becomes indistinguishable from performance reality.
Which should have triggered caution. Instead, the money accelerated.
The Investment Paradox
We kept spending anyway.
AI capital expenditure for 2026-2027 exceeds $500 billion while American consumer spending on AI services sits at $12 billion annually, a forty-to-one ratio of investment to actual market demand. The economics resembled religion: faith outpacing function by an order of magnitude. A widely cited MIT analysis found that 95% of generative AI pilots had achieved zero return despite $30-40 billion in spending. The share of companies abandoning most of their AI projects jumped from 17% in 2024 to 42% in 2025.
Deloitte’s survey captured the paradox: 85% of organizations increased AI spending in the past year, and 91% planned to increase again, even as typical ROI payback stretched to 2-4 years versus the 7-12 months expected.
“If we don’t do it, someone else will, and we’ll be behind,” one consumer goods executive told surveyors, articulating the logic that kept money flowing despite absent returns.
The executive’s logic is impeccable. Behind what? Behind everyone else who’s also spending money on projects achieving zero ROI. But “everyone else” is a powerful incentive structure. Nobody wants to be the only company that didn’t commit capital to AI when AI was the thing you were supposed to commit to.
It’s coordination through mutual anxiety. A system that feeds on its own momentum, accelerating toward nowhere in particular.
The narrative creates its own momentum. This isn’t a bug; it’s a coordination mechanism. We keep allocating resources not because the systems work but because everyone spending validates everyone else’s spending. It’s collective proof we’re not crazy.
Until the 42% of companies abandoning projects suggests maybe we were.
Future historians might call 2025 the year of collective cognitive dissonance, when entire industries knowingly poured resources into technologies they knew wouldn’t deliver as promised, though we probably won’t wait for historians to tell us what we already know. They won’t marvel at the foolishness. They’ll marvel at the coordination.
And yet the most telling admission came from an unexpected source. Sam Altman said the quiet part aloud: “Are we in a phase where investors as a whole are overexcited about AI? In my opinion, yes.” This from the CEO of OpenAI, the company most responsible for generating that overexcitement.
The CEO of the company most responsible for AI investment frenzy admits investors are overexcited about AI. Then continues raising money from those same investors. Then those investors continue investing despite hearing him say they’re overexcited. Everyone knows everyone knows everyone’s overexcited. The capital flows anyway.
Self-awareness doesn’t break the cycle. It just makes us self-aware participants in collective theater.
The cycle’s normalization runs deeper than executive quotes. Gartner moved generative AI from the Peak of Inflated Expectations to the Trough of Disillusionment in its 2025 Hype Cycle. AI agents and AI-ready data sat at the Peak. The pattern showed where next year’s disappointment would concentrate.
The Gartner Hype Cycle itself normalizes the cycle: Peak of Inflated Expectations, then Trough of Disillusionment, eventually Plateau of Productivity. Disillusionment isn’t failure. It’s a phase. Perhaps. Or perhaps the cycle is a story we tell to domesticate uncertainty, to make change feel predictable even when our forecasts fail.
These failures across domains (vehicles, agents, performance) revealed systematic biases rather than isolated miscalculations.
What the Machinery Reveals
Roy Amara articulated what became Amara’s Law: we overestimate technology’s short-run effects and underestimate long-run impact. But 2025 exposed something deeper. The machinery doesn’t just miscalibrate timing. It systematically overestimates transformation in specific domains while ignoring others entirely.
Autonomous vehicles demonstrate this precisely. The systems improved dramatically. Waymo’s cars navigate complex urban environments with remarkable sophistication.
What projections missed was that edge cases don’t diminish with scale. They multiply, like trying to catalog every possible way a Tuesday can go wrong. The “long tail” of unusual situations grows longer the more terrain you cover.
Infrastructure changes that could have accelerated deployment were never discussed because promises assumed one-for-one replacement of human capability.
Agent assertions revealed the same blind spot. Models became more capable, but capability at individual tasks doesn’t compose into capability at task sequences. Each step introduces failure probability. Autonomy requires not just competence but robustness. And robustness turns out to be harder than competence by orders of magnitude.
The performance claims missed something equally obvious in retrospect: tools improve tasks, but tasks aren’t jobs. A developer faster at writing code isn’t a developer faster at delivering software, because software delivery involves code review, integration, testing, and deployment. The bottleneck simply moved downstream.
Academic research identifies specific forecasting biases: engineers assume final edge cases will take proportional effort when they require exponential resources. Those with stakes systematically overpredict adoption. Forecasters confuse desire with expectation.
These biases aren’t bugs. They’re features of how we navigate uncertainty. The gap between perception and reality persists because the machinery rewards forecasting, not accuracy. Developers believed they were faster while actually being slower. Organizations kept spending while achieving zero returns.
The question isn’t whether AI will eventually reshape everything. What matters is what it means that we can’t stop ourselves from accelerating through the enthusiasm phase even when we know, collectively, that we’re doing it.
What 2026 Looks Like
2025 was the year we watched ourselves overpromise and underdeliver in real time, with full knowledge, and kept going. Projections always fail on first contact with reality. The question is what it means that we can’t stop building them.
Forecasting machinery optimized for hope rather than accuracy might be humanity’s most honest technology. Not because it reveals the future, but because it reveals what we need to believe to coordinate action. The forecasts get built not despite their inaccuracy, but because the gap between projection and reality is where capital flows and careers get made.
We’ve built forecasting machinery about our forecasting machinery.
And yet the projections for 2026 are already arriving. AGI timelines. Autonomous vehicle deployments. Agent capabilities. Each delivered with the same confidence, the same specificity, the same inevitability that 2025 promised.
Knowing this pattern exists doesn’t stop us from running it. It just makes us self-aware participants in collective theater. We’re not trapped by ignorance. We’re trapped by coordination dynamics we can see but can’t escape. The machinery optimizes for hope because hope coordinates collective action when the future is uncertain and stakes are existential.
We’ll keep building forecasting machinery we know is wrong. What matters is what happens when self-awareness about the cycle becomes part of the cycle itself, when knowing we’re doing theater doesn’t stop the performance because the alternative is standing still while everyone else moves.
2026 will be the year we discover whether meta-awareness breaks the cycle or just adds another layer of rationalization. My bet: we already know the answer. We’re just waiting to see if admitting it counts as insight or complicity.
The machinery’s true output isn’t the future it promises. It’s the proof of what we’ll collectively believe when all the data says we shouldn’t.
Whether that’s tragedy or farce depends on your position in the system. Most of us occupy both positions simultaneously.