Disinformation Without Effort
The economics of synthetic media favor quantity over quality. That's the point.
The fact-checker’s browser holds forty-seven tabs, a museum of yesterday’s nonsense. A slowed-down video that makes a politician appear drunk. An authentic photo with fabricated context. Audio clipped mid-sentence to reverse its meaning. She knows she’ll close maybe eight tabs today. By morning, there will be sixty more. This is not a job. It is archaeology performed on a landfill that grows faster than anyone can dig.
The math is brutal: fabrication is instant; verification is labor. Debunking properly (finding the original footage, documenting the manipulation, writing an explanation people might actually read) takes hours. And the debunking, when it finally arrives, reaches maybe one in twenty who saw the original. The lie doesn’t need to travel. It just needs to outnumber the truth.
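If you want the asymmetry as arithmetic rather than metaphor, here is a rough sketch in Python. The five-minute fabrication time, the hours-long debunk, and the one-in-twenty corrective reach come from the surrounding paragraphs; the hundred fakes per day is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope sketch of the fabrication/verification asymmetry.
# All figures are illustrative stand-ins, not measurements.

FABRICATION_MINUTES = 5      # one cheapfake with free tools
VERIFICATION_HOURS = 3       # trace the original, document the edit, write it up
CORRECTION_REACH = 1 / 20    # share of the original audience a debunk ever reaches

fakes_per_day = 100          # assumed: one operator, one afternoon of prompting

fabrication_hours = fakes_per_day * FABRICATION_MINUTES / 60
verification_hours = fakes_per_day * VERIFICATION_HOURS

print(f"Hours spent fabricating:   {fabrication_hours:6.1f}")
print(f"Hours owed to verify:      {verification_hours:6.1f}")
print(f"Cost ratio (verify:fake):  {verification_hours / fabrication_hours:.0f}:1")
print(f"Audience a debunk reaches: {CORRECTION_REACH:.0%}")
```

Change the assumptions and the debt shrinks or grows, but it never inverts: the ratio is the point, not any particular number.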
We built detection systems for the wrong war.
The collective nightmare was always quality: deepfakes so perfect that seeing would stop meaning believing. We held congressional hearings about the uncanny valley. We funded detection algorithms. We prepared for a technological arms race where the best AI would win.
Instead, imperfect forgeries (cheapfakes made in five minutes with free software) outperform the sophisticated ones. When a video looks too good, we flinch. When it looks like a blurry clip shared by a cousin, our defenses sleep. The uncanny valley works in reverse: slight wrongness triggers scrutiny, but normal appearance lets us default to trust because suspicion costs more cognitive effort.
The threat was never undetectable lies. It was innumerable ones.
The Economics of Exhaustion
Manufacturing doubt is now faster than resolving it.
RAND researchers identified this dynamic in 2016 as the “firehose of falsehood”: high-volume, continuous, and entirely indifferent to reality or consistency.
The insight was buried but devastating: “Volume is associated with persuasiveness.” Not sophistication. Not truth. Just volume. Follow this logic to its conclusion and you arrive somewhere absurd but familiar: a world where repetition is truth, where the most persistent voice wins regardless of accuracy. Picture a courtroom where the verdict goes to whoever can afford to repeat their version most often. That’s our information architecture.
We’ve already built that world. We just haven’t named it yet.
And we should have seen it coming. We’d already watched this movie.
Spam figured out the economics decades ago. The externality ratio for email spam runs about 100:1. For every dollar spammers make, society bears a hundred dollars in filtering expenses, wasted time, and fraud losses. The strategy works not because any individual spam email is convincing but because the marginal cost of sending approaches zero while the cost of checking each message stays stubbornly high. Send enough messages and some will get through. The filter’s job never ends.
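A toy model makes those economics visible. Only the 100:1 externality ratio comes from the paragraph above; the send cost, response rate, and payout are assumptions chosen to show why near-zero marginal cost is the whole game.

```python
# Toy model of spam economics: why it pays when almost nobody responds.
# Only the 100:1 externality ratio comes from the text; the rest is assumed.

messages_sent = 10_000_000
cost_per_message = 0.00001      # assumed: fractions of a cent per send
response_rate = 0.00005         # assumed: one buyer per 20,000 messages
revenue_per_response = 20.00    # assumed

spammer_cost = messages_sent * cost_per_message
spammer_revenue = messages_sent * response_rate * revenue_per_response
societal_cost = spammer_revenue * 100   # the 100:1 externality ratio

print(f"Spammer spends:  ${spammer_cost:,.0f}")
print(f"Spammer earns:   ${spammer_revenue:,.0f}")
print(f"Society absorbs: ${societal_cost:,.0f} in filtering, lost time, and fraud")
```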
Consider what we tolerate in email: a global infrastructure where most messages are garbage, where legitimate communication must constantly prove itself against a flood of fraudulent noise, where we’ve normalized the idea that receiving a message means nothing about its validity. We built systems to sort this chaos because we had to. Then we called it progress. Then we designed democracy’s information infrastructure with the same attitude and acted surprised when it performed about as well.
Disinformation campaigns discovered they could play the same game with something more valuable than credit card numbers: collective sense-making. The filter now is human attention. Unlike spam filters, human attention doesn’t scale. We run democracy’s epistemic commons on discount pharmaceutical economics and wonder why it fails.
The psychological mechanism is well-documented. The illusory truth effect shows that repetition increases perceived truth, and the biggest boost comes from hearing something just the second time. Prior knowledge doesn’t protect you. Even when you know a claim is false, repeated exposure makes it feel more true. The brain mistakes ease for truth, fluency for validity. A shortcut that evolution built for a slower world.
When researchers study detection systems, they find an uncomfortable phenomenon: algorithms that work in laboratory conditions degrade by 45-50% when deployed in the wild. Only 46% of detection researchers believe their techniques can generalize across different manipulation types.
Here is where the numbers become comedy. Detection markets grow at 28-42% annually. The threats they’re meant to address expand at rates of 900% or more. Pitch deck slide one: “97% accuracy!” Slide two (unwritten): “In controlled conditions. On yesterday’s attacks. Until next Tuesday’s model update.” Venture capital floods into companies selling better locks while someone outside drills new holes through the walls. The locks are getting quite sophisticated. The walls are becoming Swiss cheese. But the quarterly reports look fantastic.
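Compounding those two growth rates shows why the quarterly reports and the ground truth diverge. This sketch indexes both sides to 1.0 and assumes, generously, that market growth translates directly into detection capacity.

```python
# Compounding the growth rates quoted above: detection spend vs. threat volume.
# Both sides start at an index of 1.0; the rates are the ones cited in the text.

DEFENSE_GROWTH = 0.35   # midpoint of 28-42% annual market growth
THREAT_GROWTH = 9.00    # 900% annual expansion

defense = threat = 1.0
for year in range(1, 6):
    defense *= 1 + DEFENSE_GROWTH
    threat *= 1 + THREAT_GROWTH
    print(f"Year {year}: defense x{defense:5.1f}   threat x{threat:9,.0f}   "
          f"gap x{threat / defense:,.0f}")
```

By year five the defense has roughly quadrupled while the gap has widened by four orders of magnitude. Better locks, faster drills.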
This asymmetry doesn’t just enable the next tactic. It is the next tactic.
The Dividend That Pays First
Law professors Robert Chesney and Danielle Citron call it the “liar’s dividend”: the way that awareness of synthetic media hands bad actors a new weapon. As I explored in Voice Cloning and the End of Audio Evidence, once people learn that audio and video can be faked, they gain the power to dismiss authentic recordings as potentially fabricated. The defense doesn’t need to succeed to succeed. It only needs to create doubt.
But the mechanism underneath is more dangerous. The dividend doesn’t require sophisticated fakes to exist. It only requires the idea of fakes to be ambient. Abundance accomplishes that far more efficiently than quality.
A thousand cheap fakes don’t need any single one to be believed. They need to exist in sufficient quantity that the question “could this be fake?” becomes the default response to any evidence. The dividend pays out before anyone examines the evidence. It pays out when examination itself feels pointless.
Garry Kasparov, who has watched propaganda evolve across decades, put it precisely: “The point of modern propaganda isn’t only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth.”
Philosopher Mark Satta coined a term for what this produces: epistemic fatigue. The cognitive depletion generated by trying to determine what’s true under conditions designed to make that determination maximally taxing. This isn’t cognitive failure. It’s economic failure. It’s not that we can’t distinguish real from fake. It’s that the effort required to do so, repeated infinitely, wears down the capacity for effort itself.
The woman with forty-seven tabs open knows this feeling. She’s not being fooled by the fakes. She’s being ground down by their proliferation. The goal was never to convince her any particular claim was true. It was to make the act of investigating feel like bailing out a sinking ship with a teaspoon. Which, mathematically, it is.
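Here is the teaspoon, in arithmetic. The rates come from the opening scene (forty-seven tabs, eight closed a day, sixty new by morning); they are narrative stand-ins, but any arrival rate above the service rate produces the same curve.

```python
# The teaspoon arithmetic: a verification queue that only ever grows.
# Rates come from the opening scene (47 tabs, 8 closed a day, 60 new by
# morning); they are narrative stand-ins, not data.

starting_backlog = 47
closed_per_day = 8
new_per_day = 60

for day in (1, 7, 30, 365):
    backlog = starting_backlog + day * (new_per_day - closed_per_day)
    print(f"Day {day:3d}: ~{backlog:,} unexamined claims in the queue")
```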
Exhaustion from endless checking enables the dividend from ambient doubt. The question becomes who can still afford to check at all.
Verification as Luxury Good
This connects to a question we’ve been avoiding. As I traced in The Authenticity We Can Afford to Care About, we outsourced authenticity-verification to platforms that profit either way. We click checkboxes declaring what kind of creator we are because having someone else sort the chaos feels easier than sorting it ourselves.
Volume-based disinformation exploits the same preference. Confirming claims requires time, attention, and expertise. All finite, all valuable, all increasingly unequal in their distribution. When the burden of checking facts exceeds the expense of producing claims by orders of magnitude, rigorous epistemic hygiene becomes a luxury good. Some people can afford it. Most people cannot. Not because they’re less intelligent, but because attention is zero-sum and the demands on it keep multiplying.
Medieval peasants couldn’t afford books, so literacy became a class marker. Now the ability to investigate reality (to actually examine claims before believing them) is becoming one too. Not through legislation, but through the pricing of attention. We’re building an epistemic aristocracy by accident, which is the most American way to build one.
When the ability to investigate stratifies by resource availability, we get epistemic classes. Those with time and training live in one information environment. Everyone else navigates a different one. The shared ground where democratic deliberation might occur erodes not through censorship but through fatigue. No one bans the truth. They just make finding it too expensive for most people to bother.
The people who benefit from this arrangement understand the economics perfectly. Political operatives, state actors, disinformation-for-hire firms. They don’t need to pay for sophisticated deepfakes. They need to pay for quantity, and quantity has never been cheaper. A prompt and an afternoon. A few dollars and a distribution network already optimized for engagement over accuracy. The business model is elegant in its brutality: externalize the price of truth-checking onto individuals who can’t afford it, then profit from the confusion.
Some places have tried different arrangements. Taiwan built rapid-response infrastructure using civic hackers and the g0v community, deploying humor over rumor through established trust relationships. It works because they chose to pay for it collectively. But the model requires something most democracies haven’t demonstrated: the willingness to treat information infrastructure as infrastructure, deserving the same public investment as roads or water treatment.
What We Chose Not to See
The strangest thing about this phenomenon is how long we refused to name it. We kept watching for the sophisticated attack while the unsophisticated one accumulated around us. Every regulatory discussion, every research grant, every blue-ribbon commission focused on the threat of perfect forgeries. Meanwhile, the actual damage was being done by imperfect ones at scale.
This wasn’t an intelligence failure. It was a preference.
Sophisticated threats have sophisticated solutions: technical countermeasures, detection algorithms, protocols for confirming authenticity. These are problems that can be solved by experts, funded by grants, addressed by committees.
Proliferation threats do not. They require either matching resources (a game the attacker always wins) or changing the conditions that make abundance viable in the first place.
We didn’t want to change those conditions. They make viral content profitable, engagement metrics meaningful, attention valuable. The firehose of falsehood runs through the same pipes as everything else we’ve built.
This is where the technological determinism begins to fray. We talk about the attention economy as if it were weather, a natural phenomenon beyond human control. But weather doesn’t have quarterly earnings calls. Someone designed these systems. Someone maintains them. Someone profits from them. Facebook product managers, YouTube recommendation engineers, social media growth teams. Every day, they choose to optimize for engagement over accuracy, to treat fact-checking as someone else’s problem, to build tools that make quantity attacks cheaper without building proportional defenses.
The asymmetry between creation and investigation isn’t a law of physics. It’s a policy choice, distributed across thousands of decisions by companies, regulators, and users. We could make creation costly, make fact-checking cheap, slow distribution. We don’t. Not because we can’t, but because the same infrastructure that enables the firehose also enables everything we’ve decided we want more than shared truth.
So we kept scanning the horizon for deepfakes while cheapfakes proliferated in the foreground. We funded detection systems that work in labs while the real world got noisier. We treated the problem as technological when it was economic, economic when it was political, political when it was, at root, about what kind of collective sense-making we’re willing to pay for.
The fact-checker closes her laptop. She debunked seven claims. The queue grew by four hundred twelve. Tomorrow, the arithmetic remains the same.
The disparity stays open because keeping it open is profitable for those who do not have to live with the consequences. This is not fate. It is a choice, made continuously by identifiable actors with identifiable interests.
The fatigue isn’t a side effect. It is the product. And we are buying it in bulk.