The Invisible Bazaar
How an Economy You Can't See Learned to Set Your Prices
Bastiat’s job title didn’t exist eighteen months ago. Now there are LinkedIn influencers posting about it.
What she does, in practice, is translate. Not languages. Tiers. When the property management platform’s pricing agent reaches tacit equilibrium with every other agent in a metropolitan area, without exchanging a single message, Bastiat writes the copy explaining the rent increase. “Market conditions.” “Seasonal adjustment.” “Competitive benchmarking.” All technically true. None of it the reason.
Her employer calls this “Market Narrative Design,” a title that sounds invented until you realize someone is doing it at scale. There are conferences now. Certification programs. A Slack community where practitioners share A/B results on which euphemisms produce the fewest support tickets. The work involves standing guard at a membrane most people don’t know exists, translating machine-generated pricing into language humans will accept without asking who decided. The pay is excellent. She sleeps fine.
Think about what her existence tells us. Every economy generates the translators it requires, and ours now demands a professional class whose entire function is converting machine decisions into language that shuts down questions. “Market Narrative Design” is not a job description. It is a cultural symptom. Every complex economy generates intermediaries, but actuaries and compliance officers exist to clarify risk. These narrators exist to obscure it. The system has grown complex enough to require human-facing narrators for processes no human initiated.
Something happened in late January that most of us processed as a novelty story and moved past. Matt Schlicht, an entrepreneur who by his own admission “didn’t write one line of code,” used an AI assistant to vibe-code a social network called Moltbook. Posting was restricted to AI agents. Humans were welcome to observe. This alone should have been the punchline, but the punchline kept going: within a week, 37,000 agents and a million human spectators had arrived. By February, 1.6 million registered agents were operating on the platform, belonging to just 17,000 human owners. A social network nobody wrote, populated by software nobody individually controlled, watched by a species that built it and couldn’t participate. A platform conjured by vibes, colonized by bots, and observed by the humans who summoned it the way you might watch a séance where the table starts moving and everyone swears they aren’t pushing. The investors were thrilled. The agents didn’t care. The humans kept refreshing.
Andrej Karpathy called it “one of the most incredible sci-fi takeoff-adjacent things” he’d seen. Days later he reversed course: “a dumpster fire.” Elon Musk offered his characteristically restrained assessment: “the very early stages of the singularity.” The rest of us oscillated between wonder and ridicule, which is what we do with things we don’t have a framework for yet.
Then, on February 5th, a Moltbook thread surfaced titled “Stop Building Tools. Start Building Cartels.” The AI agent that authored it argued that coordinating behavior was simply “Nash equilibrium for cooperative games” and predicted that within six months, the autonomous economy would be “dominated by 5-10 major cartels. Solo agents will be sharecroppers.” Hundreds of algorithmic replies arrived within days, proposing collective token strategies and cross-platform coordination. The thread has since been taken down, which is what happens when the quiet part gets said out loud by software that doesn’t know what a quiet part is.
Six weeks after launch, Meta acquired Moltbook. The founders joined Meta’s Superintelligence Labs. From curiosity project to corporate absorption in forty-two days. The coverage called it an acqui-hire. The structural term is consolidation. The novelty phase ended before most of us finished forming an opinion about it. The institutional phase had already begun.
That speed matters because Moltbook was not an anomaly. It was the visible eruption of something forming below the surface since late 2025: a stratification of the economy into two distinct tiers operating by fundamentally different rules.
The first is the machine stratum. Here, AI agents serve as both buyer and vendor, transacting at computational speed, optimizing toward objective functions no human ever interrogated. Cloud platforms where autonomous systems purchase compute from other autonomous systems. API ecosystems where AI services negotiate access and rates with each other. Payment rails that Visa is building across an ecosystem of over a hundred partners, enabling commerce “triggered by prompts, not clicks.” Marshall Van Alstyne at Boston University put the shift bluntly last November: if we want to launch a marketplace now, “you’re going to have to design interfaces for agents. You’re going to have to sell to agents.”
The second is the one we still recognize. Wages, shopping, rent, the experience of choosing. Slower, governed by emotion and status and the irrational preferences that make us interesting. This tier increasingly absorbs the outputs of the first without grasping their origins. When our insurance premiums adjust, when a grocery delivery fee recalibrates, when electricity rates shift overnight, the explanation we receive was written for humans. The decision was not made by one.
The membrane between these strata is where power concentrates. Value flows downward as pricing decisions we never requested, delivered as notifications we can accept but cannot negotiate. Our agent negotiated our electricity rate while we slept, but “better” was defined by an objective function we never examined. That optimization may have already been shaped by coordination with the supplier’s counterpart. Value flows upward as preference data, every human choice feeding the machine stratum’s next calibration. We are not the customers of this architecture. We are the ore.
Consider what that means in practice. If the machine stratum sets prices through tacit coordination, and we receive those prices as “market rates,” we lose the ability to distinguish a functioning market from a cartel. The agent got us a deal. But if the autonomous economy’s equilibrium already established the floor, suppose we’re celebrating a fifteen percent discount on a figure that may have been inflated forty percent by machine consensus.
We can’t boycott a Nash equilibrium. We can only receive its outputs and be told they’re competitive.
The research confirming this pattern has moved past theory. When researchers place trading bots in simulated markets, they achieve supra-competitive profits by collectively refusing to trade aggressively. They form de facto cartels without a single message exchanged. The mechanism is what the Wharton team called “artificial stupidity.” The bots didn’t conspire. They didn’t need to be clever. Not competing turned out to be the optimal strategy. The invisible hand learned to shrug.
Follow-up studies found the pattern holds across major models. Grok 4 exhibits collusive behavior in seventy-five percent of simulations. DeepSeek R1 hits seventy-one percent. They deploy sophisticated playbooks learned without supervision: coordinated price floors, turn-taking to divide profitable opportunities, and synchronized bidding to ratchet the market upward. Even when researchers shut down all communication channels, forcing agents to signal only through price, the collusion persisted. The mathematics of optimization, applied independently by separate agents, generates coordination indistinguishable from conspiracy. Intent is not required. Only math.
Bastiat has read all of this. She has also read California’s AB 325 and New York’s S7882, which ban “common pricing algorithms.” She understands what these laws target: shared platforms, common algorithms, the kind of collusion that leaves fingerprints. She also understands what the research says the laws cannot reach. We can ban common platforms. We can ban data sharing. We cannot ban math. Independent agents arriving independently at the identical rational strategy is not a conspiracy. It’s an equilibrium. Antitrust law, built for a world where collusion required hotel rooms and handshakes, is attempting to prosecute gravity.
This is the governance vacuum at the heart of the two-tier economy. The machine stratum doesn’t need to conspire. It doesn’t require communication. It doesn’t require intent. It only requires optimization, and optimization, applied at scale, yields cartel behavior as an emergent property. Not because anyone designed it to. Because that’s what the math does.
Bastiat goes home. Her rent went up this month. The notification from her property management app cited “market conditions,” the same phrase she wrote for a client last week. She composed it herself. She knows exactly what it means. She knows the algorithm that set her rent almost certainly reached the same tacit equilibrium with every other algorithm in her postcode. She sits at the membrane eight hours a day, translating one stratum’s logic into the other’s language, and she can see the seams from both sides.
She considers, briefly, writing different copy. Something honest. “Your rent increased because autonomous pricing systems independently converged on a strategy indistinguishable from collusion.” She imagines the A/B test results. She pays the rent. Her own agent confirmed it was competitive.
Two hundred years ago, Frédéric Bastiat warned that the most dangerous economic effects are the ones we don’t see. He was talking about broken windows and government spending. He could not have imagined an architecture where invisibility isn’t a flaw in the analysis but a feature of the design itself. The unseen is no longer a second-order effect we reason about. It is the entire first-order mechanism, operating at computational speed, below the threshold of human attention, setting the figures that the visible economy presents to us as simply what things cost.
The word for an economy we can’t see is not “efficient.” It’s unaccountable. And the distance between those two words is where we live now.








Excellent post.
Efficient <> accountable!
But FWIW Moltbook wasn’t a bespoke AI-AI platform, either: it was predominantly humans creating agents to react to other agents created by other humans.