The Patriot Test
What happens when a government blacklists its own best technology for refusing to be obedient
Consider the compression: a decade of institutional decay packed into four hours on a Friday afternoon.
The President declared a vendor a national security threat. The Defense Secretary followed on X. Before markets closed, the contract had been reassigned to a rival who admitted the timing “looked opportunistic.” The timeline is the tell. The apparatus produced a national security designation, an executive order, and a replacement contract in the time it takes to watch a movie. It was a loyalty test administered by stopwatch.
The “supply chain risk” framework was calibrated for foreign adversaries, designed to sever dependence on technology that answers to a hostile state. The logic was straightforward: if the company that made your equipment answers to Beijing, your equipment answers to Beijing.
The Pentagon built that apparatus to keep foreign governments out of American technology. Then it deployed that apparatus to demand unrestricted access to American technology. Which is either irony or consistency, depending on how you read the word control.
Pointing this apparatus at a San Francisco company requires a redefinition of “adversary” that the framework’s authors never intended, or perhaps exactly what they feared. The category has mutated. “Adversary” no longer designates loyalty to a hostile state. It designates the absence of sufficient loyalty to the current one. The machinery built to exclude foreign threats has been repurposed to enforce domestic obedience.
That this mutation generated no institutional objection is worth sitting with. Not a constitutional crisis, not a scandal with hearings, not even a committee letter. A Friday news cycle and resumed talks by the following week. An apparatus built to protect Americans from foreign surveillance had been pointed at an American company for refusing to enable domestic surveillance of Americans, and the system processed it as a contract dispute.
Anthropic is a San Francisco company. Its AI model ran in the Pentagon’s classified networks under a $200 million contract. The Pentagon praised the model’s capabilities and used it to help capture Nicolás Maduro in January 2026. The reason for the designation is documented: Anthropic refused to remove two restrictions from its contract. Claude could not be used for mass domestic surveillance of Americans, and could not power fully autonomous weapons systems. Everything else, Anthropic accepted.
The undersecretary of defense for research and engineering, Emil Michael, gave Anthropic until end of business on a Friday to comply. Anthropic declined. Michael then called Anthropic’s CEO “a liar” with a “God complex,” accused him of wanting to personally control the U.S. military, and the designation followed.
For an American company to receive this label is, as far as anyone can establish, something new in kind. The designation exists for adversaries. Deploying it against a domestic vendor whose actual offense was negotiating contract terms isn’t security policy. It’s punishment policy. The message to every other American tech company is legible: negotiate means accept. The alternative is to be treated like a Chinese intelligence front.
The dispute compressed, at its center, to three words: all lawful purposes.
The Pentagon wanted Anthropic to agree that Claude could be used for any lawful purpose. Anthropic wanted explicit contractual prohibitions on the two specific use cases it found unacceptable. OpenAI accepted the all lawful purposes standard. Legal analysts subsequently noted the distinction: OpenAI’s contract gives the government a promise not to break the law. Anthropic wanted the law to be the floor, not the ceiling.
The “all lawful purposes” formulation rests on a convenient fiction: that the law is a ceiling rather than a floor.
History suggests the law is more like a hallway, walls that can be moved by whoever holds the blueprints. Anthropic’s refusal was an attempt to bolt the doors shut. The Pentagon’s demand was an insistence on keeping the hinges oiled.
Representative Sam Liccardo, speaking before a committee vote on an amendment that would have protected AI companies from Pentagon retaliation for maintaining safety restrictions, put it plainly: “There is no law. The law is years behind the technology.” The amendment failed. Nobody filled the gap with legislation. They filled it with a contract.
“Lawful” is not a stable category. It shifts with whoever operates the machinery. Anthropic’s contractual prohibitions would have outlasted any particular administration’s interpretation of the law. That appears to be precisely what the Pentagon found unacceptable: not only what the restrictions prevented, but the fact that they were restrictions at all, inscribed in a contract that no subsequent attorney general could reinterpret.
OpenAI accepted the standard. Its CEO told staff that the company shared the same red lines as Anthropic, that it believed in the same principles, that the decision was about outcomes, not obedience. He said this on the same day he acknowledged the deal “looked opportunistic and sloppy.” Both things may be true. Neither changes the structural outcome, which is that one company holds the restrictions on paper and the other holds them in conscience.
The personal dimension is harder to assess, and more revealing for it.
Dario Amodei sent a 1,600-word internal memo to his staff after the Friday events. It leaked. A CEO stages a moral stand for an internal audience. The staging leaks with characteristic precision. The apology for the leak eclipses the substance of the conflict.
To inhabit that logic: you are a CEO who has just refused a government contract on ethical grounds. You write 1,600 words of careful moral reasoning for your staff. You know, because you are sophisticated, that every internal memo of consequence gets screenshotted and leaked. The performance of sincerity is the only available mode when your audience is simultaneously your employees, your future regulators, and the journalists who will quote the leak. The unsettling possibility is that the performance and the sincerity are the same thing: the memo is both completely genuine and completely performative, sincerity staged for an audience of millions, which is the only way sincerity happens now.
This is how accountability performs in the algorithmic age: sincere enough to generate headlines, fluid enough to evaporate before the next meeting. The memo was real. The apology was real. The resumed talks were real. They are all simultaneously real, which is different from any of them mattering in the way the memo’s rhetoric implied they should.
The memo’s substance is also worth noting. Amodei characterized OpenAI’s deal as roughly twenty percent genuine safety commitment and eighty percent performance, said the public messaging around it was “straight up lies,” and argued that OpenAI had prioritized keeping its employees comfortable over actually preventing abuses. He also offered his own theory of the dispute’s political substrate: that the Trump administration’s objection to Anthropic was substantially explained by the fact that the company hadn’t donated to Trump or offered political praise, while OpenAI had done both.
The numbers are in the public record. OpenAI’s president donated twenty-five million dollars to Trump’s MAGA Inc. super PAC in September 2025. OpenAI’s CEO donated a million dollars to Trump’s inaugural fund. Anthropic’s CEO did not. In Amodei’s reading, “supply chain risk” measures not security exposure but loyalty. Patriotism, in this accounting, is denominated in contribution receipts.
Emil Michael responded to the leaked memo by calling Amodei a liar again. Amodei apologized for the leak. Talks resumed within the week.
The market rendered its own verdict. ChatGPT uninstalls surged nearly three hundred percent. Claude hit the top of the App Store. A movement called QuitGPT claimed a million and a half users pledging to delete the app. The government and the market were voting simultaneously, in opposite directions. One transaction was political; the other was personal. Guess which one people felt they could actually influence.
At the moment of the blacklisting, Claude was operationally embedded in classified military networks supporting U.S. Central Command in the Middle East. Hours after the President ordered every federal agency to immediately cease use of Anthropic’s technology, that technology was running active military operations in Iran. It was not being wound down. It was being used. The designation carried bureaucratic weight but zero operational consequence. The machinery of state produced a press release and a contract, then kept right on running.
The Pentagon has officially designated Anthropic a national security risk. Anthropic’s technology is running a national security operation. Hold these two propositions in your mind simultaneously and notice what happens: nothing happens. The contradiction produces no crisis, at least while the conflict continues and the classified systems stay embedded.
Picture the internal logic as a flowchart. Box one: Company refuses to remove contractual limits on surveillance and autonomous weapons. Box two: Company designated foreign-style threat to American security. Box three: Company’s AI continues running the war. The flowchart fits on a single page. No one in the chain of command appears to have found it worth pausing over. The machinery produced a designation on Friday afternoon. The same machinery kept running its classified operations Friday night. Self-parody at scale looks exactly like this: indistinguishable from policy, because it is policy.
This is the kind of paradox that tends to resolve through negotiations. Which is what happened: despite the designation, despite the legal threats, despite Michael calling Amodei a liar in public, the two sides resumed talking within days. Operational reality has a way of overriding political theater when the alternative is pulling the AI out of an active war.
What does it reveal that app deletion became the available meaningful gesture? QuitGPT gathered a million and a half people around a shared act of removing software. The political relationship between citizen and AI governance has no clear handle, no lever, no address. The consumer relationship does: it has a delete button. That a million and a half people reached for that button as their most coherent civic act says something about how participation has been recoded: the legible relationship, the one with visible feedback, is the commercial one. The consumer and the citizen have been running together for so long that people now reach instinctively for the one that responds.
The four-hour timeline is its own kind of signal. The machinery’s slowness was by design. Courts, agencies, and interagency processes exist specifically to prevent hasty decisions about technology embedded in critical systems. On a Friday afternoon, the entire apparatus produced a national security designation, an executive order, and a replacement contract in roughly four hours. The machinery that was supposed to be a brake became an accelerant. That this produced no institutional objection suggests the slowness was always discretionary, a habit rather than a safeguard, dropped the moment the people operating the machinery wanted speed.
The arithmetic is brutal. Two companies, identical capabilities, divergent donation histories. The variable separating “partner” from “risk” was the check.
What cultural shift produced this? Not the event itself: political punishment of private companies is as old as politics. What’s new is the vocabulary. “Supply chain risk” borrowed its credibility from years of careful work building a genuine security apparatus around genuine adversaries. That credibility became transferable, deployable against domestic companies that decline to perform fealty. The apparatus didn’t fail. It was repurposed. And repurposing a national security designation as a negotiating tool produces exactly the response visible here: four hours of news cycle, a leaked memo, and resumed talks. Not the response a genuine security crisis generates. The response a business negotiation generates when one party has escalation tools the other lacks.
We are building a world where the safety of our tools is determined by the obedience of their makers.
Patriotism is no longer a sentiment. It is a transaction cost. We used to call this security. Now we just call it procurement. The difference is semantic. The consequence is not.