Sovereignty as a Service
Nations don’t get conquered anymore. They get deprecated.
Vladimir Putin says whoever leads in AI will rule the world. We repeat this as gospel, but we’ve misunderstood the prophecy. The revealing part isn’t “rule the world,” as if intelligence were territory waiting to be conquered, like Poland but with better venture capital. The revealing part is the assumption that nations will still be doing the ruling.
Try explaining to your citizens that their healthcare system went offline because your country’s OpenAI account got suspended for violating section 12.4(b) of the acceptable use policy.
Sovereignty used to mean a monopoly on legitimate violence. Now it’s a monopoly on legitimate uncertainty: who decides what the model won’t do, won’t say, won’t optimize. That boundary is becoming a corporate artifact. The anthem is a service agreement. The border is a usage limit. The treaty is an enterprise license that renews annually. Unless it doesn’t.
America spent $67 billion on AI in 2023. China spent $7 billion. Europe spent years negotiating a 113-article legal framework for technology someone else is building. That $60 billion gap isn’t just money. It’s the difference between writing the future and reading the release notes. For smaller nations watching this unfold, dependency isn’t a risk. It’s the business model. You rent cognition from platforms that can unilaterally “deprecate” features of your national life. And will, the moment those features cost more than the subscription revenue justifies.
The bifurcated world analysts predict (Chinese AI behind a digital iron curtain, Western AI that doesn’t operate in China at all) misses what happens to everyone else. Most nations won’t build sovereign AI infrastructure. The cost isn’t measured in billions for training runs. It’s measured in decades of technical talent development, rare earth mineral control, and the kind of patient capital that survives multiple election cycles.
You either build the infrastructure or you become someone’s customer.
Being a customer means accepting that core functions of your state depend on platforms that answer to shareholders in San Francisco or Beijing or whichever city hits product-market fit on nation-state dependency first. It means your hospitals run on models that can be patched, upgraded, or discontinued based on quarterly earnings guidance. Your courts interpret law using systems trained on someone else’s corpus of what law means. Your schools teach using curriculum filtered through someone else’s judgment about what knowledge serves whom.
None of this requires conspiracy. It requires economics.
Building competitive AI infrastructure can cost as much as a small country’s entire annual budget. Maintaining it costs more. The talent required exists in perhaps six cities globally, with companies hiring each other’s employees at multiples of what national governments can offer. The advanced semiconductor fabrication required exists in three companies across three countries, and involves supply chains spanning seventeen jurisdictions.
You’re not building your own infrastructure. You’re renting from whoever did.
When economists project AI adding $15 trillion to the global economy by 2030, they mean redistribution. That money comes from somewhere: countries that didn’t move fast enough, workers whose jobs became code, regions that thought they had another decade. But the deeper redistribution is sovereignty itself. Nations that can’t build AI infrastructure become markets for nations that can.
The relationship isn’t colonial in the traditional sense. Nobody’s drawing new maps. It’s infrastructural. Power flows through the architecture of what’s possible, and if you don’t control the architecture, you rent access to possibility itself.
This becomes concrete when you consider “AI alignment” at national scale. A globally deployed assistant with a single moral firmware is governance at infinite scale. The engineer in Palo Alto debugging a refusal pattern isn’t writing code. She’s writing behavioral limits for billions of people who don’t share her assumptions about what questions deserve answers, what truths need protecting, what values are universal enough to encode.
She knows this. She has the degree. But the sprint ends Friday and someone has to ship something, so she encodes her best guess about human nature between stand-up and lunch. The model ships to 190 countries that afternoon. In six months, the refusal patterns will reveal more about her psychology than any therapy session could. But by then it’s just how the system works, which is another way of saying it’s invisible.
If alignment were honest, it would publish the civilizational premises up front and accept contestation as a feature. It would let communities fork the normative core, not just the UI. Instead, alignment is governance without consent, deployed at scale, optimized for engagement.
Most nations won’t get a fork. They’ll get the standard enterprise package with regional language support.
The sovereignty question becomes sharpest in crisis. In 2033, a Caribbean nation faces a financial emergency. Its banking system, modernized five years earlier with AI risk assessment from a major platform, starts making lending decisions without government approval. The pattern is subtle: certain sectors starved of credit, certain regions systematically disadvantaged. Nobody programmed this outcome. The model learned it from global training data in which similar economic conditions preceded similar patterns. The model is working as designed.
The government wants to intervene. The platform reminds them that overriding the model’s decisions voids their service warranty and violates their acceptable use policy. The alternative is returning to the pre-AI banking system, which would require regulatory rollback, staff retraining, and explaining to citizens why financial services just got slower and less accurate.
The platform offers a compromise: they’ll review the model’s behavior in their next quarterly audit cycle. Timeline: six to eight months. The financial crisis is happening now. The model continues making decisions the government didn’t authorize based on patterns the government doesn’t fully understand using training data the government has never seen.
Picture the support ticket. Subject: Urgent - National Financial Crisis. Priority: Medium. Estimated Response Time: 6-8 months. “Thank you for contacting Platform Support. We understand this is frustrating. Have you tried restarting your economy?”
This is governance by dependency. The crisis gets resolved eventually. The platform adjusts something. The government accepts something. They find a middle path that preserves everyone’s dignity in the press release. But the fundamental relationship has been established: sovereignty is now a negotiation between states and platforms, and platforms hold all the technical leverage.
You can see this emerging in how nations discuss “AI sovereignty” as a policy goal. It’s always aspirational, always requires multi-year initiatives, always assumes the problem is temporary. As if catching up were just a matter of political will and public investment. The gap isn’t narrowing. It’s compounding. Every quarter that passes makes the technical requirements more complex, the talent pool more concentrated, the infrastructure more expensive.
Nations that can’t build AI infrastructure will negotiate with those that did. The negotiation won’t look like nineteenth-century imperialism because nobody needs territory when you can rent out thinking itself. It will look like platform governance: terms of service that read like treaties, service level agreements that function as constitutions, and customer support tickets where sovereignty goes to die.
Sovereignty was always partly fiction. A useful story nations told about control they only sometimes possessed. But it was fiction nations told about themselves. Now it’s fiction hosted on someone else’s infrastructure, cached on someone else’s servers, and subject to deprecation in someone else’s product roadmap.
The interesting question isn’t whether this is good or bad. The interesting question is whether nations that become customers of cognition can remain nations in any meaningful sense, or whether we’re watching the nation-state get quietly deprecated in favor of a more efficient organizational structure: platforms that provide intelligence as a service and states that license it for their populations.
We’re not building toward a new world order. We’re building toward a new world operating system. And operating systems don’t negotiate. They set terms. You accept them or your nation doesn’t boot.
The race to AI dominance isn’t about who rules the world. It’s about who writes the rules the world runs on. Once those rules are in production, “sovereignty” becomes whatever your service tier includes.
Everything else is just customer support.
Future Tense is a weekly newsletter examining the moment technology stops being a tool and starts being a condition of life. Subscribe for analysis that arrives six months before the headlines.