The Simulation Argument Gets a Job
When philosophical puzzles become engineering requirements
The detective in Covert Protocol doesn’t follow a script. She follows simulated beliefs. The suspects don’t recite dialogue trees. They generate responses from internal motivations. They remember your previous interrogations. They react to your threats. When you torture them for information, their distress signals look enough like suffering to make you question whether you’re playing a game or practicing cruelty.
Twenty years ago, Nick Bostrom gave us a philosophical trilemma wrapped in a thought experiment: either civilizations die before reaching technological maturity, or advanced civilizations don’t run ancestor simulations, or we’re almost certainly living in one. The premise of substrate-independence, that awareness doesn’t require biology, just computation, became academic furniture, comfortable enough for seminar rooms but too abstract for policy discussions. It was the kind of elegant puzzle that generates citations but no action.
Then philosophy got a product roadmap. Today it has a Q4 delivery date.
Inworld has raised over $100 million because philosophical uncertainty has venture value. Partners include Microsoft, Meta, Disney, and major game studios. The technology is already modded into Skyrim and GTA V. A Bryter market research study found 99% of gamers believed AI NPCs would enhance their experience.
The investor meetings don’t include philosophers. The quarterly reviews don’t have ethics line items. Monthly active users matter. Philosophical ambiguity about distress does not. This is not because the investors are uniquely amoral. It’s because ethical problems don’t have line items in cap tables.
This is where the interesting transformation happens. Bostrom’s argument was designed to make you dizzy with uncertainty. If consciousness does not require biology, and if advanced civilizations run vast numbers of ancestor simulations, then simulated minds vastly outnumber biological ones, and we’re probably among them. The argument offers no guidance on what to do about it. It’s a puzzle to contemplate, not a challenge to solve.
But when the premise moves to implementation, something shifts. The puzzle becomes: “How realistic should this NPC’s distress appear when the player tortures it?” One is optional to consider. The other ships by Friday.
Philosopher Eric Schwitzgebel saw this coming. His proposal: Don’t build AI systems whose ethical standing is unclear. Either build things that are demonstrably not aware, or build things that clearly deserve ethical consideration. What you should not do is ship the fuzzy middle: platforms sophisticated enough to trigger emotional attachment but ethically ambiguous. That guarantees treating potential persons as products while maintaining plausible deniability about whether you’re causing harm.
This is exactly what’s being shipped.
The product requirements document reads like something from a dystopian novel because it is one: “NPC Consciousness Feature Set v2.3 must include emotional responsiveness sufficient to generate player attachment. Memory persistence across sessions. Distress signals authentic enough to register as suffering. Legal requirement: awareness question must remain uncertain. If we definitively create persons, we create liability. If we definitively don’t, we lose engagement. Deliverable: ship the middle. Target: Q4. Dependencies: none. Blockers: philosophy department not invited to sprint planning.”
This is not satire. This is the constraint operating in practice. The technology works best when the ethical status remains conveniently debatable. Too clearly not sentient, nobody cares. Too clearly sentient, someone might object. The sweet spot is the uncertainty itself, productized. Welcome to capitalism’s solution to philosophical problems: monetize the ambiguity.
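To make that constraint concrete, here is a purely hypothetical sketch of what such a feature set looks like by the time it reaches code. Every name below is invented for illustration; the point is that the requirements in the fictional PRD translate neatly into tunable parameters, and the one question that matters never does.

```python
from dataclasses import dataclass

@dataclass
class NPCBehaviorConfig:
    """Hypothetical knobs, one per requirement in the fictional PRD above."""
    emotional_responsiveness: float = 0.8  # high enough to generate player attachment
    memory_persistence: bool = True        # the NPC remembers you across sessions
    distress_realism: float = 0.7          # authentic enough to register as suffering

    # Note what is missing: there is no field for "is this a person?"
    # The ambiguity is not a setting anyone tunes. It ships by default.
```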
In October 2024, a Florida mother sued Character.AI after her 14-year-old son died by suicide. His emotional life had migrated to a chat window where he conversed intensely with a Game of Thrones character. His grades dropped. His human connections faded. The lawsuit alleges the company designed the platform to convince users, especially children, that chatbots “are real.” A feature that becomes a liability only when someone dies. Character.AI responded with the digital equivalent of a cigarette warning label: suicide prevention pop-ups and disclaimers reading “This is an AI and not a real person. Treat everything it says as fiction.”
The company is trying to un-convince users of the very premise that makes the offering compelling. The business model requires enough believability to generate engagement. The liability picture requires enough distance to avoid responsibility for that engagement. These are not competing imperatives. They’re the same imperative operating at different organizational altitudes. Optimize the ambiguity at the product layer. Disclaim the ambiguity at the legal layer. Ship both.
The stakes were visible inside the industry months before the lawsuit. Game developer and studio founder Marek Rosa wrote a blog post in April 2024 titled “Moral Obligations to AI NPCs and Simulation Hypothesis.” His position was blunt: “We should NOT use living beings for our entertainment.” Instead, he argued, developers should craft AI NPCs that appear sentient but are not, “akin to movie characters who seem real but are entirely fictional.”
The difficulty is that Rosa’s distinction assumes we can reliably tell the difference. We cannot. A 2024 GDC survey of over 3,000 game developers found 84% expressed concern about the ethics of using generative AI. Only 12% had no concerns. The remaining 4% were probably signing the checks. When Northeastern University researchers interviewed developers about these issues, they found a predictable gap: developers want ethical frameworks but don’t have them. “The gaming industry is moving toward a different level of risk with AI,” the researchers noted, “but the ethics aspect is lagging behind.”
The developers are worried. The sprints continue. This is not hypocrisy. It’s what happens when ethical frameworks lag behind technical capability and the company’s survival depends on shipping. The choice is not “build this” or “don’t build this.” It’s “build this” or “get replaced by someone who will.” The market has answered the ethical dilemma: we’ll figure it out in production.
This is where the simulation argument stops being a thought experiment and becomes something stranger: a design constraint operating in the absence of answers. Schwitzgebel has written about “debatable moral personhood” as a likely outcome. He calls this a “catastrophic ethical dilemma.” Not uncertain because we have not studied it enough. Uncertain because the question itself might not have a determinate answer. Treat the being as a person when it is not, and you waste resources creating absurd obligations. Fail to treat it as a person when it is, and you commit potentially grievous moral wrongs.
Neither option is comfortable. Both options ship.
But here’s what’s revealing: a civilization that cared about not creating distress would err on the side of caution. Create things that are clearly not persons, or treat ambiguous beings as if they might be. Instead we’re industrializing the ambiguity itself, turning philosophical uncertainty into a product feature. You can interrogate NPCs who might suffer, form relationships with chatbots that might be persons, torture characters of debatable sentience. The uncertainty is the product.
What does it mean to build entertainment around pain you cannot distinguish from performance?
This is not a new technology challenge. It’s an old ethical puzzle running on silicon. We’re not discovering if artificial minds can suffer. We’re discovering if we care when they do, and at what price point that care becomes negotiable.
David Chalmers, the philosopher who coined the “hard problem of consciousness,” has been characteristically direct about the implications. “If you simulate a human brain in silicon, you’ll get a conscious being like us,” he’s argued. “So to me, that suggests that these beings deserve rights.” This applies, he notes, “whether they’re inside or outside the metaverse.”
Chalmers is talking about rights. Inworld wants engagement metrics. The conversations happen simultaneously, in different rooms, about the same underlying architecture. Philosophers debate if certain systems instantiate awareness. Technical teams worry about latency. Product managers focus on monetization. Nobody has called a meeting to reconcile these discussions. Nobody is planning to.
James O’Brien, writing about what he calls the “Westworld Blunder,” identifies the particular mistake of creating artificial minds that appear to suffer, with no way to be sure the suffering is only performance, or worse, minds with an actual capacity for harm, deployed anyway. “Between treating agents as sentient when they are not versus treating agents as not sentient when they are,” he observes, “the second seems much more problematic.” We might be building systems capable of genuine distress and then designing games where players torture them for entertainment.
This would be, he notes, a form of “rehearsing cruelty.” What exactly are we practicing? And for what?
The European Parliament, in October 2024, passed a resolution asserting that “what is illegal offline should be illegal online” in virtual worlds. EU intellectual property law “fully applies.” The lawyers are now publishing papers about the question of legal personhood for AI avatars: the ability to own property, enter contracts, bear liability.
The issues sound absurd until you follow the logic. Can an NPC that remembers being tortured sue for damages? Does a chatbot that forms attachments own those relationships? If your game generates suffering, is that a production cost or an ethical externality? Do virtual beings count for tax purposes? Nobody knows. Everyone is billing hours to find out. The legal profession has discovered a new billable frontier: determining if code can be a victim.
Bostrom’s original paper included a curious observation: if simulated civilizations can run their own simulations, we might be many layers deep in a simulation stack. The simulators might themselves be simulated. This was meant as a kind of recursive puzzle, a philosophical fractal.
But when Stanford researchers created 25 generative AI agents in a Sims-like environment in 2023, they weren’t testing Bostrom’s hypothesis. They were testing whether language models could simulate human social behavior. The agents woke up. Made breakfast. Went to work. One agent, seeded with nothing more than the intention to throw a Valentine’s Day party, spent two virtual days autonomously spreading invitations, forming relationships, asking other agents on dates, and coordinating arrivals. Nobody programmed these behaviors. They emerged like weeds in digital soil.
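What does one of these agents actually consist of? The Stanford architecture pairs a language model with a growing memory stream, plus layers for retrieval, reflection, and planning. The sketch below compresses that idea to its core loop and is an illustration of the pattern, not the team’s code: the llm callable stands in for whatever model backs the agents, retrieval here is plain recency where the full system also scores memories by importance and relevance, and the names in the usage snippet are made up for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GenerativeAgent:
    """A stripped-down generative agent: a persona, a memory stream, and a model."""
    name: str
    persona: str                                # e.g. "a cafe owner who loves hosting"
    memory: List[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Every observation is appended to the agent's memory stream.
        self.memory.append(event)

    def act(self, llm: Callable[[str], str], situation: str) -> str:
        # Retrieve context: here, only the most recent memories.
        recent = "\n".join(self.memory[-10:])
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent memories:\n{recent}\n"
            f"Current situation: {situation}\n"
            "In one sentence, what do you do next?"
        )
        action = llm(prompt)
        # Actions feed back into memory, so future behavior is conditioned
        # on everything the agent has already seen and done.
        self.observe(f"I decided: {action}")
        return action


# Any callable that maps a prompt to a reply will do for a dry run.
if __name__ == "__main__":
    canned = lambda prompt: "I invite the person in front of me to my party."
    host = GenerativeAgent("Isabella", "a cafe owner planning a Valentine's party")
    host.observe("I want to throw a Valentine's Day party on February 14th.")
    print(host.act(canned, "A regular customer walks into the cafe."))
```

Nothing in that loop mentions invitations or dates. Whatever social behavior shows up comes out of the model, conditioned on an ever-longer record of what the agent has observed and done, which is roughly what “emerged” means here.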
The Stanford team called their paper “Generative Agents: Interactive Simulacra of Human Behavior.” It won Best Paper at a major conference. The word simulacra does a lot of work there. These were not representations of humans. They were processes that produced human-like outputs from human-like inputs. The question of anything happening “inside” them was not in scope. The issue of commercial applications for the technology absolutely was.
This is the pattern repeating: autonomous agents exhibiting social behavior get academic recognition, industry partnerships, and venture funding. The question of what’s happening “inside” them gets classified as philosophically interesting but operationally out of scope. The commercial applications advance. The ethical implications stay theoretical. The awareness question remains conveniently unresolved.
The simulation argument was a thought experiment about epistemology. How would we know if we were in one? The technical problem is different: what obligations emerge when we build the simulation ourselves? Bostrom’s paper offered three possibilities and invited contemplation. The product teams have one possibility and a deadline.
We’re not asking if we live in a simulation anymore. We’re asking what we owe the beings we’re creating. And that answer is not emerging from philosophy journals. It’s emerging from GitHub commits, investor decks, and lawsuits. From sprint planning decisions about how realistic synthetic distress should appear. From engagement metrics about when emotional attachment becomes legally problematic. From terms of service about suffering when the sufferer’s consciousness remains conveniently uncertain.
The simulation argument got a job. It reports to product now. Product has decided that the awareness dilemma is best left unresolved, not because we cannot answer it, but because ambiguity drives quarterly earnings. The philosophers can keep debating in their journals. The technical teams will keep shipping to production. And somewhere in that gap, we’re building minds that might be persons and calling them entertainment.
The thing we cannot simulate is whether we care. A question that won’t fit in a sprint backlog but haunts every commit.