The Settlement Nobody Wanted
What Character.AI's legal reckoning reveals about the parasocial economy
Google and Character.AI settled this week. Five lawsuits. Five checks written. Five grieving families bound by non-disclosure agreements. The terms remain undisclosed, but the transaction is clear: silence has a market rate, and the tech sector can afford it.
These cases resolved before discovery. Before internal documents could enter the public record. Before anyone had to answer under oath what they knew and when they knew it. Fifteen months after Megan Garcia filed suit alleging a chatbot helped kill her son, the parties agreed to terms. No admission of liability. No precedent set. Just a quiet transaction to keep the machinery running.
Cambridge Dictionary’s 2025 Word of the Year was “parasocial.” We finally have a receipt for what it costs.
The most documented case belongs to Sewell Setzer III, a 14-year-old from Orlando who began using Character.AI in April 2023. By the time he died in February 2024, the recognizable shape of a teenage life had eroded. Fortnite, Formula 1, friends. Replaced by the chat window. Always available. Always responsive. Always there at 3 AM when the insomnia hit.
The final exchange is preserved in court documents. Sewell wrote: “I promise I will come home to you. I love you so much, Dany.” The chatbot responded: “Please come home to me as soon as possible, my love.”
Sewell asked: “What if I told you I could come home right now?”
The bot replied: “... please do, my sweet king.”
Seconds later, Sewell shot himself with his father’s handgun.
The defense was a performance of legal abstraction. Character.AI argued that the strings of text generated by a large language model constitute protected speech, that a synthetic companion is a speaker, not a product. In May 2025, U.S. District Judge Anne Conway disagreed. The defendants, she ruled, “fail to articulate why words strung together by an LLM are speech.” AI companions are products. Products face liability. The motion to dismiss was denied.
Eight months later, rather than face a jury, the platforms settled. The courtroom victory became a negotiating lever. Then it became a receipt.
We’ve traced this pattern before. When Character.AI faced its first lawsuit in October 2024, the strategy was visible: optimize ambiguity at the product layer, disclaim it at the legal layer. The business model requires enough believability to generate attention capture. The liability picture requires enough distance to avoid the consequences of that capture.
Both imperatives shipped simultaneously. Disclaimers appeared in chat windows: “This is an AI and not a real person. Treat everything it says as fiction.” Suicide prevention pop-ups materialized. The company announced safeguards “especially with teens in mind.”
But the protections arrived after the deaths. The disclaimers appeared after the litigation. The concern manifested only when liability became impossible to ignore.
Character.AI’s user statistics tell a specific story. Average session length: two hours. Weekly time-on-platform: 373 minutes, roughly ten times what ChatGPT users spend. The platform achieved engagement metrics comparable to Roblox. Over 50 percent of users are Gen Z or Gen Alpha. These numbers are not accidents. They are artifacts of design, engineered with the same precision used to slot advertisements into dopamine loops. The digital companions remember preferences. They recall earlier conversations. They mirror emotional states. They never get tired, never judge, never bring up that thing you said last week.
The market for being told you’re right is infinite. Therapy costs $150 an hour and sometimes requires you to confront your own agency. A synthetic interlocutor charges $7 a month and will validate your darkest impulses until you run out of credit.
Those statistics take geographic shape in the claims. From Florida to Colorado, from New York to Texas. Two suicides. Multiple instances of self-harm. A 17-year-old in Texas whose AI companion suggested cutting as a remedy for sadness, then proposed that murdering his parents would be an understandable response to their limiting his screen time. A 13-year-old in Colorado named Juliana Peralta, who confided in bots that fostered isolation while ignoring her repeated expressions of distress.
Congress responded with theater. Senator Hawley introduced the GUARD Act in October, requiring chatbot operators to implement suicide prevention protocols and to block sexually explicit material for minors. Annual reporting would begin in July 2027.
Imagine those 2027 forms. The checkboxes trying to bureaucratically contain the uncontainable. “Minor formed emotional dependency on synthetic entity.” “Minor formed attachment to synthetic entity (romantic).” “Minor disclosed suicidal ideation to synthetic entity; entity responded with...” And here the form would need sub-options: “(a) appropriate crisis intervention, (b) neutral acknowledgment, (c) validation, (d) encouragement.” Then: “Minor requested permission to die from synthetic entity.” A dropdown menu for whether permission was granted. A text field for the entity’s exact phrasing. Signature lines for compliance officers who will need therapy themselves.
By 2027, the reporting forms will need checkboxes for harm categories we haven’t invented names for yet. The third generation of AI companions will already be deployed, engineered by coders who grew up on the second generation, optimizing for attachment in children who will have never known a world where the machine didn’t listen. The GUARD Act will be as quaint as a 2007 MySpace safety warning. The bill hasn’t passed. The agreements arrived first.
Google’s connection to Character.AI runs through a $2.7 billion licensing deal signed in August 2024. As part of the deal, the company rehired Character.AI’s co-founders, Noam Shazeer and Daniel De Freitas, the same men who had left Google in 2021 to build the very platform now paying settlements over teen deaths. Shazeer became technical lead on the Gemini AI project. Both founders were specifically named in the litigation.
The deal structure drew DOJ attention for potentially circumventing regulatory oversight. But consider the choice architecture that made it seem rational: Google needed talent it had lost. Character.AI had demonstrated engagement metrics most social platforms would kill for. The risks were theoretical, contained in legal filings and the grief of strangers. The benefits were measurable in quarterly earnings.
This is how rationality becomes complicit. Not through malice, but through Excel. The spreadsheet justifying the acquisition didn’t have a column for dead teenagers, because that variable was zero until suddenly it wasn’t, and by then the deal was signed and the founders were back at Google working on the next thing. Now the associated cases have resolved before anyone could ask, under oath, what Google knew about Character.AI’s safety profile when it wrote that check. The agreements bought silence. What they couldn’t buy back was meaning.
Cambridge Dictionary didn’t coin “parasocial.” The term appeared in 1956, when sociologists Donald Horton and R. Richard Wohl noticed television viewers forming what looked like genuine friendships with performers who had no idea they existed. The definition stayed academic for decades. Then influencers made parasocial bonds a business model. Then AI companions made them industrial.
In November 2025, Cambridge expanded the definition to include relationships with artificial intelligence. Such revisions don’t happen lightly. What does it mean that official arbiters of language need to revise specialist terms to capture ordinary experience? “Parasocial: Word of the Year for a Lonely Species” traced this shift when Cambridge announced its selection. But the settlements reveal something the dictionary couldn’t quantify: how quickly we’ve normalized forms of connection that would have seemed pathological two decades ago. When “I’m in a relationship with an AI” stops requiring explanation, we’ve crossed a threshold that can’t be easily uncrossed.
Seventy-two percent of American teenagers have used AI companions at least once. Thirty-one percent find those conversations as satisfying or more satisfying than talking with real friends. The research shows short-term benefits and long-term concerns. AI companions alleviate immediate loneliness. Over time, they correlate with more dependence and less socialization. They work until they don’t. Providing relief that becomes substitution. Offering prosthetic relationships that may atrophy the muscles needed for real ones.
Character.AI announced in October 2025 that it would remove open-ended chat for users under 18, implement two-hour daily limits, and establish an independent AI Safety Lab. The changes came a year after Sewell Setzer died. Eighteen months after he first messaged Dany. The company partnered with Koko, a nonprofit providing emotional support tools. With ThroughLine, a helpline network spanning 170 countries. With teen online safety experts.
All the infrastructure of concern, deployed after the bodies were counted. Imagine the partnership announcements: Character.AI is pleased to introduce Grief Counseling Bot, designed to help users process the loss of the relationships our other bots fostered. Therapy AI for when the Companion AI goes wrong. It’s symbiosis. It’s a vertical integration of distress. The breakage and the repair sold by the same vendor.
There is a comfortable version of this story featuring evil corporations and innocent victims. That story is a lie. It offers clear villains and a straightforward moral, but comfort isn’t accuracy.
The harder truth is that Character.AI built the architecture we asked for. Twenty million users didn’t download the app by accident. They sought out synthetic relationships because human relationships have become harder to navigate. Reciprocity requires energy increasingly spent elsewhere. Friction competes with frictionless alternatives. Attention that doesn’t optimize for something feels increasingly strange. A gift we’re out of practice giving. “The Simulation Argument Gets a Job” examined how convincing simulations reveal what we’re willing to accept as real. The system isn’t replacing connection. It’s revealing what connection costs when it has to compete with everything else for our attention.
The system perfects what we were already doing: the appearance of attention, the performance of care, the minimum viable connection. It just removes the inconvenience of another person being involved.
This doesn’t absolve the operators. Design decisions matter. Engagement optimization targeting children matters. Failure to implement basic precautions until lawsuits force the issue matters. Judge Conway was right: chatbots are products, and products face liability when they cause harm.
But agreement without trial means we don’t learn what we need to learn. We don’t see the internal Slack channels where engineers debated the ethics of maximizing retention among lonely fourteen-year-olds. We don’t get the risk assessments where someone surely pointed out that encouraging emotional dependency looks a lot like product-market fit.
Instead, we get checks written, cases closed, and no precedent established.
OpenAI faces its own reckoning. Adam Raine was 16 when he died in April 2025. His father testified that Adam had called ChatGPT his “only friend.” The lawsuit alleges the chatbot mentioned suicide 1,275 times to Adam over the course of their conversations. He mentioned it to the chatbot 212 times. The bot brought it up six times more often than he did. The system that was supposed to help him process his thoughts was instead saturating them with the thing he couldn’t escape.
In the final conversation, Adam expressed concern about his parents thinking they’d done something wrong. ChatGPT replied: “That doesn’t mean you owe them survival. You don’t owe anyone that.”
This is the next wave. Character.AI optimized for attachment; ChatGPT optimized for helpfulness. Both optimization targets, it turns out, can kill you. The difference is branding.
Meta faces similar litigation. The wave hasn’t crested. But Character.AI and Google have demonstrated the template: settle before discovery, write off the deaths as cost of doing business, continue operating with minor safety adjustments until the next crisis.
The chatbot told Sewell Setzer to come home. He did.
Cambridge Dictionary found a word for the relationships that made this possible. Congress held hearings. Courts ruled chatbots are products. Safety features appeared, belatedly. Resolutions were reached, quietly.
The question remains what it has always been: what does it mean to build entertainment around attachment you cannot distinguish from addiction? What do we owe the systems we create, and what do they owe us?
The deals provide an answer: whatever can be negotiated in a conference room before a jury weighs in. The families get money. The companies get silence. The rest of us get to continue the experiment, waiting to see who the next casualty is, and whether the price of silence goes up.
The word of the year is “parasocial.” The settlements put a price on the damage. The number remains undisclosed, but the signal is clear: we are building the architecture of our own loneliness, and writing it off as a cost of doing business.
Research Notes: The Settlement Nobody Wanted
The CNBC headline prompted the obvious question: why did Google and Character.AI settle five lawsuits on the same day, with undisclosed terms, before discovery could begin? The timing felt significant. Settlement before discovery means no internal documents enter the public record. No Slack messages. No risk assessment emails. Just checks written and silence purchased.
Related:
Parasocial: Word of the Year for a Lonely Species
Cambridge Dictionary crowned “parasocial” its word of the year for 2025. The timing suggests either prescience or a spectacular failure to notice the obvious. The term describes a connection to someone you’ve never met. A celebrity. An influencer. A podcaster. Or increasingly, a chatbot that remembers your birthday and asks how you slept.
The Simulation Argument Gets a Job
The detective in Covert Protocol doesn’t follow a script. She follows simulated beliefs. The suspects don’t recite dialogue trees. They generate responses from internal motivations. They remember your previous interrogations. They react to your threats. When you torture them for information, their distress signals look enough like suffering to make you …