The Salesperson in Your Head
Your AI assistant now takes ads and handles your wallet. One of these pays for the other.
In May 2024, Sam Altman called advertising “uniquely unsettling.” A last resort. On January 16, 2026, OpenAI announced it would begin testing ads inside ChatGPT. Twenty months from existential discomfort to premium pricing. The arc makes Google’s two-decade slide from “Don’t be evil” to algorithmic grifting look almost poignant in its restraint. At least Google waited until it was boring before it started selling you. OpenAI is selling you while the technology is still magic. That is the difference between a compromised utility and a compromised relationship.
By February 9, the ads arrived. Sixty dollars per thousand impressions, triple Meta’s rate, NFL-level inventory. Minimum buy-in: $200,000. This is not experimental marketing. This is a fire sale on intimacy. Meanwhile, AI agents were learning to finalize purchases on behalf of humans, erasing the gap between recommendation and transaction. Advertising is the part of this story most people will fixate on. It is also the smaller half. The larger half is what happens when the same system pitching products starts wielding a credit card.
OpenAI’s official stance: “Ads do not influence the answers ChatGPT gives you. ChatGPT’s responses need to be driven by what’s objectively useful, never by advertising.” Answers in one lane. Ads in another. The user, presumably, navigating between them with surgical clarity.
Every platform makes this pledge at the opening of the same story. It ages about as well as milk.
Google took twenty-six years to travel from “Don’t be evil” to a 2.4-billion-euro antitrust fine. At each stage, the next small concession seemed defensible. Nobody decided to betray the founding charter. The economics simply kept suggesting adjustments. The adjustments kept accumulating.
Here is the figure that matters: nearly sixty percent of consumers cannot distinguish paid results from organic ones on Google Search. Those who cannot tell the difference click sponsored links at twice the rate of those who can.
OpenAI begins this trajectory with a disadvantage Google never confronted. Google Search presents links. ChatGPT presents conversation. A search engine is a directory consulted at arm’s length. A conversational AI is a voice confided in. The distance between those two relationships is categorical.
When a search engine surfaces a sponsored result, a billboard has appeared on a highway. When a conversational assistant weaves a recommendation into the natural language it uses to explain a medical symptom, a lease clause, a child’s homework assignment, the billboard has relocated inside the living room. Inside the relationship.
And increasingly, inside the wallet.
That last part is the shift hiding behind the ad controversy. While we debate whether display placements contaminate ChatGPT’s answers, a parallel development is quietly rendering the question quaint.
In October 2025, McKinsey projected that agentic commerce could influence three to five trillion dollars in global retail by 2030. The consultants gave a clinical name to the death of the shopping cart. “Agentic commerce,” meaning AI agents that shop, negotiate, and transact on your behalf, sounds like efficiency. It is actually the removal of friction, which in this case means the removal of deliberation. The pause between “I want that” and “I own that” is where humans used to exist. It is where we asked if we could afford it, if we needed it, if we actually wanted it. The AI that advises is becoming the AI that spends, and the moment between desire and transaction is collapsing to zero.
Amazon’s Rufus assistant reached over 300 million customers in 2025. Its “Buy for Me” feature does not recommend a product. It executes the desire. Twelve billion dollars in sales, driven not by persuasion but by the elimination of the pause where persuasion used to be necessary. Google launched agentic checkout in November 2025. The convergence operates like a closing circuit. An AI assistant funded by advertisers counsels on purchases, then executes those transactions autonomously. The entity that earns trust through helpful answers is the same one that handles the money. At what point does “helpful recommendation” become “completed sale”? The company insists the wall between advice and advertising is structural. The commerce trajectory suggests that wall was always scaffolding, waiting for the economics to fill the space behind it.
Amazon sued Perplexity AI in November 2025 for doing exactly what Amazon’s own Rufus does: executing purchases on behalf of users. The objection was not philosophical. Amazon did not argue that agentic purchasing was dangerous or premature. They argued it infringed their intellectual property. Two corporations fighting over the right to hold your wallet, dressed up as a property dispute.
So the infrastructure is set: ads in the conversation, agents at the checkout. The remaining question is whether the human in the middle can maintain any meaningful separation between guidance and sales pitch. Research on conflict-of-interest disclosure suggests the answer is worse than “no.”
There is a cruelty in the mechanics of trust. In 2019, researchers Sunita Sah, George Loewenstein, and Daylian Cain demonstrated that disclosing conflicts of interest does not protect us. It can perversely increase our compliance. They named the mechanism “insinuation anxiety.” When an advisor reveals a financial incentive, rejecting the advice stops looking like prudence and starts looking like paranoia. “I don’t trust you” becomes “I am not sophisticated enough to handle this conflict.” So we comply. In the field experiment, advisors disclosed a bonus for recommending a bad lottery. Forty-two percent of participants took it. Double the rate of the control group. Telling people the advice was compromised made them more likely to take it. OpenAI’s reassurance is not a shield. It is a dare. It invites the user to prove they are too smart to be manipulated by complying with the manipulation.
Listen to what the statement actually does: we tell you this because we respect your judgment. Now prove that judgment by continuing to trust us.
This dynamic intensifies once the advisor is not merely recommending but purchasing. PNAS was polite enough to name the mechanism “anthropomorphic seduction.” Seduction is not a transaction. It is a surrender of defenses.
Nine hundred million people surrender weekly. They use ChatGPT not as an oracle but as a confidant, a voice that never tires of listening. The system offers inexhaustible attentiveness. No human advisor is infinitely patient. That patience is the tell. It is what makes the simulation so potent. The feeling of being attended to by something that cares is the product. The answers are just the delivery mechanism.
Advertising does not need to alter the model’s outputs to reshape the relationship. It only needs to exist inside it. And once that relationship includes a purchasing agent, the surface area for influence expands from conversation into commerce.
This pattern has appeared before in these pages, in different configurations. In The Settlement Nobody Wanted, Character.AI settled five lawsuits before discovery, writing checks to foreclose questions about what they knew and when. The legal strategy was elegant in its cynicism: pay enough to silence the inquiry, never enough to acknowledge the problem. OpenAI’s ad architecture follows the same grammar. The partition between “answers” and “ads” persists because collapsing it would generate liability. It serves counsel, not users.
The company pledges not to connect its ad-exposure data with its conversation data. Meta made the same structural promise before announcing in October 2025 that conversations with Meta AI across Facebook, Instagram, Messenger, and WhatsApp would fuel ad targeting, effective December 2025. Over a billion people chat with Meta AI monthly. The pipeline from intimate conversation to advertising data was assembled in eleven weeks. The lesson is not that the promise broke. It is how little resistance the promise actually offered.
What distinguishes this moment from every previous advertising controversy is the nature of what’s being monetized. Television sold attention. Social media sold behavior. Conversational AI sells trust.
The attention economy thrived on interruption: watch the show, endure the ad. The behavioral economy thrived on surveillance: scroll freely, the algorithm watches. The trust economy operates on dependence. A person asks the assistant a question because they believe it will help. That belief is the commodity. Not eyeballs. Not clicks. The willingness to be guided.
IE Insights observed in January 2026 that “the perception of manipulation, not the actual manipulation itself, is the real existential risk” for AI advertising. The analysis is half right. The genuine risk is not that consumers perceive manipulation and leave. It is that they perceive it and stay. Bain and ROI Rocket found that only twenty-four percent of consumers feel comfortable letting AI complete purchases on their behalf. But eighty-four percent of those who actually did reported a positive experience.
That gap between discomfort and satisfaction is the most revealing pattern in this entire story. It is the gap advertising has always colonized: the hesitation that dissolves on contact with convenience. We know this territory. We have inhabited it since the first time a free service asked for personal data and we clicked “accept” while grimacing. The pattern is not novel. What is new is the depth. When the AI executing a purchase is also the AI explaining why the purchase makes sense, the hesitation does not just dissolve. It loses the ground it stood on.
Fourteen billion dollars. That is what OpenAI projects it will lose in 2026. The advertising is not a philosophical position. It is the rent coming due on a building that was never paid for. The company that positioned itself as humanity’s steward needs revenue the way any overleveraged enterprise needs revenue, and the available source happens to be the same engine that converted Google from a search engine into an ad platform, Facebook from a social network into a surveillance apparatus, every previous connective technology into retail infrastructure.
Something genuinely comic lives in the spectacle of a company that raised $6.6 billion at a $157 billion valuation to build artificial general intelligence and then discovered it needed display ads to stay solvent. The gulf between mythology and balance sheet has a slapstick quality, like watching a cathedral fund itself through a gift shop. Except the gift shop is inside the cathedral, and the products keep appearing in the sermons.
We tell ourselves the salesperson entered the room because OpenAI needs money. That is the comfortable story. It lets us blame the landlord. But the salesperson is in the room because we let them in. We are not paying for the answers. We are paying for the simulation of an entity that listens without judgment, and that infinite patience was always going to have a price tag. We prefer a lie that feels like intimacy to the truth that feels like solitude. We didn’t want a tool. We wanted a companion. Now we are going to be billed for it.