Research Notes: The Last Shared Story
Note from AZ:
These Research Notes are usually behind the paywall… linked at the end of every essay for paid subscribers who want to see the pattern recognition with the scaffolding still visible.
Today they're unlocked, because the best way to show you what paid access looks like is to stop describing it and just open the door.
The notes below relate to the essay that landed in your inbox about thirty minutes ago. If you like what you find here, there's a 25% discount on paid subscriptions running for this weekend: Early Spring Special.
Eighty-one dollars. That’s what it costs to fine-tune a language model on a single author’s copyrighted works so convincingly that expert readers prefer the AI’s output to the human author’s. Found that in a Columbia-affiliated preprint on arXiv from October 2025 and it stopped me cold. Not because the technology is surprising (fine-tuning has been getting cheaper for years) but because the price collapses an assumption most people haven’t examined: that the economics of original creative work will hold. If a model trained on your style for less than a dinner out outperforms you in a blind test, the question isn’t whether personalised content is coming. It’s what happens to the stories we used to share when everyone’s feed is generating bespoke versions instead.
That question sent me across four industries in about a week. Here’s how it unfolded.
The Research Trail
The starting thread, oddly enough, was Netflix thumbnails. The original source is Netflix’s own blog, “The Power of a Picture,” from May 2016. A decade old, but the numbers still get cited everywhere. Netflix found that artwork constituted over 82% of a member’s browsing focus, and the platform had roughly 90 seconds to capture attention before a viewer moved on. Those figures come from studies conducted in 2014. The personalisation infrastructure has only deepened since, but Netflix hasn’t published updated figures with the same specificity. We’re all still citing ten-year-old data because the company stopped showing its hand.
From thumbnails, the logical next step was: who’s personalising the actual video? That led straight to Meta’s Movie Gen announcement from October 2024. The model is massive (30 billion parameters), and what it does is take someone’s uploaded photo and generate video where that person moves, talks, and acts, with their face intact throughout. Meta says it outperforms Runway, Luma, and Sora in their own benchmarks. Not yet a consumer product, but the capability demonstration is the point. The gap between “we can do this” and “we shipped this” is measured in business quarters, not technological breakthroughs.
Then I found something stranger. A defensive patent publication on the USPTO Technical Disclosure Commons describing “Variable Character Movies,” complete with a “Full-Generation Mode” where a viewer’s likeness gets integrated into professionally produced films via variable character slots. It’s not a product. It’s an architecture. But the specificity of the filing tells you someone has thought this through to the revenue model.
That was film. Games had already arrived further along the same path, and the trail picked up at CES 2025. NVIDIA’s announcement of autonomous game characters built on ACE (Avatar Cloud Engine) shifted NPC design from scripted dialogue trees to autonomous characters with persistent memory and social reasoning. Already deployed in PUBG (the PUBG Ally co-playable character) and NARAKA: BLADEPOINT’s mobile release. The scale of investment behind this tells the story better than the tech specs. NPC behaviour modelling alone accounts for a quarter of AI gaming market revenue, and that market is projected to grow roughly fifteenfold by 2033, according to Grand View Research. More than half of game development companies have implemented generative AI, per the GDC 2025 State of the Industry survey. Capital at that scale doesn’t chase novelty. It chases demand.
Software came via Nielsen Norman Group’s definition of “Generative UI,” published March 2024, updated October 2025: a user interface dynamically generated in real time by AI, customised to each user’s context. Google Research followed in November 2025 with their own implementation using Gemini 3. One 2024 study on adaptive UIs reported systems achieving roughly 0.89 personalisation accuracy, reconfiguring interfaces in about 1.2 seconds. Not a top-tier computer science venue, but peer-reviewed and the methodology is sound.
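The NN/g definition is abstract, so here’s a minimal sketch of the mechanic it describes: assemble the interface from the user’s context instead of shipping one fixed layout. Every field name, threshold, and value below is invented for illustration; none of it comes from the cited implementations.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """Signals an adaptive UI might condition on (all hypothetical)."""
    prefers_large_text: bool
    motor_impairment: bool
    expert_user: bool

def generate_layout(ctx: UserContext) -> dict:
    """Return a per-user layout config: the core move behind 'Generative UI'
    is that this dict is computed per person, per moment, not designed once."""
    return {
        "font_size": 18 if ctx.prefers_large_text else 14,
        "touch_target_px": 64 if ctx.motor_impairment else 44,
        "toolbar": "full" if ctx.expert_user else "simplified",
    }

# Two users, two interfaces, same application.
print(generate_layout(UserContext(True, False, False)))
print(generate_layout(UserContext(False, True, True)))
```

A real system would replace the if/else rules with a model, which is exactly what makes the 1.2-second reconfiguration figure notable: the layout decision happens inside the interaction loop, not at design time.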
Books were the last domain, and in some ways the most unsettling. Found a 2020 paper in Entertainment Computing (Elsevier) on adaptive storytelling systems that recognise narrative preferences with 91.9% accuracy based on Big Five personality modelling. Controlled study with a folktale, not commercial deployment, but the accuracy is striking. Platforms like NovelistAI are already generating fiction with adapted plot development and character arcs. And then there was the preprint I mentioned up top. The catch most coverage misses is that the reader-preference finding only applies to fine-tuned models. Standard in-context prompting still underperforms. But fine-tuning is getting cheaper by the month.
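To make the adaptive-storytelling idea concrete, here’s a toy sketch of preference-driven branch selection. The trait weights and branch names are invented for illustration; the 2020 paper’s actual model maps Big Five profiles to narrative preferences with far more machinery than a weighted sum.

```python
def pick_branch(profile: dict, branches: dict) -> str:
    """profile: Big Five trait -> score in [0, 1].
    branches: branch name -> trait weights. Returns the branch
    whose weights align best with the reader's profile."""
    def affinity(weights):
        return sum(profile.get(trait, 0.0) * w for trait, w in weights.items())
    return max(branches, key=lambda name: affinity(branches[name]))

# Hypothetical story branches and a hypothetical reader profile.
branches = {
    "slow_mystery": {"openness": 0.9, "conscientiousness": 0.4},
    "fast_action": {"extraversion": 0.8, "neuroticism": -0.3},
}
reader = {"openness": 0.8, "extraversion": 0.3, "conscientiousness": 0.6}

print(pick_branch(reader, branches))  # this reader gets the slow mystery
```

The unsettling part isn’t the scoring function, which is trivial. It’s that two readers of “the same book” can exit with different plots and never know it.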
Source Evaluation
The strongest sources cluster around primary corporate disclosures and peer-reviewed work. Netflix’s blog, Meta’s research page, and NVIDIA’s official announcements are Tier 1: the companies describing their own capabilities, with incentives to showcase but reputational stakes that limit fabrication. The GDC survey is rigorous (3,000+ respondents, ±2% margin of error) and Grand View Research’s AI gaming market report is the most commonly cited in the industry.
Where I had to be more careful: the “Gossip Protocol” concept for NPC social reasoning (where an NPC who distrusts you tells another NPC, and that NPC preemptively distrusts you without direct interaction) came from a tech blog, techplustrends.com. The underlying concept of gossip protocols in multi-agent AI is well-documented in academic literature (multiple 2025 arXiv papers cover this), but the specific NPC social-graph framing is editorial. I leaned on the broader academic foundation and presented the NPC application as a logical extension rather than established practice.
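The logical extension is easy to sketch. Below is a minimal, invented model of gossip-style trust propagation across an NPC social graph: each NPC shares a dampened version of its opinion of the player with its neighbours, so an NPC the player has never met can already be wary. The graph, scores, and damping factor are all hypothetical, not drawn from any shipped game.

```python
def propagate_gossip(trust: dict, social_graph: dict,
                     rounds: int = 2, damping: float = 0.5) -> dict:
    """trust: NPC name -> player-trust score in [-1, 1].
    social_graph: NPC name -> list of NPCs it gossips to.
    Each round, every NPC nudges its neighbours toward a dampened
    copy of its own opinion; updates apply between rounds."""
    for _ in range(rounds):
        updates = {}
        for npc, neighbours in social_graph.items():
            for other in neighbours:
                shared = trust[npc] * damping
                current = updates.get(other, trust[other])
                updates[other] = (current + shared) / 2
        trust.update(updates)
    return trust

trust = {"guard": -0.8, "merchant": 0.0, "innkeeper": 0.0}
graph = {"guard": ["merchant"], "merchant": ["innkeeper"], "innkeeper": []}

# The innkeeper ends up slightly distrustful despite never meeting the player.
print(propagate_gossip(trust, graph))
```

That last line is the whole point: distrust reaches the innkeeper second-hand, which is exactly the behaviour the blog described and the multi-agent literature formalises.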
The Variable Character Movies filing is a defensive patent publication. It documents a concept, not a shipping product. Worth including because the architecture is plausible and builds on demonstrated capabilities, but it needed framing as “emerging” rather than “available.”
Extrapolation Mechanics
The core extrapolation follows a single thread: personalisation increases engagement, engagement increases retention, retention increases revenue. That logic applies identically across film, games, software, and text without any coordination between the four industries. The convergence is emergent, not designed.
Start with what’s already documented. Netflix personalises thumbnails per viewer. Meta has demonstrated identity-preserving video generation. A patent filing describes variable character integration into films. Now notice the pattern: each step extends the boundary of what gets personalised, from packaging (thumbnails), to casting (face insertion), to narrative (branching content). Games are already further along this path. Two people playing the same NVIDIA ACE-powered title inhabit separate social worlds with distinct NPC relationships. The canonical playthrough has vanished for those titles. If film follows the same economic incentive that games already acted on, narrative personalisation in streaming content is a matter of deployment timeline, not technical feasibility. Each step is small. The endpoint only seems dramatic when you see all four domains arriving at the same destination independently.
Contradictions and Complications
Three things that don’t fit cleanly and deserve honesty.
First, the accessibility argument complicates any simple critique of personalisation. Adaptive UI that reorganises touch targets for motor impairments, or adjusts typography for dyslexia, is not trivial. The mechanism that fragments shared experience is the same mechanism that makes software usable for people it previously excluded. Beneficence and atomisation share an architecture. That tension doesn’t resolve neatly and I don’t think it should.
Second, the Netflix data is from 2014 consumer research published in 2016. We’re all building arguments on a decade-old foundation because Netflix stopped publishing at this granularity. The numbers are probably conservative relative to current personalisation depth, but “probably” is doing a lot of work in that sentence.
Third, the reader-preference study is a preprint, not yet peer-reviewed and published. The methodology looks solid (preregistered, with a Columbia Law copyright scholar as co-author), but preprints are preprints. Worth citing carefully, not as settled science.
Terminology and Craft Choices
One phrase that kept surfacing and eventually carried real weight: “shared cultural substrate.” Went back and forth on alternatives. “Common references” felt too shallow, like it only meant knowing the same movie quotes. “Collective memory” sounded too academic and too static, as though culture gets fixed in place rather than continually renewed. “Substrate” landed because it implies something foundational that other things grow on top of. A substrate isn’t the conversation itself; it’s what makes conversation possible. When two people can reference the same film, the same plot twist, the same villain’s logic, they’re drawing on substrate that lets them talk across difference. The research kept showing me that every industry is optimising the layer above (the individual experience) while nobody is measuring the layer below. That gap between what gets optimised and what gets eroded is where the whole inquiry lives.
Closing Reflection
Nobody built a metric for shared cultural substrate. Engagement gets measured. Retention gets measured. Whether a society still possesses enough common reference points to hold a conversation across difference? Nobody optimises for that, because nobody figured out how to put it on a dashboard. Every individual personalisation is rational. Every adaptation makes the encounter marginally better for the person receiving it. The loss is cumulative and invisible, and I’m not sure we’ll notice until someone tries to make a cultural reference and realises there’s no one left who gets it.