The User Who Never Existed
How 'Normal' Interface Design Excluded Everyone
We built thirty years of digital infrastructure for a ghost. Now that AI makes accommodation profitable, we’re discovering our default was fiction.
It’s hard to click an invisible button. It’s harder when you can’t see the screen. Harder still when your hands don’t move the way designers assumed, when you’re Deaf and the interface speaks only in sound. For thirty years, we built digital infrastructure around a user who doesn’t exist: permanently 25, with perfect vision and motor control, undistracted, fully abled, equipped with the latest devices. We called this “normal” and told everyone else to adapt or get left behind.
Most of us participated in this: the ones reading this essay, the ones who build and use digital products without ever thinking about screen readers. We used interfaces that excluded blind people and called it standard design. We attended meetings without captions until someone asked. We never noticed when wheelchair users couldn’t navigate the same websites we could, because their exclusion was the unmarked default and we had work to do.
What changed in 2025 wasn’t sudden empathy. That would require a different species. AI made accommodation profitable, and suddenly the impossible became merely expensive.
The Ghost User We Designed For
We designed for someone who exists primarily in onboarding presentations and design documents. The platonic user: able-bodied, hearing, sighted, with excellent hand control, working on a recent laptop in good lighting with stable wifi and no children requiring attention. Essentially, designers built for themselves during their best week at the office, then called it “universal design” and shipped it to everyone.
The absurdity reveals itself in specifics. The “average user” had perfect vision but wore glasses. Glasses didn’t count as accommodation. The default used keyboard shortcuts and ergonomic chairs. These didn’t count as assistive technology. We decided that some deviations from bodily average required retrofit accommodation while others just were how normal people worked. Glasses: normal. Screen readers: special assistance. Keyboard shortcuts: efficiency. Voice control: disability tech. The line we drew had nothing to do with how many people needed the feature and everything to do with who we imagined when we said “user.”
The Royal Society’s 2025 report on disability tech identifies the problem: designers excluded disabled users from the development process, treated them as edge cases, and built products that didn’t work for millions of people. But this undersells the weirdness. We didn’t just exclude disabled users. We built everything for a fictional composite human, then acted surprised when real humans, all of whom deviate from average in one direction or another, struggled.
If your “normal user” doesn’t actually exist, everyone’s an edge case. We just chose which edges to accommodate and which to ignore.
When the Margins Reveal the Center’s Flaws
Four developments in mid-2025 show what happens when accommodation becomes technically feasible and economically attractive. Not coincidentally, all four were “built for disabled users.” All four improve experiences for everyone.
Screen readers and image descriptions: Screen Reader AI launched in July 2025 using GPT-4o to create interactive scene graphs of web pages. Blind and low-vision users can ask what color a button is or what happens when it’s clicked. The system describes dynamic content, tracks changing interfaces, and handles single-page apps that traditional screen readers struggle with.
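To make “scene graph” concrete: the core move is turning a page’s DOM into a tree of named, stateful objects that a model, or a user, can query. Here is a minimal TypeScript sketch of the idea; the node shape and traversal are illustrative assumptions, not Screen Reader AI’s actual architecture.

```typescript
// Illustrative sketch only: build a queryable "scene graph" from the DOM.
// The node shape here is an assumption, not Screen Reader AI's real schema.
interface SceneNode {
  role: string;                    // button, link, textbox, ...
  name: string;                    // accessible name: label or visible text
  state: Record<string, string>;   // e.g. color, disabled
  children: SceneNode[];
}

function buildSceneGraph(el: Element): SceneNode {
  const style = getComputedStyle(el);
  return {
    role: el.getAttribute('role') ?? el.tagName.toLowerCase(),
    name: el.getAttribute('aria-label') ?? (el.textContent ?? '').trim().slice(0, 80),
    state: {
      color: style.color,
      background: style.backgroundColor,
      disabled: String(el.hasAttribute('disabled')),
    },
    children: Array.from(el.children).map((child) => buildSceneGraph(child)),
  };
}

// "What color is the submit button?" becomes a simple graph query.
// Children are searched first so the deepest (most specific) match wins.
function findByName(node: SceneNode, query: string): SceneNode | undefined {
  for (const child of node.children) {
    const hit = findByName(child, query);
    if (hit) return hit;
  }
  return node.name.toLowerCase().includes(query.toLowerCase()) ? node : undefined;
}

const button = findByName(buildSceneGraph(document.body), 'submit');
console.log(button?.state.color);
```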
That we needed GPT-4o to retrofit thirty years of web design reveals something darker: not oversight, but indifference. We didn’t accidentally build millions of websites that screen readers couldn’t parse. We built them knowing that blind users would struggle. Or not caring. The technology to add alt-text existed the entire time. What we lacked was the priority. Now that large language models can automate the work we couldn’t be bothered to do manually, we call it innovation.
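The manual version was never hard, either. Here, as a sketch, is roughly the audit any site could have run since the ’90s. One nuance: an empty alt attribute is a deliberate signal for decorative images, so only a missing attribute counts as a failure.

```typescript
// Sketch: list every image a screen reader cannot describe.
// Nothing here requires 2025 technology, just the attribute check
// the web has supported since the <img> tag gained alt text.
function auditAltText(doc: Document): HTMLImageElement[] {
  // alt="" is legitimate for decorative images; only a missing
  // attribute leaves a screen reader with nothing to announce.
  return Array.from(doc.querySelectorAll('img')).filter(
    (img) => !img.hasAttribute('alt')
  );
}

const missing = auditAltText(document);
console.log(`${missing.length} undescribed images`, missing.map((img) => img.src));
```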
Voice navigation: For people who cannot use a keyboard and mouse, voice control means independence. WebNav launched in March 2025 with sophisticated voice-controlled web navigation, offering sub-second response times and a conversational interface.
Voice input was technically viable for years. What changed was treating it as a primary interface mode rather than a compromise for people who “needed help.” Once accommodation reached quality parity with standard input, everyone benefited. The same feature that was “assistive tech” when disabled people needed it became “natural interface evolution” when abled users wanted it.
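The plumbing has been sitting in browsers for years, too. Below is a minimal sketch of voice-driven navigation built on the Web Speech API. It illustrates the mechanism, not WebNav’s implementation, and the match-spoken-phrase-to-link-text grammar is an assumption for illustration.

```typescript
// Sketch: voice-driven link navigation via the browser's Web Speech API.
// Illustrates the mechanism only; this is not WebNav's implementation.
// SpeechRecognition is often absent from TypeScript's DOM typings, and
// Chrome still exposes it under a webkit prefix, hence the casts.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;   // keep listening between commands
recognition.lang = 'en-US';

recognition.onresult = (event: any) => {
  const latest = event.results[event.results.length - 1];
  const spoken = latest[0].transcript.trim().toLowerCase();
  // Match the spoken phrase against visible link text: "open settings", etc.
  for (const link of Array.from(document.querySelectorAll('a'))) {
    const label = (link.textContent ?? '').trim().toLowerCase();
    if (label && spoken.includes(label)) {
      link.click();
      return;
    }
  }
};

recognition.start();
```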
Real-time captions: Android’s “Live Caption” now adds accurate, speaker-identified captions to any audio or video automatically, including live calls. AI transcription in 2025 offers strong accuracy across languages.
We spent thirty years treating audio-only content as standard and captions as accommodation. This wasn’t a technical limitation. Closed captioning has existed since the 1970s. It was a choice about which bodies counted as “normal” and which required “special help.” That captions improve comprehension in noisy environments, help non-native speakers, and benefit anyone multitasking reveals the choice was always arbitrary.
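The web platform even shipped the hook long ago: the track element has carried WebVTT captions since HTML5. A sketch, with a placeholder caption file:

```typescript
// Sketch: attach WebVTT captions to any video element.
// "captions.vtt" is a placeholder path, not a real file.
const video = document.querySelector('video');
if (video) {
  const track = document.createElement('track');
  track.kind = 'captions';
  track.label = 'English';
  track.srclang = 'en';
  track.src = 'captions.vtt';   // WebVTT: timestamps plus caption text
  track.default = true;
  video.appendChild(track);
}
```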
Most people reading this consumed years of audio-only content without once wondering whether Deaf colleagues could access the same information. We didn’t actively exclude them. We just built a world where their exclusion was unremarkable, then went about our day.
Platform-level infrastructure: The European Accessibility Act took effect across the EU in June 2025, requiring captions, subtitles, image descriptions, and accessible controls. Companies responded with core platform features, not add-ons. Apple’s iOS 26 promises improved VoiceOver and Braille support. Testing tools now emphasize screen reader compatibility.
That a legal mandate was required after thirty years tells you everything about who gets to be “normal” by default. When accessibility was optional, companies opted out. When regulation made it mandatory, they discovered the budget and technical capability they’d claimed didn’t exist.
The technology didn’t change. The consequences of exclusion changed.
The Cost of Accommodation vs. The Cost of Design Arrogance
Follow the money and you find the pattern. Companies that called alt-text “too expensive” found budget once machine learning could automate their moral obligations. Platforms that claimed voice control was “technically complex” shipped it once hands-free interaction became a market differentiator. Services that treated captions as compliance checkboxes built them into core infrastructure once the European Accessibility Act made non-compliance costly.
Disability access didn’t become technically possible in 2025. It became economically attractive.
When disabled users needed features, companies framed them as charitable costs. When AI made them profitable, companies reframed them as innovation. When regulation made exclusion expensive, they reframed accessibility as competitive advantage.
The Royal Society report identifies structural barriers in language so clinical it could be describing laboratory specimens: incomplete data on disability, low investment in accessibility, exclusion of disabled users from design decisions. The clinical language obscures the mechanism. “Incomplete data” translates to “we didn’t ask because the answer might require work.” “Low investment” means “we spent money on features abled users wanted.” “Exclusion from design” means “we built products for people like us and told everyone else to deal with it.”
Power shaped product. Designers and product managers, predominantly abled, built for users they imagined, who looked remarkably like themselves. When disabled users complained that products didn’t work, companies nodded sympathetically and did nothing. When economics shifted and AI made accommodation cheaper, companies called it progress.
What We Chose Not to See
The accessibility cascade follows a familiar pattern. What begins as a life raft for the drowning becomes a yacht everyone wants to board. Features designed for disabled users improve experiences for everyone.
Better image descriptions help search accuracy and content discovery. Real-time captions support language learners, enable multitasking, and work in sound-sensitive environments. Voice navigation allows hands-free work. Screen reader compatibility produces clearer markup and better keyboard navigation.
If designing for disabled users improves products for everyone, why did we treat accessibility as charity rather than quality? Three reasons. None flattering.
One: We genuinely believed disabled users were rare edge cases. This despite disabled people comprising more than ten percent of the population. It suggests either spectacular ignorance or willful blindness.
Two: We knew disabled users existed but decided they didn’t matter enough to prioritize. This is more honest but reveals design as hierarchy enforcement. Building for bodies we valued, ignoring those we didn’t.
Three: We assumed disabled users would adapt to abled design. The same way everyone makes small accommodations for imperfect interfaces. This is perhaps most revealing. It suggests we saw disability as individual deficit requiring individual adjustment rather than design failure requiring design correction.
Most likely all three operated simultaneously, reinforcing each other. We didn’t see disabled users, didn’t prioritize them when we did see them, and assumed they’d adapt rather than demanding we build differently.
From disabled users themselves, the testimony is blunt. One blind screen reader user wrote in July 2025 that retail sites and banking have improved, but mobile apps and travel booking remain difficult. About half the apps on his phone don’t work at all. He’s not describing edge cases. He’s describing standard apps from major companies that simply don’t function for millions of users.
We called this “normal” for thirty years. That reveals what we chose not to see.
The Retrofit Economy
By August 2025, something shifted. Not dramatically. Most mobile apps still don’t work well with screen readers. But the direction changed. AI-powered accessibility tools are moving from specialized technology to mainstream features.
The Research Institute for Disabled Consumers found that 53 percent of people using digital assistive technology couldn’t live the way they do without it (RIDC, 2025). These aren’t conveniences. They’re necessities that companies took thirty years to build adequately.
This isn’t altruism. It’s the retrofit economy. The recognition that we built wrong from the start and now must rebuild. AI makes retrofit cheaper than it used to be, which is why it’s happening now rather than in 2010 when the same features were technically possible but economically “infeasible.”
The accessible web we’re slowly creating isn’t new design. It’s correction of design arrogance from the 1990s and 2000s, when we built digital infrastructure around imagined users and called the result universal.
The cascade metaphor is apt but incomplete. Calling it a “cascade” suggests natural flow rather than what it is: belated recognition that our defaults were fictional and our “normal” was exclusionary design we never bothered to question.
Designing for the edge from the start would have produced better products. Instead, we designed for ghosts, then spent decades retrofitting for humans.
What This Reveals About Normal
The 2025 accessibility improvements expose something uncomfortable about how we construct “normal.” We treat some bodily variations as requiring accommodation (screen readers for blindness) and others as just how people work (glasses for vision correction). The line isn’t about prevalence or technical complexity. It’s about which variations we encountered in ourselves and which we encountered in “others.”
Glasses are normal because enough powerful people wear them. Voice control is assistive tech because it was built for people who “can’t” use standard input. The categorization has nothing to do with the technology and everything to do with who needed it first.
Real-time captions help in noisy environments, revealing that audio-only content was always suboptimal. Voice navigation enables hands-free work, revealing that mouse-and-keyboard was always limiting. Better image descriptions improve search, revealing that we accepted inadequate metadata for decades. What looks like accessibility features helping abled users is really design that works for more kinds of bodies, exposing how limited “normal” design always was.
The disabled users who advocated for these features for thirty years weren’t asking for special treatment. They were identifying design failures and requesting correction. That we categorized their requests as “accessibility needs” rather than “design feedback” tells you who we listened to and who we didn’t.
What Still Doesn’t Work
Many mobile apps don’t work with screen readers. Most new platforms, including gaming, VR/AR, and smart home devices, repeat the same mistake: building for imagined default users rather than actual humans.
The 2025 improvements are mostly retrofit to existing systems, not fundamental redesign. We’re still building for ghost users first, accommodating real humans second. The Royal Society report calls for disabled people to be central to digital tech development from the start, not consulted after decisions are made. That this remains rare reveals who we trust to define “user needs.” It’s still mostly abled designers building for people like themselves.
We built thirty years of digital infrastructure for a ghost in the machine, excluded people with real bodies, and called the result “normal.” Now that AI makes accommodation cheaper, we’re reconsidering what “default” should mean. This isn’t technological progress making lives better. It’s the story of what we accepted and what it took to make us question it.
The accessibility improvements of 2025 benefit everyone because “normal” design was always broken. We just didn’t notice because it was broken in ways that didn’t affect us. Or we didn’t care.
The disabled users who needed these features aren’t celebrating the cascade. They’re exhausted from thirty years of explaining that design that doesn’t work for disabled people reveals flaws in design itself, not flaws in disabled bodies.
What took us so long was never technical capability. It was deciding that some bodies matter enough to design for and others can make do with whatever we built for people who looked like us.
The retrofit economy is expensive. Building for actual humans from the start would have been cheaper. We chose expensive because we never thought we’d have to pay for it.
Now we’re paying for it. That’s not progress. That’s the bill coming due.