The Complaint Department Is Human
On training your replacement while hoping you're the exception
The performance review reads: “Not maximizing AI augmentation.”
Sarah’s discovering something about herself she didn’t want to know: she’s been hired to be a human error message. When the machine fails, she appears. When it succeeds, she disappears. The metrics don’t lie... 90 seconds for the AI, 12 minutes for her. What the metrics don’t say is that she only exists in those 12 minutes. The rest of the time, she’s waiting to find out if she’s still necessary.
This is what “essential worker” means now: a human buffer between the algorithm and the lawsuit.
Her manager suggests she “leverage the tools more effectively.” Sarah watches Marcus, who’s figured it out. He passes borderline cases back to the AI, and the system circles them around until the customer gives up or lands on someone who actually tries to solve the problem, and that someone gets flagged for low productivity.
Marcus has discovered something interesting about himself too: he’s willing to optimize his metrics by making others look worse. He doesn’t feel bad. The system rewards it. What does it mean that he’s not wrong?
Here’s what’s strange: we’re measuring ourselves against machines and we’re okay with it.
Not grudgingly okay. Actually okay. Sarah doesn’t think “this is insane.” She thinks “how do I improve my numbers?” The human-versus-algorithm frame isn’t oppressive to her. It’s just work. She’s internalized that her value is whether she can do what the machine can’t, at least until the machine learns how.
When did that happen? When did we agree that the baseline of human worth is algorithmic speed?
You probably didn’t notice it happening to you. That’s the cognitive trick. It comes in metrics. “Response time.” “Resolution rate.” “Efficiency score.” These sound neutral. They sound like improvement. They’re actually teaching you to experience your humanity as latency.
A radiologist told me she catches herself getting anxious when reading an MRI and hasn’t found what the AI flagged yet. Not anxious about the patient. Anxious that she’s slow. Anxious that her eyes aren’t confirming what the machine saw quickly enough. Fifteen years of experience, and she’s internalized that the algorithm’s judgment is the default and her job is to verify it fast.
“I know that’s backwards,” she said. “But the metrics track how long I take after the AI flags something. Every minute I spend actually looking, really looking with my training, that’s a minute I’m slow.”
She’s not being gaslit by the system. She’s willingly participating in her own deskilling because the system made the metrics visible and she can’t unfeel them.
We’ve invented language that lets us watch people become economically obsolete in real-time while calling it “transformation.” The perfect word because it implies the person continues, just in a new form. But that’s not what’s happening when Sarah is told to “upskill” for a job that doesn’t exist yet while the AI trains on every case she solves. That’s not transformation. That’s extraction with a timeline.
“Workforce optimization” means some of the workforce is the thing being optimized away.
“Human-in-the-loop” means the human is a loop component, not a decision-maker. Interchangeable. Upgradeable.
“Augmentation” means you’re useful plus the machine. Soon it’ll mean the machine is useful plus occasional you. Then it’ll mean the machine, and you’re the exception handler, and everyone knows exception handlers get deprecated.
The tell is in the job descriptions. They’ve started saying things like “AI-assisted legal research” for paralegal positions. That means the AI does the research and you format the output. Or “machine learning operations support” for what used to be data analysis. That means you’re supporting the machine’s operations, not the other way around.
We’re hiring people to be assistants to the automation that’s eliminating their profession. And they’re taking the jobs. And they’re grateful for them. What does that reveal about what we think we deserve?
Content moderators are participating in something philosophically weird and nobody’s talking about it.
Their job is to review what the AI flagged. Every decision they make trains the next model. They’re teaching the system to replace them. They know this. Explicitly. It’s not hidden. The job description says “training data generation.”
They are being paid to make themselves obsolete, and they’re doing it, because rent is due and jobs are scarce and this one pays $18 an hour.
You could call that exploitation. But what do you call the fact that they’ve normalized it? That they don’t experience it as dystopian, just late capitalism? That they’ve incorporated their own obsolescence into their career planning: “This is good for two years, then the AI will be good enough, so I need to save money and learn something else.”
They’ve internalized the assumption that their economic existence has an expiration date and their responsibility is to manage the transition. The system didn’t force that on them. They arrived at it themselves by being rational actors in an irrational structure.
This is new. We’ve had technological displacement before. We haven’t had people calmly explaining how they’re scheduling their own redundancy while continuing to perform the work that causes it.
Here’s what’s coming next, something that sounds like satire but isn’t.
Actually, scratch that. It is satire. It’s just satire that people are building pitch decks for.
Human Oversight as a Service.
Picture the slide deck. Clean sans-serif. Lots of white space. A graphic showing a human head icon plugged into a flowchart between “AI Decision Engine” and “Regulatory Compliance Gate.”
The value proposition writes itself: “Transform regulatory burden into competitive advantage with our Human Compliance Layer™. Scale human review to match algorithmic speed. Pay per decision, not per employee. Eliminate HR complexity while meeting all meaningful human oversight requirements.”
There’s probably a testimonial. “Before HumanOversight.ai, our compliance team spent 47 hours reviewing 10,000 loan decisions. Now we process the same volume in 30 hours of review time, purchased on-demand, with full audit trail. We’ve reduced compliance cost by 83% while maintaining regulatory standards.”
What that means in practice: regulatory frameworks are starting to require “meaningful human review” of algorithmic decisions in hiring, lending, insurance, parole. The companies deploying these systems don’t want meaningful human review. They want legal compliance. So they’re structuring the human review to be technically present but functionally meaningless.
One company’s internal metrics showed human reviewers spending an average of 11 seconds per decision. They were reviewing loan denials that the algorithm had analyzed for 37 milliseconds. The human review wasn’t checking the algorithm’s work. It was creating legal cover.
Here’s the service model: you’ll be able to rent humans by the second for regulatory compliance.
Need 10,000 loan decisions reviewed by a human for fair lending compliance? That’s 30 hours of human review time at 11 seconds per decision. Minimum wage in most states. Call it $400 total. You’ve just made 10,000 algorithmic decisions legally defensible for less than the cost of one hour of lawyer time.
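If the napkin math looks too tidy, here it is as a sketch you can rerun with your own numbers. The decision volume and the 11 seconds per review come from the scenario above; the $13 hourly wage is my assumption, so substitute your state’s minimum wage and watch how little the conclusion moves.

```python
# Back-of-the-envelope unit economics for "Human Oversight as a Service".
# Decision volume and per-review time are taken from the scenario above;
# the hourly wage is an assumption. Substitute your own figures.

decisions = 10_000           # loan decisions needing "meaningful human review"
seconds_per_review = 11      # average observed human review time per decision
hourly_wage = 13.00          # assumed reviewer wage, dollars per hour

review_hours = decisions * seconds_per_review / 3600
labor_cost = review_hours * hourly_wage

print(f"Human review time: {review_hours:.1f} hours")   # ~30.6 hours
print(f"Total labor cost:  ${labor_cost:,.2f}")          # ~$397 at $13/hour
```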
The humans doing the review will know they’re not really reviewing. They’ll know they’re regulatory decorations. They’ll do it anyway because it’s $15 an hour and that’s $15 more than not working.
The slide deck has moved from satire to plausibility to—check the Y Combinator application portal—probably already in someone’s Series A pipeline. I’m not extrapolating. This is already happening in medical billing, content moderation, and insurance claims. It’s just not packaged as a service yet. Give it eighteen months.
We’re about to create an entire employment category of people whose job is to be technically human for legal purposes. Not to exercise judgment. Not to use expertise. To exist as proof that a human was present.
What does it mean culturally that we can look at “humans as a regulatory compliance service” and think “yeah, that tracks”? That we’ve normalized the idea enough that the absurdity doesn’t even register? We’ve arrived at a moment where you can pitch “rent-a-human for legal purposes” to venture capitalists and they’ll want to see your unit economics, not question your premise.
Most people reading this essay work somewhere in the optimization pipeline.
If you’re reading this, you’re likely someone who works with AI, thinks about AI, makes decisions about AI deployment. You’re probably not Sarah. You’re probably someone who could, theoretically, decide that Sarah’s metrics are insane.
The question isn’t whether you have that power. The question is what you do with it.
When you see efficiency metrics that compare human response time to algorithmic response time on fundamentally different tasks, what happens? Some people flag that as methodologically bankrupt. Others accept it because efficiency is the value being optimized for and the dashboard needs numbers.
When your company announces “AI-first customer service” and you know that means Sarah’s job becomes worse before it becomes eliminated, different people make different choices. Some object. Some workshop the internal comms about “transformation” and “empowering employees with AI tools.”
The weird thing is that many people feel bad about it while doing it anyway. There’s a story people tell themselves about inevitability. The market demands efficiency. The shareholders expect growth. The competition is automating. If we don’t do this, someone else will. These are all true statements that also function as a permission structure for participating in something while experiencing yourself as helpless.
Every time someone approves metrics that measure humans against machines. Every time someone accepts “upskilling” as the solution to “your job is being automated.” Every time someone participates in the language games that hide what’s happening. Someone is making a choice that feels like not-a-choice.
Not because they’re malicious. Because they’ve normalized it. Because it’s their job. Because resisting would be complicated and probably futile and definitely bad for their career.
The discovery people make about themselves, when they stop to examine it: when the choice is between personal comfort and someone else’s livelihood, comfort wins and gets called pragmatism.
What’s happening to human expertise when its primary function is waiting around to catch algorithmic errors?
The answer is weirder than “it atrophies.”
It metastasizes into a new kind of skill: algorithmic anticipation. Sarah is getting good at predicting what the AI will escalate. Not good at customer service. She’s good at modeling the AI’s failure modes. That’s her expertise now.
The radiologist isn’t developing her diagnostic eye anymore. She’s developing her AI-verification speed. That’s the skill the metrics reward. In five years, she’ll be worse at reading imaging than she is now, and better at confirming what the machine flagged. She’s deskilling in her official profession while developing expertise in machine collaboration.
Which would be fine if “machine collaboration” was a stable career. It’s not. It’s a transitional state between “human does the work” and “machine does the work.” You’re being paid to occupy the middle, and the middle is collapsing.
Here’s the cognitive distortion: people in this situation keep investing in the collaboration skills. Learning the AI’s interface, optimizing their workflow around the machine’s outputs, getting faster at verification. This feels like professional development. It’s actually hospice care for their profession.
They know this. Sort of. The way you know something you can’t afford to fully believe. A paralegal told me she’s “building expertise in AI-assisted legal research” while also quietly studying to take the LSAT because “this job has maybe five years.” She’s simultaneously investing in the thing that’s eliminating her profession and planning her escape from it.
That psychological split, doing the work that obsoletes you while knowing it obsoletes you, that’s new. We don’t have cultural scripts for that. We have scripts for being fired, for industries dying, for recessions. We don’t have scripts for “I am competently performing the work that makes me redundant and my performance reviews are good.”
The hourglass economy isn’t a metaphor. It’s a plan.
McKinsey has a deck (you’ve probably seen some version of it) showing the workforce bifurcating into high-skill technical roles and low-skill service roles with the middle automating away. They present this as an observation. It’s a roadmap. Companies are implementing it.
High-skill technical work: AI development, machine learning engineering, data science, systems architecture. These jobs pay $150K-$500K. They’re concentrated in five cities. They require advanced degrees and credentials that cost $100K-$300K to acquire. There are maybe 500,000 of these jobs in the US, optimistically a million if you count adjacent roles.
Low-skill service work: delivery, cleaning, food service, warehouse, home care. These jobs pay $25K-$40K. They’re everywhere. They require physical presence and dexterity that’s still cheaper than robotics. There are 40 million of these jobs. For now.
The middle: everything else. Manufacturing, administration, customer service, paralegal work, medical coding, financial analysis, routine programming, graphic design. 60 million jobs. Automating at different speeds, but the trajectory is clear.
The plan is that the 60 million people in the middle either upskill into the million high-end jobs or downshift into the 40 million low-end jobs. The math doesn’t work. It’s not supposed to. The math working would mean the transition is manageable. The transition is not meant to be manageable for the people being transitioned.
What’s strange is that we talk about this like it’s a skills problem. “The jobs are changing faster than people can retrain.” That’s not the problem. The problem is we’re eliminating 60 million jobs and creating 1 million different jobs and pretending these numbers are compatible if people just try harder.
The factory worker in Ohio can theoretically retrain as a machine learning engineer. Theoretically. In practice, they’re 47 years old with a high school diploma and a mortgage. The ML jobs want a master’s degree in computer science and five years of Python experience. The jobs are in San Francisco where rent is $3,400/month and the factory worker’s house in Ohio is worth $120,000.
“Labor mobility” is the economics term. The human term is “we’ve made your entire life non-transferable.”
Here’s what happens three years from now if people keep making the choices they’re making. It should feel uncomfortable in how plausible it is:
“AI-native workflow” becomes standard at a company. Someone decides that entry-level positions don’t make financial sense anymore because the AI does what entry-level workers used to do. The career ladder is missing its bottom rungs.
Two years after that, the company complains they can’t find qualified candidates. What they mean is they can’t find people with five years of experience in jobs that no longer exist because they automated the positions that used to create that experience.
Someone solves this by creating a new job category: “AI training associate.” Pay is $40,000/year. Job description is to review AI outputs and flag errors. The flagged errors train the next model. It’s entry-level content moderator work repackaged for college graduates who can’t find the jobs their degrees were supposed to qualify them for.
These workers understand they’re teaching the system to not need them. They take the jobs anyway. They’re grateful for them. They put “AI collaboration” on their resumes and hope it transfers to something more stable.
Someone has to keep choosing this. Every quarter, someone decides the headcount budget. Someone approves the automation roadmap. Someone signs off on the job descriptions that repackage temporary work as career development. Someone runs the meeting where “AI training associate” gets positioned as an entry point rather than a dead end.
It’s not inevitable. It’s chosen. Repeatedly. By people responding to incentives that make each individual choice rational while the aggregate result is insane.
Someone’s going to write a case study in the next few years about a company that achieved 40% labor cost reduction through “human-AI collaboration.”
The case study will not mention that “collaboration” meant the humans were measured against the AI’s speed on tasks the AI couldn’t do. It will not mention that the humans experienced this as a continuous performance review against an opponent that got better every week while they stayed the same.
It will not mention that the workers developed stress disorders from the cognitive dissonance of being told they were essential while watching their headcount shrink. It will not mention the three workers who had breakdowns. It will not mention that the “voluntary attrition” figure of 60% meant people were leaving because the work became psychologically untenable.
It will say “successful digital transformation” and “improved operational efficiency” and “maintained service quality with reduced workforce.” These are not lies. They’re just radically incomplete.
The CEO will do an interview about “responsible AI adoption” and “investing in our people.” The investment was severance packages. The responsibility was staying barely on the legal side of mass layoffs.
People will read the case study. Someone will present it in a meeting. They’ll focus on the efficiency gains. Most won’t think about the 400 people who used to work there and don’t anymore and are competing for jobs in an economy that’s running the same playbook everywhere.
Why won’t they think about those people? Because they’ve learned not to. Because they’ve accepted that some people are strategic priorities and others are cost centers. Because they’ve normalized the idea that human worth is economic output and economic output is defined by whoever owns the productivity tools.
The discovery people make about themselves when they examine it: they’re willing to participate in a system that treats humans as temporary inputs as long as they’re on the management side. This doesn’t get experienced as cruelty. It gets experienced as professional competence.
That’s the thing that should make people uncomfortable. Not that the system is brutal. That people become fluent in its brutality without noticing the transition.
The darkest part isn’t the automation. It’s the accountability diffusion.
Amazon built a hiring algorithm that discriminated against women. When this was discovered, they turned it off. No one was punished. No policy changed. The incident became a case study in “algorithmic bias” that everyone cites as evidence they’re taking the issue seriously.
Here’s what actually happened: The algorithm learned from historical hiring data. The historical hiring data reflected years of biased human decisions. The algorithm optimized for the pattern. The engineers built what the data showed. The managers deployed what the engineers built. The executives approved what the managers recommended.
Every person in that chain could point to someone else. The engineers: “We built what the data showed.” The managers: “We deployed what engineering recommended.” The executives: “We trusted our technical teams.” The system discriminated, but no human chose to discriminate, so who’s responsible?
This is the laundering function of algorithms. You can implement decisions that would be illegal or socially unacceptable if a human made them explicitly. The algorithm makes them implicitly. The illegality or unacceptability gets wrapped in technical language (“the model optimized for historical patterns”) and suddenly it’s a bias problem, not a discrimination problem. Problems get solved. Discrimination gets punished. Bias gets “addressed” with “fairness interventions” that maybe work and maybe don’t but definitely create the appearance of caring.
We’re about to see this everywhere. Lending decisions, insurance pricing, hiring, college admissions, parole recommendations, child protective services. Every domain where we used to require human judgment, we’re inserting algorithms. Not because algorithms are more fair (they’re not, they learn from unfair historical data) but because algorithms are legally safer.
When a loan officer denies your mortgage, you can sue for discrimination. When an algorithm denies your mortgage, you’re suing a math problem. The math problem was trained on data. The data reflected reality. Reality is discriminatory. The algorithm is just honest about reality in a way humans learned not to be.
This is how we’re going to encode existing inequalities into infrastructure and call it objectivity.
Here’s what really happens when we say “the algorithm decided”:
The algorithm executes a decision rule that humans wrote based on objectives humans defined using data humans collected that reflects choices humans made.
At every step, there are human choices. What to optimize for. What to measure. What counts as success. What trade-offs to accept. These choices encode values. Efficiency over fairness. Profit over stability. Measurability over complexity.
We treat these choices as discoveries. “The data shows...” No. The data shows what you chose to collect. “The model optimized...” For what you chose to optimize for. “The algorithm found...” The patterns you chose to make findable through feature selection and training data.
You can generate the appearance of inevitability by hiding the choices in technical process. This is useful if you want to make political decisions look technical. “We’re not choosing to deny loans to this neighborhood. The risk model shows higher default rates.” The risk model is reflecting historical redlining. You’re choosing to optimize for a pattern that embeds historical discrimination. But you’ve made it look like discovery.
The people deploying these systems know this. They’re not naive. They’re making trade-offs. Speed over accuracy. Scale over nuance. Consistency over context. These are reasonable trade-offs if you’re optimizing for profit. They’re devastating if you’re the person on the wrong side of them.
What’s weird is that we’ve stopped pretending otherwise. Ten years ago, there would be corporate messaging about “balancing efficiency and humanity.” Now it’s just efficiency. The humanity is implied to be your problem to maintain on your own time.
Sarah got her performance improvement plan yesterday.
It says she needs to reduce her average handling time by 40% or she’ll be “transitioned to a role more aligned with her skill set.” There is no role more aligned with her skill set. Her skill set is solving complex customer service problems. The AI is learning to solve complex customer service problems. The skill set is becoming obsolete in real-time.
She knows this. The weird thing is she’s going to try anyway. She’s going to figure out how to handle upset customers in 7 minutes instead of 12. She’s going to optimize herself against the machine. She’s going to participate in the metrics system that’s making her redundant.
Why? Because she needs the job. Because she believes, sort of, that if she’s good enough, fast enough, valuable enough, they’ll keep her. Because the alternative is admitting that her effort doesn’t matter, and that’s too psychologically destabilizing to accept while you still need to get up and do the work.
This is what we’ve done to people. We’ve put them in a system where their rational individual choice is to collaborate in their own obsolescence while hoping they’re the exception.
Marcus is going to keep gaming the system to make his metrics look good. The others are going to keep actually solving problems and getting flagged for low productivity. Sarah is going to try to get faster. The AI is going to keep learning from all of them.
What happens next depends on choices people keep making. Someone decides the headcount targets. Someone approves the automation roadmap. Someone sets the timeline for each phase of “workforce optimization.”
When they run those meetings, someone could say “these metrics are measuring humans against machines on incompatible tasks.” Someone could say “we’re creating psychological conditions that are destroying people.” Someone could say “the timeline assumes these workers are disposable and we should examine that assumption.”
Most people in those meetings don’t say those things. They discuss implementation timelines and change management strategies. They workshop the language for the announcement. They focus on making the transition smooth for the company, not the people being transitioned.
This isn’t because they’re monsters. It’s because they’ve learned to see workforce reduction as a technical problem with a project plan, not a moral problem with human consequences. They’ve been trained to optimize for the measurable (cost reduction) and externalize the unmeasurable (psychological destruction, community impact, the downstream effects of making hundreds of people economically precarious).
Anthropologically, this is fascinating. We’re watching a culture normalize the scheduled obsolescence of its own members. Not as a crisis. Not as a moral catastrophe. As a quarterly initiative with a project charter and success metrics. Future historians are going to study this period trying to understand what it meant that we could see this clearly and continue anyway. That we could watch people carefully documenting their own redundancy and think “they should upskill” instead of “we should stop.”
Here’s the thing about hourglasses: both ends are temporary.
The people building the systems that automate other people’s jobs think they’re safe because they’re technical, educated, well-paid. They’re useful.
Until they’re not.
GitHub Copilot is writing the boilerplate code they used to write. GPT-4 is doing the analysis they used to do. The management consulting firms are building AI tools that do what their analysts used to do. The law firms are automating legal research. The hospitals are automating diagnostic reads.
Those jobs are next. Maybe not this year. Maybe not in five years. But the same logic that’s eliminating Sarah’s job applies there too. If it can be measured, it can be optimized. If it can be optimized, it can be automated. They’re just earlier in the timeline.
And when it’s their turn, they’ll discover what they built. They’ll discover that “upskilling” means competing for fewer positions that require more credentials. They’ll discover that “transformation” means they’re expensive and the AI is cheap. They’ll discover that all the language used to make other people’s obsolescence sound like opportunity applies to them too.
They’ll look for sympathy and find metrics. They’ll look for options and find advice about learning to code, which would be funny if it weren’t so bleak, since many already know how to code and the AI is getting better at it.
They’ll understand, finally, what they participated in building. A system that treats human worth as economic output. That optimizes for efficiency over humanity. That makes obsolescence sound like transformation and treats people as temporary inputs.
They’ll understand it the way Sarah understands it now. Too late to stop it. Just in time to experience it.
The hourglass they built is draining. They were never at the top.
They were always in the middle... just like Sarah, just like Marcus, just like the radiologist, just like all of us measuring ourselves against machines we helped create.