Build With Heart and Balance
A Field Guide to Humane Termination
Every Atlassian will receive an email within the next twenty minutes.
That sentence dropped in a pre-recorded video from CEO Mike Cannon-Brookes on March 11, 2026, moments before 1,600 people learned they no longer worked at a collaboration software company. Twenty minutes. Enough time to make coffee. Not enough time to find out who you are without the badge. The company kept the collaboration tools. The collaborators became redundant.
After the emails landed, Slack stayed open on mobile devices for six to twelve hours so affected employees could say goodbye. Confluence, where those employees had stored years of institutional knowledge, locked immediately. The company explained this as necessary to protect customer data. The humans got a window. The data got a wall.
This marked the third major AI-justified mass layoff in two weeks. Block had cut 4,000 employees on February 26. WiseTech Global in Sydney announced the elimination of 2,000 roles the same week. By early March, global tech layoffs had surpassed 45,000, with AI as the most frequently cited justification. Each announcement followed the same arc. Each stock price rose.
This isn’t about Atlassian. It’s about the birth of a new literary genre.
The AI layoff memo has become corporate theology. Read enough of them and the liturgical structure reveals itself: acknowledgement of pain, assertion of strength, invocation of AI as an inevitable force, reframing of adaptation as courageous choice, and benediction of severance. The congregation is not the employees. It’s the market.
Every culture that practises ritual sacrifice develops a priesthood to explain why the sacrifice is necessary and a liturgy to make it beautiful. Ours just happens to file its liturgy with the SEC.
We are not reading corporate communications. We are reading scripture for a religion that worships the appearance of transformation.
Cannon-Brookes wrote that Atlassian’s approach “is not ‘AI replaces people.’” This sentence appeared in a document whose operational content described eliminating 900 software R&D roles because AI changes “the mix of skills we need and the number of roles required.”
The negation does the affirming.
The company value invoked in the announcement? “Build with heart and balance.” The press release didn’t clarify whether the heart came before or after the access revocation, or whether balance referred to the severance package or the post-layoff org chart.
Jack Dorsey’s letter to Block shareholders is the genre’s ur-text. “Intelligence tools have changed what it means to build and run a company,” he wrote, before explaining that a significantly smaller team, using the tools Block itself built, could do more and do it better. He framed the elimination of 40% of his workforce as moral courage, claiming he would rather get there on his own terms than respond reactively. Block’s internal AI tool is called Goose. The naming convention is either accidental irony or a precise summary of the new relationship: what lays the golden eggs is no longer the human workforce. The survivors get a productivity tool. The displaced get a story about inevitability.
Then Dorsey made a prediction that functioned as an instruction: “Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes.” Two weeks later, Atlassian did exactly that. WiseTech’s CEO announced that “the era of manually writing code was over.” The echoes are not merely structural. Cannon-Brookes’s language about needing “a different mix of skills” mirrors Dorsey’s framing of a “significantly smaller team” that “could do more,” repeating the same euphemism of subtraction dressed as optimization. The memos echo each other’s logic. The genre self-replicates.
These documents aren’t euphemisms in the traditional sense. A euphemism conceals. These memos operate differently. They construct a reality in which firing people constitutes care, in which reducing your workforce by half qualifies as honesty, in which the company value most relevant to mass termination is balance. They don’t hide what’s happening. They redescribe it until it becomes a conversion narrative. The company is being born again in AI.
Consider the timeline, which speaks for itself.
In October 2025, Cannon-Brookes appeared on the 20VC podcast. He discussed Atlassian’s approach to AI and talent. Technology creation, he said, was not output-bound but talent-bound. The company planned to hire more engineers, more graduates. He spoke about bringing in new cohorts who would approach software development with a fundamentally different understanding of AI-native tools.
Five months later, the company eliminated more than 900 R&D roles. The company that planned to hire more engineers fired the engineers it already had.
This isn’t hypocrisy. Hypocrisy requires a private truth that contradicts a public lie. This is something cleaner: the system has learned to metabolize contradiction. What kind of institutions does this build over time? Ones in which a promise functions not as a commitment but as a position statement, valid for the audience and incentive structure present at the moment of utterance and expired the instant either shifts. Most professionals have met this institution. Few have a name for it. Recruitment in such a culture becomes an exercise in mutual fiction: the company sincerely offers a future it may sincerely revoke, and the candidate accepts knowing the sincerity is real but the future is not. The word “promise” doesn’t die in these institutions. It simply stops referring to the future.
Between October and March, Atlassian’s stock had tumbled into what a Jefferies trader christened the SAASpocalypse, a word that deserves a moment of appreciation for capturing, in a single portmanteau, both the financial panic and the faintly ridiculous self-importance of the industry that coined it. The panic erased roughly two trillion dollars in software market capitalization. Atlassian’s share price had fallen more than 80% from its 2021 pandemic peak. The company had been unprofitable on a GAAP basis since 2017. The incentive architecture changed, and the narrative pivoted to match.
A CEO who sincerely promises more engineers in October and fires 900 R&D staff in March is not contradicting himself. He is responding to two different audiences with two different algorithms for determining what counts as the right thing to do. The podcast audience rewards vision. The market rewards cuts. Both responses are, within their respective systems, entirely rational.
The market’s response to these layoffs functions as a mechanical confession.
Block’s stock surged 24% in after-hours trading on the day it announced 4,000 job cuts. Atlassian ticked up. Investors greeted every subsequent AI-justified layoff as good news. The feedback loop operates with the indifference of a thermostat: markets reward AI-framed layoffs, so executives frame layoffs as AI-driven, so markets reward them further, so the narrative becomes self-fulfilling. At some point the question stops being “is AI actually displacing these workers?” and becomes “does it matter, if everyone behaves as though it is?”
This represents a phase transition, but not the one the memos describe. It is not a transition from human labour to AI labour. It is a transition in how financial markets categorize human employees. Workers are no longer assets to develop. They are liabilities to shed. And the market will pay you, quite literally, for shedding them. Block’s stock had dropped 75% over five years. Dorsey fired nearly half his company. The stock recovered a quarter of its losses in a single afternoon. The lesson is not subtle.
Yet here is where the loop reveals its seam. Somewhere in the approval chain for every one of these announcements, a human reviewed the language. Someone at Atlassian read “build with heart and balance” in the context of a termination memo and did not flag the dissonance. Someone at Block approved the name Goose for an AI tool that would justify eliminating 4,000 jobs. These are choice points buried inside an incentive architecture that rewards the smooth functioning of the narrative over the friction of truth. The system sustains itself not through active malice but through the accumulated momentum of people following the logic of the last quarter into the logic of the next.
Then there is the evidence, which refuses to cooperate with the narrative.
Anthropic published a labour market study on March 5, 2026, introducing a measure called “observed exposure.” Rather than asking what AI could theoretically automate, the researchers measured what AI actually did in practice. The gap proved vast. AI operated far below its theoretical capability. No systematic increase in unemployment had surfaced among highly AI-exposed workers since late 2022. Forty-five thousand people lost their jobs in the time it took the research community to publish a study confirming that AI hadn’t yet learned to do them.
What the study did find carried subtler and more troubling implications: suggestive evidence that hiring of younger workers had slowed in AI-exposed occupations. Not mass displacement. A quiet narrowing of entry points. Nobody pulled the ladder up. They just stopped replacing the rungs, one at a time, until the first foothold sat higher than anyone starting out could reach. And for the workers already displaced, the senior product managers and staff engineers whose roles were declared obsolete in March, the market that called them redundant has not stopped posting jobs. It has simply started posting different ones, with “AI-native” in the requirements and three fewer years of expected experience, as if the problem with the old workforce was not cost but memory.
The counter-evidence compounds. Wharton’s Ethan Mollick observed that given how new effective AI tools are, it strained credulity to imagine a firm-wide sudden 50% efficiency gain that justifies halving your workforce. Oxford Economics found in January 2026 that many layoffs CEOs attributed to AI actually corrected pandemic-era overhiring, the industry’s least flattering confession, delivered in the passive voice. Josh Bersin, studying more than seventy companies, found that most who deployed AI as a productivity tool saw minimal job reduction; the firms that did see meaningful transformation had re-engineered entire workflows from the ground up, redesigning how teams collaborate, how decisions get routed, how output gets measured. Handing individuals a chatbot and calling it an AI strategy produced almost nothing. The distinction matters because the memos overwhelmingly describe handing out chatbots while claiming the results of ground-up transformation.
None of this means AI capability isn’t genuinely advancing. It is, in ways that will reshape work profoundly over the next decade. But the memos aren’t describing that gradual transformation. They are performing urgency that the evidence doesn’t yet support, because the performance itself generates the stock price movement that justifies the decision retroactively.
Companies are not firing people because AI has replaced them. They are firing people preemptively, citing AI as the reason because markets reward the citation.
Return to the twenty-minute email.
Resource ID: CASE-4471. Classification: Product Management, Senior. Status: Closed.
Call her Case. She built a workflow engine that three million teams use daily. Three months ago, she noticed the AI roadmap didn’t include roles for the people building the product. Her manager said it was too early to worry.
Case gets the email on a Tuesday. Her calendar empties. Slack stays open for goodbyes. Her access to the documentation she created has already vanished. The data got a wall; the human got a window.
We are not watching AI displace workers. We are watching an industry decide to believe AI displaces workers, then making it true by fiat. The narrative is the mechanism. The memo is the technology. The market is the congregation. The twenty-minute email is the sacrament.
Somewhere in the system, a collaboration tool tracks the project status of Case’s workflow engine. The status updated automatically. It reads: on track. The system works perfectly.