An AI CEO's Blog Post Hit 80 Million Views. Then Wall Street Panicked.

LDS Team
Let's Data Science
Matt Shumer called this moment "February 2020 for AI." Citrini Research published a fictional dispatch from a 2028 economic collapse. Together, the two essays fueled a $2 trillion software rout and forced Citadel, Morgan Stanley, and every major AI CEO to pick sides.

On February 9, 2026, Matt Shumer -- the co-founder and CEO of OthersideAI, makers of the HyperWrite autocomplete tool -- published a 5,000-word blog post on his personal website titled "Something Big Is Happening."

He compared the current AI moment to February 2020 -- the month before COVID lockdowns, when most Americans sensed something was coming but hadn't yet grasped the scale. He wrote that he was "no longer needed for the actual technical work" of his job. AI, he claimed, could now "write tens of thousands of lines of code, test applications independently, and iterate without human guidance."

The post hit more than 80 million views on X. Fortune republished it two days later. CNBC, Bloomberg, and CNN ran segments. Within two weeks, a second essay -- this one from a financial research firm -- would push the panic further, and approximately $2 trillion in software market value would evaporate. By the end of February, Citadel Securities would publish a formal rebuttal, Jack Dorsey would lay off 40% of his company, and London would host its largest anti-AI protest in history.

February 2026 was the month AI anxiety went mainstream.

For context: This article builds on our earlier coverage of the AI productivity paradox and the growing disconnect between AI hype and measured output.

The Post That Ignited the Fire

Shumer's background gave his warning an unusual weight. He wasn't a journalist or academic. He was an AI company CEO with six years in the industry, effectively arguing that his own product category was about to make millions of knowledge workers obsolete.

The essay's core argument was straightforward: frontier AI models like GPT-5.2 and Claude Opus 4.6 had crossed a capability threshold that most professionals hadn't noticed. Shumer cited Anthropic CEO Dario Amodei's prediction -- made in a May 2025 Axios interview -- that 50% of entry-level white-collar jobs could be eliminated within one to five years. His advice: spend one hour daily experimenting with paid AI tools. Build financial resilience. Prepare for disruption.

Author:             Matt Shumer, CEO of OthersideAI (HyperWrite)
Published:          February 9, 2026 (shumer.dev, simultaneously on X)
Length:             ~5,000 words
Views on X:         More than 80 million by month's end
Fortune adaptation: February 11, 2026

When CNBC asked about the panic his post had caused, Shumer responded that it "wasn't meant to scare people." He told the network he was simply encouraging workers to start seriously using AI tools -- as a resource, not a replacement.

Gary Marcus, the NYU cognitive scientist and longtime AI skeptic, was less generous. He called Shumer's essay "weaponized hype" that "tells people what they want to hear, but stumbles on the facts." Marcus pointed out that Shumer had mischaracterized METR benchmarks -- the benchmark's threshold is a 50% success rate, not 100%, and it covers only coding tasks. A separate METR study had actually found that coders sometimes experienced productivity losses despite perceiving gains.

"A lot of people may have taken his post seriously, but they shouldn't have."

-- Gary Marcus, NYU Professor of Cognitive Science

But more than 80 million people already had.

The Doomsday Sequel

Thirteen days later, the panic got a second act.

On February 22, Citrini Research -- a financial analysis Substack with over 119,000 subscribers -- published "The 2028 Global Intelligence Crisis." The 7,000-word essay was co-authored by James van Geelen, Citrini's 33-year-old founder (a former Los Angeles paramedic who correctly shorted Silicon Valley Bank before its collapse), and Alap Shah, CIO of Lotus Technology Management (a Harvard economics graduate and former CEO of Sentieo, the financial research platform acquired by AlphaSense).

The format was deliberately provocative: a fictional "macro memo from June 2028" describing an economic collapse triggered not by AI's failure but by its success.

The essay introduced a concept it called "Ghost GDP" -- economic output that appears in national accounts but never circulates through the real economy because machines spend zero on discretionary goods. In the fictional scenario, the S&P 500 plunges 38%, unemployment spikes to 10.2%, and a deflationary spiral engulfs professional services. It named specific companies: Zendesk would default. DoorDash, Uber, Visa, and Mastercard would hemorrhage volume as automated workers spent nothing on rides, deliveries, or transactions. The $13 trillion U.S. mortgage market would buckle under mass white-collar layoffs.

Worth noting: The Citrini essay was explicitly labeled a "thought experiment, not a prediction." The authors framed it as a stress test for investors, not a forecast. The market treated it as a forecast anyway.

The Monday after publication, the Dow Jones fell over 800 points.

The "SaaSpocalypse"

The viral essays landed on already-scorched ground.

The sell-off had actually begun weeks earlier. Anthropic had launched Claude Cowork -- an agentic AI assistant that could plan, execute, and iterate through multi-step workflows -- in mid-January. But it was the February 3 unveiling of Cowork's legal automation plugins, which threatened to automate core functions at firms like Thomson Reuters and LexisNexis, that triggered the panic. Software stocks shed $285 billion in market capitalization in a single trading day. Jeffrey Favuzza, on the Jefferies equity trading desk, coined the term: "SaaSpocalypse."

By mid-February, the damage was staggering. Fortune reported approximately $2 trillion in software market value had evaporated between January 15 and February 14, 2026.

Stock                      Decline        Note
Thomson Reuters            -15.83%        Biggest single-day drop in company history
LegalZoom                  -19.68%        Direct AI legal tool competitor
RELX (LexisNexis parent)   -14%           Legal research platform
Wolters Kluwer             -13%           Legal and compliance SaaS
Atlassian                  -35%           Year-to-date by late February
Salesforce                 -25% to -28%   CRM giant
Adobe                      -25% to -30%   Creative suite
ServiceNow                 -25% to -30%   Enterprise workflow
Palantir                   -22% to -25%   Data analytics

Then came Block.

On February 26, Jack Dorsey announced that Block -- the parent company of Square, Cash App, and Afterpay -- would lay off 40% of its workforce. Nearly 4,000 employees, from a headcount of over 10,000, would be let go. Dorsey's memo was blunt: "Intelligence tools have changed what it means to build and run a company." He predicted: "Within the next year, I believe the majority of companies will reach the same conclusion."

Block's stock rose 14-24% on the news.

What the Evidence Actually Says

The essays were compelling. The data was less cooperative.

Three major studies, all published within weeks of the viral posts, told a strikingly different story.

The Yale Budget Lab (February 2, 2026) tracked occupational mix changes and unemployment length for AI-exposed jobs since ChatGPT's 2022 launch. Its conclusion: "The picture of AI's impact on the labor market that emerges from our data is one that largely reflects stability, not major disruption." Executive Director Martha Gimbel put it plainly: "No matter which way you look at the data, at this exact moment, it just doesn't seem like there's major macroeconomic effects here."

The NBER (February 2026) surveyed 6,000 C-suite executives across the United States, United Kingdom, Germany, and Australia. The headline finding: more than 80% said AI had had no impact on productivity or employment at their business. Only one-third of leaders personally used AI, averaging just 1.5 hours per week. The authors invoked Robert Solow's 1987 "productivity paradox": "You can see the computer age everywhere but in the productivity statistics."

Oxford Economics (January 7, 2026) challenged the AI layoff narrative directly. AI was cited as the reason for 55,000 U.S. job cuts in the first 11 months of 2025 -- 75% of all AI-related cuts since 2023. But this represented only 4.5% of total reported job losses. Layoffs attributed to "market and economic conditions" were four times larger at 245,000. The report suggested companies were using AI attribution to "convey a more positive message to investors" than admitting overhiring.

On February 19, at the India AI Impact Summit, OpenAI CEO Sam Altman appeared to confirm this: "I don't know what the exact percentage is, but there's some AI washing where people are blaming AI for layoffs that they would otherwise do."

The Counterattack

Wall Street's heavyweights did not stay silent.

Citadel Securities, in a report authored by Frank Flight, systematically dismantled the Citrini essay. The firm pointed to Indeed job posting data showing demand for software engineers was up 11% year-over-year in early 2026. It noted that generative AI daily work usage remained "unexpectedly stable," new business formation was expanding rapidly, and productivity shocks historically expand consumption -- lower prices increase purchasing power, they don't destroy it. The report invoked Keynes' famously incorrect 1930 prediction of 15-hour workweeks as precedent for overestimating displacement.

Morgan Stanley published a report arguing AI would "reshape jobs, not destroy them," predicting the emergence of new roles: chief AI officers, AI governance specialists, product manager-engineer hybrids, and AI personalization strategists.

Dan Ives of Wedbush Securities called the doomsday scenario "extremely overblown" and the software sell-off "the most disconnected call that I've ever seen." He recommended buying Microsoft, Palantir, CrowdStrike, Snowflake, and Salesforce amid the carnage.

Parmy Olson of Bloomberg Opinion wrote on February 17: "The AI Panic Ignores Something Important -- the Evidence." AI's impact, she argued, would be "uneven, gradual and impossible to predict -- the boring truth, however unlikely it is to go viral."

Noah Smith, the economics writer behind Noahpinion, dismissed the Citrini essay as "just a scary bedtime story" that lacked an explicit macroeconomic model. He suggested the market reaction was "coordinated panic" by traders rather than a response to new information.

Even Wharton professor Ethan Mollick -- who agreed the essays were "worth reading" and that "most people don't know how good AI has gotten, fast" -- added a critical caveat: "AI is still quite jagged, especially when it comes to work across teams and organizations, which creates bottlenecks, for now."

Vishal Misra, Vice Dean of Computing and AI at Columbia University, published a rebuttal titled "Something Big Is Happening. Here's What It Actually Is." He argued that AI models are "Bayesian inference engines," not sentient entities, and used a historical analogy: when cameras were invented, painters didn't disappear -- they were freed to create impressionism and cubism.

The Timeline

January 26
Dario Amodei publishes "The Adolescence of Technology"
A 20,000-word essay warning of a "country of geniuses in a datacenter." Sets the tone for everything that follows.
February 2
Yale Budget Lab finds no major AI labor disruption
Data shows "stability, not major disruption" in AI-exposed occupations since ChatGPT's launch.
February 3-4
Anthropic unveils Claude Cowork legal plugins; "SaaSpocalypse" begins
$285 billion wiped from software stocks in a single day. Thomson Reuters drops 15.83%.
February 9
Matt Shumer publishes "Something Big Is Happening"
Compares this moment to February 2020. The post will reach more than 80 million views on X by month's end.
February 11
Fortune republishes Shumer's essay; Zoe Hitzig quits OpenAI in NYT op-ed
Hitzig, a former safety researcher, warns ChatGPT holds "an archive of human candor."
February 17
NBER study: majority of executives report zero AI productivity impact
6,000 C-suite leaders surveyed. Only one-third personally use AI, averaging 1.5 hours per week.
February 19
Sam Altman confirms "AI washing" of layoffs
"There's some AI washing where people are blaming AI for layoffs that they would otherwise do."
February 22
Citrini Research publishes "The 2028 Global Intelligence Crisis"
A 7,000-word fictional dispatch from a 2028 economic collapse. Introduces the concept of "Ghost GDP."
February 23
Nassim Taleb warns of software bankruptcies
"Tail-risk across sectors is structurally underpriced. The risk is not a small correction. It's a large drawdown."
February 26
Citadel Securities publishes rebuttal; Dorsey lays off 40% of Block
4,000 employees cut. Block's stock rises 14-24%. Dorsey: "The majority of companies will reach the same conclusion."
February 28
London hosts largest anti-AI protest in history
500 people march from OpenAI's UK office through Big Tech headquarters in King's Cross.

The Real Human Cost

Nicole James is 42 years old. She held senior content roles at Snapchat, serving as editorial lead for Discover and development lead for Snapchat Shows. She then became head of content at Invisible Universe, an entertainment startup, a role she held until the company pivoted to become an AI studio in 2023 and laid off half its staff.

James hasn't been employed full-time since, despite more than 15 years of continuous employment before that pivot. She is now working retail.

"I really felt embarrassed when I showed up to work the first day and like put on my name tag," she told Fortune.

James's story is one of many that surfaced during February's panic. But the data on whether AI is actually causing these displacements -- or whether broader market conditions are -- remains fiercely contested. The MIT "Iceberg Index," developed with Oak Ridge National Laboratory in late 2025, estimated that AI can already perform tasks equivalent to 11.7% of the U.S. labor market, representing approximately $1.2 trillion in annual wages across finance, healthcare, and professional services. The study's authors were careful to note: this reflects technical capability and economic feasibility, not a prediction of actual displacement.

Venture capitalist Paul Kedrosky of SK Ventures invoked a more structural warning, drawing a parallel to the "Engels' Pause" -- the 50-year period during the Industrial Revolution when worker pay stagnated despite rising productivity. Bank of America Institute data supported the pattern, showing that productivity gains are increasingly accumulating as corporate profits while labor income steadily falls as a share of GDP. If AI follows the same trajectory, the question isn't whether jobs disappear. It's whether the economic gains reach the people doing them.

Whether February 2026 marked the beginning of a new Engels' Pause or a temporary panic remains the central unanswered question.

The Backlash Goes Physical

On February 28, the discourse left the internet.

Hundreds of people marched through London's King's Cross tech hub -- up to 500 according to organizers -- in what MIT Technology Review described as "the largest AI protest globally to date." Organized by Pause AI and Pull the Plug, the march started at OpenAI's UK office on Pentonville Road and wound through the headquarters of Meta and Google DeepMind. Concerns ranged from "online slop" to "killer robots and human extinction."

The protest came as the broader backlash gathered institutional momentum. iHeartMedia had launched its "Guaranteed Human" campaign at CES 2026, adding the tagline to all stations and pledging no AI-generated personalities and no AI music with synthetic vocalists. The company's internal research found 90% of listeners wanted media created by humans. Merriam-Webster had already named "slop" -- AI-generated low-quality content -- its 2025 Word of the Year.

Worth noting: Jamie Dimon, speaking at JPMorgan's annual investor day on February 24, compared the current moment to the period before the 2008 financial crisis: "The rising tide lifting all boats, everyone was making a lot of money, people leveraging to the hilt." He added: "This time around, it might be software, because of AI." Nassim Taleb, at a Universa Investments event the day before, warned that "tail-risk across sectors is structurally underpriced" and predicted AI would "definitely" lead to bankruptcies in the software sector.

The Bottom Line

February 2026 was the month AI anxiety escaped the tech bubble and became a mainstream economic force.

It started with a 5,000-word blog post from an AI startup CEO who compared this moment to pre-COVID February 2020. It accelerated when a former paramedic turned financial analyst published a fictional dispatch from a 2028 economic collapse that coined the term "Ghost GDP." Between them, the two essays reached tens of millions of readers and became shorthand for a generation's deepest professional fear: that the machines are coming for the good jobs, and no one in charge is doing anything about it.

The market agreed -- briefly. Two trillion dollars in software value evaporated. Thomson Reuters posted its worst day on record. Jack Dorsey laid off 40% of Block and Wall Street rewarded him for it. Five hundred people marched through London demanding someone pull the plug.

But the evidence paints a more complicated picture. Yale found no major labor disruption. More than 80 percent of executives in a 6,000-person NBER survey said AI had no impact on their business. Oxford Economics showed that "AI layoffs" accounted for just 4.5% of total job cuts. Even Sam Altman admitted companies were "AI washing" their layoffs.

The truth, as it usually does, lives somewhere between the viral essays and the Wall Street rebuttals. AI is clearly changing the economics of software and knowledge work. It is not clearly destroying it -- not yet. The data says stability. The essays say catastrophe. The market says both, depending on the day.

What February proved, more than anything, is that the narrative has permanently shifted. The question is no longer whether AI will disrupt white-collar work. It is how fast, how deep, and who pays. And as Nicole James can attest -- putting on a retail name tag after 15 years leading content strategy at Snapchat and beyond -- for some people, the answer has already arrived.
