On February 3rd 2026, Anthropic announced a legal plugin for Claude Cowork. Within hours, Thomson Reuters dropped 18%. RELX, the parent company of LexisNexis, fell 14%. Wolters Kluwer shed 13% in Amsterdam. Across the software and professional services sectors, roughly $285 billion in market capitalisation evaporated in a single day.
A month earlier, the broader SaaS sector had already entered freefall. The IGV software ETF was down over 20% from its highs. Atlassian lost 35%. Salesforce dropped 28%. The term "SaaSpocalypse" started appearing in analyst notes, not as hyperbole, but as a genuine attempt to name what was happening: a structural repricing of whether software companies can survive when AI agents can replicate their core functionality at marginal cost.
Goldman Sachs analysts compared the trajectory of software stocks to that of newspapers: an industry where technological disruption caused a prolonged multi-year decline, and share prices only stabilised once earnings had already collapsed. The parallel is uncomfortable but instructive.
These are not speculative scenarios. They are things that have already happened. And they are the opening chapters of a much larger economic story that, I think, very few people are thinking through to its conclusion.
The ubiquity spectrum
To understand where this goes, it helps to have a mental model. Picture a spectrum. On one end you have the most ubiquitous white-collar tasks in the economy: software engineering, legal work, accounting, data analysis, administrative support. On the other end you have highly uncommon work: a peptide chemist optimising a proprietary synthesis process, an ecological surveyor quantifying carbon emissions from vegetation decline on a specific land parcel.
The ubiquitous tasks are every bit as technical as the uncommon ones, sometimes more difficult. Legal work requires deep expertise. Software engineering is hugely complex. But these are the tasks the frontier AI labs are going after first, and the reason has nothing to do with difficulty. It’s that these professions are everywhere. Every company in the world needs legal support. Every company needs software. Every company needs accounts. That ubiquity makes the addressable market enormous, which justifies enormous investment in training data, benchmarks, scaffolding, tooling, and domain-specific fine-tuning.
So the frontier labs go after them. Not because they’re simple, but because the economic return on making AI good at them is massive.
Now draw a line on that spectrum. Everything to the left of the line represents the domains the labs have targeted and where AI is commercially competitive with human labour. Everything to the right, the labs haven’t yet found it worth their while to invest. The line doesn’t move because AI gets generically smarter. It moves because of economics. When the labs have captured most of the value in the common domains, when competitive pressure has driven margins down and there’s diminishing return on further investment in legal or accounting or software engineering, they start looking rightward. The next most ubiquitous domain becomes the next target. The line moves when saturation on the left makes the economics of the right more attractive.
That line hasn’t moved far yet. The labs are still deep in the process of capturing the common domains. But within those domains, the pace of capture is extraordinary.
Who captures the value?
Before AI, the value of white-collar work was distributed across the economy. Companies employed lawyers, engineers, analysts, administrators. Those employees earned salaries, paid rent, bought groceries, funded pensions. The economic value of their intelligence was captured locally, by individuals and by the businesses that employed them.
AI changes the distribution. When a model can do the work of a legal associate, or a data analyst, or a software engineer, at least some portion of the economic value that used to go to that person now goes to the model provider. Not all of it, not overnight, but the direction of travel is clear. A double-digit share of what was previously distributed as human income becomes revenue for a small number of AI companies.
And the depth of that capture is accelerating. METR, an AI evaluation nonprofit, has been tracking what they call the "task-completion time horizon" of frontier models: the length of task, measured by how long it would take a human expert, that an AI agent can complete with 50% reliability. That metric has been doubling roughly every seven months since 2019. More recent data suggests the pace may have accelerated, with doubling times closer to four months through 2024-2025. In early 2025, frontier models could complete tasks, at that 50% threshold, that took a human expert about an hour. By late 2025, that was several hours. If the trend continues, we’re looking at models that can handle day-long and then week-long autonomous work within the next few years. Within the domains the labs have targeted, AI is not just nibbling at the edges. It’s eating through the entire stack of work, from simple tasks to complex multi-hour deliverables, at an exponential rate.
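To make the compounding concrete, here is a back-of-the-envelope extrapolation of that doubling trend. The starting point (a one-hour horizon at the start of 2025) and the two doubling times come from the figures above; everything else, including the working-day and working-week definitions, is an illustrative assumption, not a forecast.

```python
import math

# Back-of-the-envelope extrapolation of the METR-style task-horizon trend.
# Assumptions (illustrative): a 1-hour horizon at the start of 2025, and the
# 7-month and 4-month doubling times cited above.

def horizon_hours(months_elapsed: float, doubling_months: float, start_hours: float = 1.0) -> float:
    """Task horizon after `months_elapsed`, doubling every `doubling_months`."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

def months_to_reach(target_hours: float, doubling_months: float, start_hours: float = 1.0) -> float:
    """Months from the starting point until the horizon first reaches `target_hours`."""
    return doubling_months * math.log2(target_hours / start_hours)

for label, d in [("7-month doubling", 7.0), ("4-month doubling", 4.0)]:
    day = months_to_reach(8.0, d)     # one working day ~ 8 expert-hours
    week = months_to_reach(40.0, d)   # one working week ~ 40 expert-hours
    print(f"{label}: day-long work in ~{day:.0f} months, week-long work in ~{week:.0f} months")
```

Under either doubling time, week-long autonomous work lands within two to three years of the starting point, which is what "within the next few years" amounts to in the text above.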
And then competitive pressure kicks in. If Anthropic can offer legal reasoning at a given price, Google will undercut them. Then OpenAI. Then whoever comes next. The competitive dynamics between labs don’t benefit the workers who’ve been displaced; they drive the price of AI-delivered intelligence down toward marginal cost. A task that used to justify a lawyer billing £1,000 an hour might be delivered by an AI for £10 an hour, not because it’s worth less, but because the marginal cost of compute is radically lower than the marginal cost of a human, and there’s enough competitive pressure to force the price down.
There’s a genuine question about what this does to the total size of these markets. On one hand, you might expect that if AI drives the unit cost of legal analysis from £1,000 an hour to £10 an hour, the total market for legal services collapses. The same work gets done for a fraction of the price, and the aggregate value shrinks dramatically.
On the other hand, there’s the Jevons paradox. William Stanley Jevons observed in 1865 that as steam engines became more efficient, coal consumption didn’t fall; it rose. Efficiency made coal cheaper per unit of work, which made it economic to use coal in vastly more applications. The same dynamic could play out with AI-delivered intelligence. When legal analysis costs £10 an hour, every small business that never had a lawyer suddenly has access to one. Every contract that went unreviewed gets reviewed. Every compliance question that went unasked gets asked. Usage could explode so dramatically that the total market stays large or even grows, despite the unit price collapsing.
I’m genuinely unsure which force wins. These are counteracting pressures, and the outcome probably varies by domain. In some markets, the Jevons effect might dominate and total spend holds up. In others, the commodity pricing might win and the market simply shrinks. But regardless of which side the total market lands on, the distribution of that value has shifted. The income that used to flow to millions of human workers now concentrates in a handful of companies and their shareholders. Whether the pie stays the same size or shrinks, the number of people eating it has changed dramatically.
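The tug-of-war between collapsing unit prices and exploding usage can be made precise with a toy constant-elasticity demand model. Quantity demanded is Q = k · P^(-ε), so total spend is P · Q = k · P^(1-ε): when the price elasticity ε exceeds 1, a price collapse grows the market (the Jevons side wins); below 1, the market shrinks. The elasticity values here are illustrative assumptions, not estimates for legal services.

```python
# Toy constant-elasticity model of the two counteracting forces above.
# Demand: Q = k * P**(-eps). Total market spend: P * Q = k * P**(1 - eps).
# The elasticity values are illustrative, not estimates for any real market.

def total_spend(price: float, eps: float, k: float = 1.0) -> float:
    quantity = k * price ** (-eps)
    return price * quantity

old_price, new_price = 1000.0, 10.0  # the £1,000/hr -> £10/hr example above

for eps in (0.5, 1.0, 1.5):
    ratio = total_spend(new_price, eps) / total_spend(old_price, eps)
    verdict = "market grows" if ratio > 1 else ("unchanged" if ratio == 1 else "market shrinks")
    print(f"elasticity {eps}: total spend changes by {ratio:.2f}x ({verdict})")
```

The point of the sketch is that a 100x price drop is compatible with either outcome; everything hinges on a demand parameter that nobody currently knows for AI-delivered intelligence, and that plausibly differs by domain.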
The concentration problem
This isn’t just an inequality argument, though it is that. It’s a structural economic argument.
White-collar workers are also consumers. When they earn salaries, they spend that money on goods and services. They pay rent, buy food, fund childcare, take holidays. Their spending creates demand for other businesses, which in turn employ more people, who spend more money, and so on. This is the basic demand-side engine of a consumer economy.
If a very large proportion of white-collar income is redirected from millions of distributed workers to a small number of technology companies, the demand side of the economy erodes. The AI companies themselves don’t consume at the same rate or in the same way. Their shareholders might, to some extent, but wealth concentration famously reduces the velocity of money. You end up with a structural demand gap: more output capacity than ever, but fewer consumers with the purchasing power to absorb it.
The obvious counter-argument is that this has been said before. Every major technological revolution has prompted fears of mass unemployment, and every time, labour has reallocated. The industrial revolution displaced handloom weavers but created factory jobs. The automobile displaced farriers but created mechanics, road builders, suburban developers. The common rebuttal is that this time is different, and I think there’s some truth to that, though I want to be careful about how far I push it. What’s being commoditised here isn’t a narrow task or a specific trade. It’s cognition itself, the thing that most of our economy runs on. This isn’t a small piece of the labour market being disrupted while the rest absorbs the shock. It’s the foundational layer. There may well be reallocation toward physical labour and the material economy, and I’ll come to that later, but that reallocation also has a ceiling. Humanoid robotics is probably ten years behind language-based AI, but it will eventually arrive, and when it does, the physical economy won’t be the safe harbour it is today. I don’t think the Luddite Fallacy fully applies when the technology being commoditised is general-purpose cognition, but I’ll acknowledge this is a genuinely open question.
What isn’t an open question is that almost nobody seems to be seriously thinking about it. I sat at dinner about a year ago with a group of investors. These were sharp people, experienced allocators, well-connected. And not a single one of them was thinking about what happens in the limit. They were thinking about which companies would be evergreen through AI disruption: people always need to drink, people always need to eat, certain sectors feel immune. But nobody at that table was asking: what does evergreen even mean if the consumers don’t have any money? What does a consumer economy look like when a significant chunk of consumer income has been absorbed by five or six technology companies?
A year later, the conversation has moved on somewhat. You can see the first-order effects being priced in: this specific company might lose revenue because AI can do what their software does. The SaaS selloff, the legal tech crash, the broader repricing of seat-based subscription models. But the second-order effects, the macroeconomic restructuring, the demand-side implications, these are nowhere near priced in. The software sector ETF being down 35% from its October 2025 highs is a correction within one industry. The question of what happens to aggregate consumer demand when white-collar income across every industry begins to compress is a different order of magnitude entirely. And governments haven’t even started the conversation.
Consider the pace. METR’s time horizon data shows frontier model capabilities doubling every four to seven months. That’s not a trend you can plan for with annual strategy reviews. Between the time a company decides to investigate AI’s impact on their sector and the time they’ve completed their assessment, the capabilities frontier may have doubled again. The Anthropic legal plugin didn’t exist in January 2026. By February, it had wiped billions from the market. That kind of pace is simply not priced into how most organisations, most investors, and certainly most governments operate.
The geopolitical dimension
And it’s not just a question of which companies capture the wealth. It’s a question of which nations. The frontier AI labs are, with few exceptions, American. Anthropic, OpenAI, Google DeepMind, Meta AI: all headquartered in the United States. The compute infrastructure they run on is largely American-owned. The chips are designed by American companies.
Behind the American frontier, there’s a Chinese frontier. DeepSeek, Qwen, and others. Perhaps six to twelve months behind on raw capability, but offering dramatically better price-performance ratios.
I can see this producing a bifurcated world. Western economies, already embedded in American tech ecosystems, will likely pay the premium for the frontier models. They’ll get the best capability, and they’ll pay American companies for it. Meanwhile, much of the developing world, where price sensitivity is far higher and geopolitical alignment less rigid, will gravitate toward the Chinese alternatives. The capability gap may be real, but when your alternative is no AI at all, a model that’s 85% as good at a quarter of the price is an easy choice.
So you end up with a global economy where white-collar value extraction flows into two channels: America and China. For countries like the UK, sitting in neither camp and producing neither the models nor the chips, the value leakage is significant. Billions of pounds of economic activity that used to circulate domestically, paying British lawyers and British software engineers, now flows to San Francisco and, indirectly, to Shenzhen. Even Anthropic’s own CEO has warned that AI could displace half of all entry-level white-collar jobs within one to five years. When the people building the technology are themselves saying this, and the destination for the resulting economic value is overwhelmingly overseas, the sovereignty dimension becomes impossible to ignore.
The long tail
Now here’s where the story gets interesting, and where I think there’s something actionable.
Go back to that ubiquity spectrum. The frontier labs have strong incentives to push their line further and further to the right: from the most common tasks toward less common ones. As the market for common tasks gets commoditised and margins compress, the relative return on capturing the next most common domain increases. If you’ve already commoditised basic legal work and standard software engineering and routine data analysis, where’s the next tranche of value? It’s in the niche verticals. The industry-specific workflows. The domain-particular deliverables.
But there’s a critical race condition here. These niche domains have natural defences that I’ve written about before: proprietary data, hard-won process knowledge, domain-specific scaffolding, regulatory requirements, and the accumulated institutional logic of how companies turn tasks into deliverables. Organisations have spent years, sometimes decades, developing processes that decompose complex work into specific sequences of subtasks. They know what good looks like for their particular outputs. They have proprietary data that the labs have never seen and would have no economic reason to acquire.
A pharmaceutical company’s peptide design workflow involves mass spectrometry analysis, design of experiments, and integration with specific lab instruments and LIMS systems. An ecological consultancy quantifying carbon loss from vegetation decline on degraded land needs site-specific remote sensing models, species-level classification, and knowledge of the particular carbon accounting methodologies accepted by regulators. A quarry operator’s aggregate quality control requires computer vision models trained on site-specific material types and lighting conditions. These are deeply niche deliverables, and the frontier labs are not going to build custom RL loops or curate domain-specific training datasets for each of them. The addressable market for any single vertical is too small to justify the investment right now.
But that changes over time. Once the common domains are saturated and the easy money there dries up, the labs will be pushed economically toward the long tail, not because they want to build for every niche, but because that is where the remaining return sits. They’ll build increasingly powerful general-purpose agent frameworks that can be pointed at niche domains with less and less customisation.
Even so, domain-specific models have a structural advantage that doesn’t go away: they’re smaller, cheaper to run, and often more accurate on their specific task because they’re trained on precisely the right data and can use an architecture optimised for the problem rather than a general-purpose one that has to be good at everything. A well-built domain-specific system can deliver better performance at a fraction of the compute cost of a frontier model being prompted into the same territory. That’s a lasting economic moat. It won’t hold off everything forever, but it’s more of a concrete bunker than a sandcastle.
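The compute-cost side of that moat can be roughed out with the standard rule of thumb that dense-model inference costs about 2 FLOPs per parameter per generated token. Every number below is hypothetical: a 3B-parameter specialist producing a short structured output, versus a frontier model assumed at around a trillion parameters being prompted at length into the same task.

```python
# Rough inference-cost comparison behind the "fraction of the compute cost"
# claim. All figures are hypothetical; ~2 FLOPs per parameter per generated
# token is a common rule of thumb for dense transformer inference.

def inference_flops(params: float, tokens: float) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a dense model."""
    return 2.0 * params * tokens

domain_model = inference_flops(3e9, 2_000)      # small specialist, short output
frontier_model = inference_flops(1e12, 10_000)  # general model, long prompt-and-reason loop

print(f"domain model:   {domain_model:.2e} FLOPs per task")
print(f"frontier model: {frontier_model:.2e} FLOPs per task")
print(f"cost ratio:     ~{frontier_model / domain_model:,.0f}x")
```

Even if the real gap is an order of magnitude smaller than this sketch suggests, a specialist that matches or beats the generalist on its one task while costing a small fraction as much to run is a durable economic position.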
So there’s a race, but it’s not a race the niche players are destined to lose. On one side: the frontier labs, gradually pushed by economics toward niche domains. On the other side: companies and practitioners within those domains, who have the opportunity to build and own their own AI technology while the advantage is real.
The case for distributed AI ownership
I want to be honest about the limits of what I’m about to suggest. The problem I’ve just described is macroeconomic. What I’m about to propose is not a solution to that problem. It’s one concrete thing I think companies can do, and it addresses a small part of the picture.
If the economic value of common white-collar intelligence is going to concentrate in a small number of labs regardless, and I think it largely will, then the question becomes: what can we do about the long tail?
I think the answer is that companies in niche sectors need to invest now in building their own AI capabilities. Not just adopting off-the-shelf tools, but developing proprietary models, proprietary training data, proprietary evaluation frameworks, things that capture the domain knowledge that currently lives in their people’s heads and embed it in systems they own.
Yes, this drives down the unit price of intelligence within their sector. A construction company that builds its own defect detection model makes that capability cheaper and more accessible. That’s somewhat uncomfortable if you’re the consultant who used to charge for that expertise. But cheaper doesn’t necessarily mean less total value. As we saw with the Jevons dynamic, driving the price down can mean the capability gets used far more widely: every site, every pour, every inspection, not just the ones that justified bringing in an expensive specialist. The total value captured within the sector can hold up or even grow, even as the unit price falls. And critically, it stays within the sector. The alternative is worse. The alternative is that a frontier lab eventually offers a general-purpose model that can do 80% of what you do, at a fraction of the cost, and all the economic value flows to Mountain View instead of staying within your industry.
When companies in niche sectors own their AI technology, the value stays distributed. An environmental monitoring company in Edinburgh owns its models. A legal AI startup in London owns its training data. A materials science company in Manchester owns its process optimisation algorithms. None of them individually commands the market power of a frontier lab, but collectively they represent a much healthier economic distribution than a world where everything flows to five companies in California.
This line of thinking is a significant part of why I started New Gradient. But the principle is bigger than any one company. The more businesses that own their own domain-specific AI, the more distributed the economy remains.
The physical anchor
There’s one more dimension worth considering. For all the disruption that AI is bringing to knowledge work, the physical world remains stubbornly hard to automate. Robotics is advancing, but it’s years behind language-based AI in terms of practical capability. Building a house, operating a crane, repairing a bridge, farming land: these activities require physical presence and manual dexterity that current robots can’t match at anything like the cost-effectiveness of human labour.
This matters because it creates a temporary floor under a significant portion of the economy. Even if white-collar work gets substantially automated, the physical economy persists. And the Jevons paradox applies here too: as AI-driven intelligence becomes cheaper, we’ll use more of it, which means more compute, more data centres, more chips, more energy, more raw materials, more physical infrastructure. Alphabet, Amazon, Meta, and Microsoft have announced combined AI infrastructure spending of roughly $680 billion for 2026, nearly double what they spent the previous year. The value chain ultimately bottoms out in atoms, not bits. ASML’s lithography machines, TSMC’s fabrication plants, the mines that produce rare earth elements: these become the bottlenecks.
There may be a partial rebalancing toward the material economy as a result. Not a return to some industrial golden age, but a recognition that when intelligence is commoditised, the scarce resources shift to physical infrastructure and the supply chains that support it. We’re already seeing hints of this in equity markets: in Israel, chip-sector companies like Tower Semiconductor and Nova have displaced the once-dominant software giants in the top market cap rankings. The market is beginning to favour atoms over bits. For investors, for policymakers, and for anyone thinking about where to build a career, this is worth paying attention to.
What governments might do (and why it’s hard)
It’s tempting to say governments should intervene to protect consumer demand. You could imagine regulations requiring that humans remain in the loop for certain tasks, or that AI must be deployed as tooling for existing workers rather than as a replacement. If you can’t fire the person, and you must deploy AI as an augmentation rather than a substitution, you preserve the income distribution even as productivity rises.
The problem is the prisoner’s dilemma. If the UK mandates human-in-the-loop requirements and the US doesn’t, UK-based services become more expensive. Companies relocate. Investment flows elsewhere. Any country that unilaterally restricts AI deployment to protect its workforce risks making its economy less competitive relative to countries that don’t. And there will always be countries that don’t.
This doesn’t mean government action is futile. It means it needs to be coordinated, thoughtful, and probably more creative than blunt employment protection. Universal basic income, retraining programmes, sovereign AI infrastructure, redistributive taxation of AI-derived profits: these are all options, and all of them have serious challenges. I don’t pretend to have the policy answer. But I do think the conversation needs to be happening at a much higher level of urgency than it currently is.
The honest conclusion
I want to be straightforward about what I think I know and what I don’t.
I’m fairly confident that AI will commoditise a very large proportion of common white-collar work within the next few years. The evidence is already visible in stock markets, in capability benchmarks, and in the pace of model improvement. I’m fairly confident that the economic value of that work will concentrate in a small number of technology companies, and that this concentration will have serious implications for consumer demand and economic inequality.
I’m moderately confident that building and owning domain-specific AI technology within niche sectors is one of the best available strategies for distributing economic value more broadly, and that there’s a time-limited window to do it before the frontier labs are economically pushed into those same domains.
I’m less confident about the right policy responses. The cross-border competitive dynamics make this genuinely hard, and I don’t have a clear answer.
What I am sure of is that the conversation isn’t happening at the level it needs to. Not in government. Not among investors. Not in most boardrooms. If intelligence becomes truly commoditised, the basis on which our economy has evolved, where human cognitive labour is the primary source of value for most people, changes fundamentally. This is not a sector rotation or a cyclical adjustment. It’s a structural transformation of the economy, and the pace at which it’s arriving gives us very little runway to prepare. The fact that we haven’t seen monumental shifts in the stock market, just sectoral tremors, means this isn’t priced in. The fact that no major government has published a serious policy framework for AI-driven labour displacement means it isn’t being planned for. And the fact that the technology is improving on an exponential curve means the gap between preparation and reality is widening, not narrowing.
The few things we can do concretely, like investing in domain-specific AI ownership, building proprietary technology, keeping value distributed across industries and geographies, these are worth doing now. Not because they solve the whole problem, but because they’re the parts of the problem we can actually act on while the larger conversation catches up.
The money is moving. The question is whether we’re going to let it all flow to the same place, or whether we’re going to fight to keep some of it where it belongs.
