In a previous piece, I argued that AI is structurally breaking the innovation mechanism that has sustained economic growth for two centuries. The Aghion-Howitt balance between protection and competition is collapsing: execution is so cheap that any innovation gets replicated before the innovator can capture the returns. In the piece before that, I argued that the economic value of common white-collar work is concentrating in a small number of AI labs, and that companies in niche sectors need to build and own their own AI technology to keep value distributed.
Both of those pieces left something unfinished. They described what’s happening and why, but they didn’t address the practical question: if you’re running a company right now, what do you actually do? Which work should be handled by which kind of AI? Where are you retaining value and where are you leaking it?
To answer that, you need to understand what white-collar work actually is. Not what we tell ourselves it is. What it structurally is, and how it got that way.
The great reallocation
In the mid-1950s, white-collar workers outnumbered blue-collar workers in the United States for the first time. By 1960, blue-collar workers made up only 37 percent of the workforce. By 1970, 31 percent. Today, roughly 27 percent. The transition happened in a single generation, and it was so complete that most people alive today have never known anything else.
The usual story is that this was a shift from physical work to knowledge work. From making things to thinking about things. But that framing obscures what actually changed.
Resource allocation was never absent from the economy. Ford’s assembly line was a resource allocation insight. Carnegie’s vertical integration of steel production was a resource allocation insight. The managerial revolution that Alfred Chandler documented in The Visible Hand was already well underway by the early twentieth century. The pre-war economy was not a world of pure physical execution with no optimisation. It was a world where execution dominated the labour mix, and allocation, while present and important, was a relatively thin layer on top.
What changed in the post-war decades was the ratio. The economy progressively shifted its centre of gravity from execution toward allocation. As markets grew more complex, supply chains more global, and the potential configurations of resources more numerous, the returns to optimising where things went increasingly outstripped the returns to simply doing more of the doing. The white-collar workforce didn’t appear overnight. It grew, decade by decade, as organisations discovered that each additional layer of cognitive overhead, each additional manager or analyst or coordinator, could unlock more value through better allocation than the equivalent headcount could produce through direct execution.
This is what the white-collar economy actually is. Not “knowledge work” in some abstract sense. It is a decentralised optimisation stack. Its purpose is to take resource allocation insights and compute them down through layers of organisational structure until they reach the physical world as concrete instructions: build this, ship that, hire here, invest there.
Even things that look purely digital are ultimately serving this function. A SaaS product coordinates physical supply chains. An advertising platform optimises the allocation of marketing spend toward physical products. A financial services company allocates capital across physical enterprises. The economy bottoms out in atoms, and the white-collar stack is the machinery that decides which atoms go where.
The overhead nobody talks about
Here’s the part that matters for what comes next.
Within the white-collar stack, very few people are actually doing the thing the stack exists to do. Very few people are generating genuine resource allocation insights: spotting under-allocations, identifying new configurations of resources, having the thesis. The vast majority are doing something else entirely. They are computing those insights into implementable form.
Consider what happens when someone at a company has a genuine strategic insight. Let’s say the insight is “accelerated computing is the future.” That’s a resource allocation thesis. Implementing it requires thousands of people doing coordination, project management, procurement, compliance, documentation, hiring, budgeting, reporting. All of that is cognitive work. It requires education, expertise, judgement. But it is execution, not insight. It is the overhead of the optimisation machine.
The numbers bear this out. As of 2014, the United States had 18.5 million managers and supervisors and 5.3 million administrative support workers: a bureaucratic class of 23.8 million people, one manager or administrator for every 4.7 workers. Gary Hamel and Michele Zanini at London Business School estimated that excess bureaucracy costs the US economy more than three trillion dollars annually, roughly 17 percent of GDP. Despite decades of predictions that organisations would flatten and management layers would shrink, bureaucracy has been growing, not shrinking. The cognitive overhead of the optimisation stack has been expanding relentlessly.
This isn’t because organisations are stupid. It’s because computing a resource allocation insight down through an organisation to the point where it changes physical reality is genuinely hard. It requires coordination across functions, translation between levels of abstraction, compliance with regulatory requirements, and continuous refinement as the insight meets reality. Each layer of management exists because someone, at some point, decided that the complexity of implementation required another layer of cognitive processing between the insight and the atoms.
The white-collar economy didn’t replace execution with thinking. It replaced physical execution with a new kind of cognitive execution, and that cognitive execution brought its own enormous overhead.
The goldilocks zone
This structure, the insight layer sitting on top of a thick compute layer, made disproportionate value creation dramatically more accessible. A single resource allocation insight could have consequences worth orders of magnitude more than the time spent having it. That had always been true in principle: Rockefeller’s insight about horizontal integration, Carnegie’s about vertical integration, these were ideas that created enormous value. But the white-collar stack industrialised the process. It made it routine rather than exceptional.
The data tells this story clearly. In 1982, 60 percent of the Forbes 400 richest Americans came from wealthy families. By 2011, that had dropped to 32 percent. Self-made founders increasingly dominated the list. The industries producing extreme wealth shifted from physical resources (oil, real estate, manufacturing) to idea-based enterprises: technology, finance, retail innovation. The CEO-to-worker pay ratio, roughly 20:1 in the 1950s, rose to 42:1 by 1980, then 120:1 by 2000, and over 200:1 by 2013. The economy was increasingly rewarding the insight layer relative to the execution layer.
The founder building a billion-dollar company from nothing became commonplace in a way it never had been before. The white-collar stack provided the machinery to convert ideas into disproportionate physical outcomes at scale. And critically, the cognitive execution overhead that the stack required was simultaneously the thing that made the founder model work.
This connects directly to what I described in my previous piece about the Aghion-Howitt balance. The overhead was protective. When you had a genuine resource allocation insight, turning it into reality took years and cost millions. You needed to hire people, build systems, onboard customers, navigate regulations. That was expensive and slow. But the slowness was the moat. While you were computing your insight into reality, competitors couldn’t replicate you, because replication required the same years and the same millions. The cognitive execution overhead gave founders time to become load-bearing, to embed themselves in their customers’ operations so deeply that by the time competitors arrived, switching was more expensive than staying.
This was the Aghion-Howitt goldilocks zone. Protection was real but temporary: your execution overhead bought you time, but eventually competitors would build their own stack and catch up. Competition was genuine but not instantly lethal: someone could see your insight, but they couldn’t replicate your implementation overnight. The white-collar era didn’t just enable disproportionate value creation. It created the precise conditions, a thick protective layer of cognitive overhead, under which the innovation cycle could function.
The world wasn’t always one in which a person could move up through insight. And it is about to stop being that way again.
AI completes the transition
AI is not introducing a new economic paradigm. It is completing the one the white-collar revolution started.
The white-collar revolution said: allocation is more valuable than execution. But it couldn’t fully separate the two. You still needed enormous cognitive overhead to convert allocation insights into physical outcomes. The insight and the execution were bundled together inside organisations, inseparable in practice even if conceptually distinct.
AI finishes that separation. It strips out the cognitive execution layer. What’s left is the insight layer on one end, the physical layer on the other, and AI as the connective tissue between them.
And this changes everything about how value works. The cognitive overhead that used to be protective is gone. A resource allocation insight that once took years and thousands of people to implement can now be computed into reality in days or weeks. Which means the window between “I had this insight” and “someone replicated it” collapses. Protection evaporates. We move out of the goldilocks zone and toward the far right of the Aghion-Howitt curve: too much competition, too little protection, too little incentive to invest in deliberate innovation.
This is the founder problem I described in my previous piece, viewed from a different angle. The solo founder can have a genuine insight. They can even execute on it, instantly, because AI handles the compute. But they can’t become load-bearing because there’s no execution overhead to slow down replicators. The thick middle layer of cognitive execution that used to buy founders time has been removed. The insight is real. The exploitation window isn’t.
But established companies are in a different position. They are already load-bearing. They already have the baseline of economic relationships that persist independently of any single insight. They don’t need execution overhead to be protective. They need it to be efficient. And this is where the architectural choice becomes critical.
Two layers, two strategies
If white-collar work is a stack with an insight layer and a compute layer, and AI is replacing the compute layer, then the question for any company is: what kind of AI replaces each layer, and what does that choice mean for where the value goes?
The compute layer is the cognitive execution overhead. Process management, compliance, coordination, documentation, reporting, the machinery that converts insights into implementable instructions. This work is repetitive, domain-specific, and needs to be cheap, scalable, and reliable.
The insight layer is resource allocation discovery. Spotting under-allocations, identifying new configurations, having the thesis. This work is exploratory, unpredictable, and benefits from breadth.
Most companies, if they think about this at all, will use frontier AI models for everything. One API, one vendor, one integration. It’s the easiest path. It’s also the worst one.
When you use a frontier model to automate an existing process, you are replacing a human compute cog with a token-based one. The process was already known. You’re not discovering anything. You’re just doing the same thing cheaper. And because the frontier model is a commodity and every company has access to the same one, the cost savings get competed away almost immediately. You’ve saved money briefly, your competitors save the same money, prices drop, and the only lasting beneficiary is the lab that sold you the tokens. This is pure value leakage.
The compute layer should be handled by specialised systems. Smaller models trained on your specific domain, your data, your processes. They’re cheaper to run. They’re often more accurate on the specific task because they’re optimised for exactly that problem. And critically, you own them. The value stays inside the company rather than flowing to Mountain View or San Francisco. This is the argument I made in my first piece about domain-specific AI ownership, and it applies with full force here: the compute layer is where specialisation protects you.
The insight layer is different. Here, you actually want breadth. You want the model to be able to see patterns across domains, to surface connections that a narrow system would miss, to stumble onto resource allocation improvements that nobody thought to look for. This is the stochastic discovery mechanism from my second piece. The AI, working within your specific operational context, occasionally finds something new: a more efficient allocation, an untried configuration, a gap nobody noticed.
And here’s the reassuring part. Even when you’re using off-the-shelf frontier models for the insight layer, paying the labs for every token, the value of the discoveries still accrues to you. Because the discovery happened in your context, on your data, surfacing gaps specific to your operations. The lab provided the intelligence. You captured the insight. The discovery might be short-lived. Competitors might stumble onto the same thing within days or weeks. But you got the margin for that window, and the next discovery is already arriving. As long as the rate of discovery exceeds the rate of decay, you sustain profit.
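The discovery-versus-decay condition can be made concrete with a toy model. This is purely illustrative, with hypothetical numbers, not a model from the source: each discovery yields a margin that decays as competitors replicate it, new discoveries arrive at a steady rate, and aggregate profit settles at a level set by the ratio of the two rates.

```python
def steady_state_profit(discovery_rate, initial_margin, decay_rate):
    """Closed-form limit of aggregate margin: rate * margin / decay."""
    return discovery_rate * initial_margin / decay_rate

def simulate(discovery_rate, initial_margin, decay_rate, weeks=520):
    """Weekly aggregate margin from a stream of decaying discoveries."""
    live = []       # current margin of each past discovery
    history = []
    for _ in range(weeks):
        live.extend([initial_margin] * discovery_rate)  # new discoveries land
        history.append(sum(live))                       # margin captured this week
        # competitors replicate: every advantage decays toward zero
        live = [m * (1 - decay_rate) for m in live if m > 1e-6]
    return history

# Two discoveries a week, each worth 100 at first and losing 10% a week:
history = simulate(discovery_rate=2, initial_margin=100.0, decay_rate=0.10)
print(round(history[-1]), steady_state_profit(2, 100.0, 0.10))  # → 2000 2000.0
```

The point of the sketch is that profit doesn’t require any single discovery to last; it requires the arrival rate times the per-discovery value to outpace the decay.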
That’s the main conclusion, and it’s actionable: specialise the compute layer, use frontier models for the insight layer, and understand that even commodity AI at the insight layer creates value that accrues to your company rather than the lab.
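The two-layer recommendation can be sketched as a routing decision. Everything here is a hypothetical stand-in (the class names, the task kinds, the routing rule are mine, not the author's): repetitive compute-layer work goes to an owned, domain-tuned system; exploratory insight-layer work goes to a broad frontier model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    kind: str  # "compute" (known, repetitive process) or "insight" (exploratory)

class SpecialisedModel:
    """Stand-in for a small, domain-tuned model the company owns and runs."""
    def run(self, task):
        return f"[owned model] {task.description}"

class FrontierModel:
    """Stand-in for a broad, off-the-shelf frontier model API."""
    def run(self, task):
        return f"[frontier model] {task.description}"

def route(task, specialised=SpecialisedModel(), frontier=FrontierModel()):
    # Compute-layer work is commodity: keep it on owned, cheap systems
    # so the value stays inside the company. Insight-layer work benefits
    # from breadth: send it to the frontier model.
    model = specialised if task.kind == "compute" else frontier
    return model.run(task)

print(route(Task("generate the monthly compliance report", "compute")))
print(route(Task("find under-allocations in regional logistics spend", "insight")))
```

The design choice the sketch encodes is the piece’s conclusion in miniature: the routing rule, not either model, is what determines where the value accrues.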
The optional edge
There’s an additional move available, though it’s not required to make the basic framework work.
If you use unmodified frontier models for your insight layer, your discovery distribution is roughly identical to that of every other company making the same API calls. You all find the same things at roughly the same time. The valuable tail of rare, company-specific insights is thin because you’re all looking through the same lens.
But if you go further, adapting near-frontier open-source models with custom reinforcement learning loops, fine-tuning on your operational history, building reward signals from your domain-specific outcomes, you shift your discovery distribution. You find rarer things, more often. Things that a competitor using vanilla API calls wouldn’t surface because they require your specific context to even recognise as significant. As I argued in my previous piece, in a fat-tailed distribution, even a small shift produces a disproportionate increase in the probability of finding something in the valuable tail.
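The fat-tail claim can be illustrated numerically. Assuming, purely for illustration, that discovery values follow a lognormal distribution (the distribution choice and all numbers are mine, not the source's), a modest shift in the distribution’s location multiplies the probability of landing beyond a far-out threshold by much more than the shift itself.

```python
import math

def lognormal_tail(threshold, mu=0.0, sigma=1.0):
    """P(X > threshold) for X ~ LogNormal(mu, sigma), via the normal survival function."""
    z = (math.log(threshold) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

rare = math.exp(4.0)  # a "valuable tail" threshold, four sigmas out

baseline = lognormal_tail(rare)          # vanilla frontier-model API calls
adapted = lognormal_tail(rare, mu=0.3)   # distribution shifted ~30% by adaptation

print(adapted / baseline)  # ≈ 3.4: a 30% shift, a >3x tail probability
```

The further out the threshold, the larger the multiplier: small improvements to the insight layer compound fastest exactly where the rare, high-value discoveries live.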
This isn’t the core argument. You can operate effectively with off-the-shelf frontier models at the insight layer and specialised systems at the compute layer. But if you want to be systematically luckier, if you want to find things before your competitors find them, adapting the insight layer is how you get there.
Where this leaves people
I’ve been careful in this piece to talk about which kind of AI handles each layer of the stack. I haven’t said much about where humans fit. That’s because the honest answer is uncomfortable.
AI does the compute layer. That much is already clear. But AI also does the insight layer. The entire stochastic discovery model from my previous piece is about AI stumbling onto resource allocation improvements, not humans having clever ideas while AI handles the paperwork. The insight layer might take slightly longer to automate than the compute layer. In some domains, maybe not. But the direction is the same for both. In the limit, AI does the entire stack. The compute cogs go. The insight work goes. Maybe on different timelines, but toward the same destination.
Which means the question was never “which layer do humans get to keep?” The question is: who owns each layer after humans are out of both?
The economy splits into two ownership pools. The lab owners capture value from providing the intelligence, the raw cognitive infrastructure that powers both layers. The enterprise owners capture value from the context in which that intelligence operates: the baseline, the load-bearing relationships, the stochastic discoveries that accrue to their specific operations. The architectural choice I described above, specialised systems for compute, frontier models for insight, determines how much value leaks from the enterprise pool to the lab pool. That matters. Getting it right keeps more value within the enterprise.
But here’s the thing that the architectural recommendation, on its own, doesn’t address. Even if every company gets the architecture perfectly right, owns its compute layer, runs an effective insight layer, sustains profit through continuous discovery, the proceeds still flow to whoever owns the enterprise. Under current structures, that’s a small group of shareholders. The 23.8 million managers and administrators, the cognitive compute cogs who make up the bulk of the white-collar economy, they’re gone. They don’t capture any of it. There’s no rung of the stack left to contribute to. No layer where human effort translates into differential economic outcome.
Ownership is all that’s left.
This is what makes the cooperative ownership model I discussed in my previous piece not just a nice idea but the central question. If contribution no longer exists as an economic concept, if there’s no way to earn your way into the value chain through effort or skill, then how ownership is distributed becomes the entire ballgame. Cooperative structures that distribute enterprise ownership broadly aren’t a soft social democratic preference. They’re the only structural response to a world where the stack has been fully automated and the only remaining question is whose name is on the deed.
And as I argued before, cooperative ownership doesn’t make a company less competitive. The AI is the same. The discovery rate is the same. The specialised systems are the same. Only the distribution of proceeds changes. That’s what makes it viable: it changes the ownership layer without touching the competitiveness layer.
The thread
This is the third piece in a series, and I want to use the ending to draw the thread that runs through all three.
In the first piece, I argued that the economic value of common white-collar work is concentrating in a small number of AI labs, that this concentration has serious implications for consumer demand and economic inequality, and that companies in niche sectors have a time-limited window to build and own their own AI technology before the frontier labs are economically pushed into those same domains.
In the second, I argued that AI is structurally breaking the Aghion-Howitt innovation mechanism that has sustained economic growth for two centuries. Execution is so cheap that any innovation gets replicated before the innovator can capture the returns. But a new form of innovation is emerging, stochastic discovery, that doesn’t depend on protection because it doesn’t depend on investment. Companies can sustain profit through a continuous stream of small, fast-decaying advantages, provided the rate of discovery exceeds the rate of decay. The solo founder can’t build a billion-dollar company because the exploitation window no longer lasts long enough to become load-bearing. But established companies, already embedded in their customers’ operations, can survive and even thrive.
In this piece, I’ve tried to show what white-collar work actually is: a decentralised optimisation stack built over seventy years, consisting of a thin insight layer and a thick cognitive execution overhead. AI strips out the overhead. In the near term, the architectural choice matters: specialised systems for the compute layer, frontier models for the insight layer, and an understanding that using commodity AI for commodity work is the maximally value-leaking configuration. But in the limit, AI handles both layers, and the only question that remains is who owns what.
The thread across all three pieces is about protection and what happens when it disappears. Domain specialisation protects niche companies from the frontier labs, for now. Cognitive execution overhead protected founders from instant replication, until AI removed it. Load-bearing relationships protect established companies from the decay of any single discovery. But none of these protect the tens of millions of people whose economic participation depended on being somewhere in the middle of the stack, doing the cognitive execution work that converted insights into reality.
In a world where every layer is automated, contribution ceases to be an economic concept. The only mechanism left for distributing value is ownership. Who owns the AI infrastructure. Who owns the enterprises that run on it. That’s the entire question, and cooperative structures that distribute ownership broadly without sacrificing competitiveness are the most structurally sound answer I’ve encountered.
I don’t know if the discovery rate can keep pace with the decay rate. I don’t know how fast both layers of the stack get automated. I don’t know whether governments will engage with the ownership question before the window closes. But I do know that the white-collar economy, the thing we’ve spent seventy years building, the thing that employs the majority of workers in every developed nation, is a machine whose middle layer is about to be removed. And the conversation about what replaces it, not the technology but the economic architecture, is not happening at anything like the level it needs to.
