
The Innovation Paradox

AI is breaking the mechanism that made sustained economic growth possible. But something new might be growing in its place.


Sam Altman has a betting pool. In a group chat, he and his fellow tech CEOs are wagering on the first year a single person, with no employees, builds a billion-dollar company. “Which would have been unimaginable without AI,” Altman said in an interview with Reddit co-founder Alexis Ohanian. “And now will happen.” Dario Amodei, CEO of Anthropic, has put the odds at 70 to 80 percent that it happens in 2026.

This prediction is meant to be inspiring. It’s the logical apex of the Silicon Valley founder myth: one visionary, armed with AI, builds an empire. No team. No bureaucracy. Just an idea and the tools to execute it.

I want to take this prediction seriously. Not to celebrate it, but to follow its logic to the end and see what it actually implies about the economy we’re heading into. Because I think when you trace the chain of reasoning carefully, the solo unicorn prediction doesn’t just describe a new kind of company. It reveals a fundamental break in the economic machinery that has sustained growth for the past two centuries.

The logic of the solo unicorn

What does a one-person billion-dollar company actually require? It requires that execution, the act of turning an idea into a product or service, has been commoditised to the point where one person can do what previously took hundreds. The idea might be human, but the implementation is almost entirely AI. Design, engineering, marketing, legal, customer support, operations: all handled by AI agents.

Now follow the logic one step further. If execution is that cheap and that accessible for one founder, it’s equally cheap and accessible for everyone else. The same AI tools that let one person build a billion-dollar company are available to every other person on the planet. They’re not proprietary. They’re not scarce. They’re a commodity.

So what happens the day after our solo founder launches their billion-dollar company? Someone else looks at it, understands the idea, and replicates it. Not in six months. Not after raising venture capital and hiring a team. In days. Maybe hours. Because the execution barrier, the thing that used to take years and millions of pounds to overcome, is gone.

And the first replicator undercuts the original on price, because the margins on a solo-founder company with near-zero costs are enormous, which means there’s vast room to undercut. Then a second replicator undercuts the first. And a third. Within weeks, the price of whatever the original company was selling has been driven down to commodity pricing. The billion-dollar valuation evaporates.

The very thing that makes the solo unicorn possible, commoditised execution, is the thing that makes it impossible to sustain. The prediction contains its own contradiction.

What this actually reveals

You could dismiss this as a “gotcha” aimed at Silicon Valley optimism. But I think it points to something much more significant than whether one person can or can’t build a unicorn.

In October 2025, the Nobel Prize in Economics was awarded to Philippe Aghion, Peter Howitt, and Joel Mokyr for their work on innovation-driven economic growth. The Aghion-Howitt framework, in particular, formalises something that economists had long intuited: sustained growth depends on a balance between two opposing forces.

On one side: protection. Innovators need to be able to capture the returns from their innovations for long enough to justify the investment. Patents, intellectual property, first-mover advantage, technical moats: these create temporary monopolies that reward risk-taking. Without protection, nobody invests in creating anything new, because someone will copy it before you can make your money back. If you spend five years and ten million pounds developing a new drug, and a competitor can replicate it the week you launch, you never recoup your investment. So you stop investing. Innovation dies.

On the other side: competition. Too much protection and you get entrenched monopolies that have no incentive to innovate. They sit on their rents, block new entrants, and stagnate. If a pharmaceutical company has permanent patent protection and no competitor can ever enter its market, why would it bother developing the next drug? It already captures the entire market. Competition forces the cycle forward: the threat of being displaced by someone better keeps everyone innovating.

Aghion and Howitt showed that the relationship between competition and innovation follows an inverted U. Too little competition: stagnation. Too much competition: no incentive to innovate. The sweet spot in the middle, where protection is real but temporary, and competition is genuine but not instantly lethal, is where sustained growth happens. You can capture monopoly rents for long enough to justify your investment, but not so long that you can afford to stop improving. The past two centuries of unprecedented economic growth have been lived in that sweet spot, maintained by a careful (if often accidental) balance of patent law, competition policy, and market dynamics.
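The shape of that trade-off can be sketched as a toy model (my own illustration, not the actual Aghion-Howitt formalism): treat the incentive to innovate as the product of the pressure to escape competition, which rises with rivalry, and the rents an innovator can capture afterwards, which fall with it.

```python
def innovation_incentive(competition):
    """Toy inverted U. 'escape' is the pressure to innovate to stay
    ahead of rivals (rises with competition); 'rents' is the profit an
    innovator captures afterwards (falls with competition). Their
    product peaks in the middle. An illustration of the qualitative
    shape only, not the Aghion-Howitt model itself."""
    escape = competition
    rents = 1.0 - competition
    return escape * rents

# Scan competition levels from none (0.0) to perfect (1.0):
levels = [i / 10 for i in range(11)]
sweet_spot = max(levels, key=innovation_incentive)  # peaks at 0.5
```

Both extremes give an incentive of zero: a secure monopolist feels no pressure, and a perfectly contested market leaves no rents to capture. The argument of this piece is that AI drags the whole economy toward the right-hand edge of that curve.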

The solo unicorn logic reveals what AI is doing to this balance. When execution is commoditised, the protection side of the equation collapses. Not in one industry. Across the board. Any innovation that depends on execution as its moat, which is most of them, loses its temporary monopoly almost immediately. The window between “I built this” and “someone copied this” shrinks from years to days. The rents that fund the next round of innovation evaporate before they can be captured.

We are being pushed toward the far right of Aghion and Howitt’s inverted U: a world of such intense competition that innovation becomes economically irrational. Why invest in building something new if the return on that investment approaches zero?

This is not a minor adjustment. This is the mechanism that has underpinned economic growth since the industrial revolution, and it is structurally breaking.

But this assumes innovation only works one way

Everything I’ve just described rests on an assumption: that innovation is deliberate. That someone decides to invest time and capital, takes a risk, builds something new, and then needs to capture returns to justify the investment. That’s the model Aghion and Howitt formalised, and it’s the model that’s breaking.

But what if AI changes the mechanism of innovation itself?

In a previous piece, I argued that AI would extract value from companies by replicating what they do, cheaper and faster, with the economic value flowing to the AI providers rather than the companies themselves. I still think that’s partially true. But it’s too static a picture. It assumes companies are fixed entities producing a fixed product, and that AI simply automates that product away. The reality is more interesting.

When a company deploys AI within its operations, something else happens alongside the automation. The AI, working within the company’s specific operational context (its particular data, its particular supply chains, its particular market position), occasionally stumbles onto something new. A more efficient allocation of resources. A configuration of inputs that nobody had thought to try. A process improvement that wasn’t on anyone’s roadmap.

Think about what happened when personal computing arrived. Most of the economy was massively under-allocating resources to computing. Almost nobody realised how valuable it would be to put a computer on every desk. Steve Jobs and Bill Gates saw an enormous gap between how resources were being allocated and how they could be allocated, and they stepped into that gap. The profit they captured was proportional to the size of the gap and the time it took everyone else to see it too.

In the AI era, these gaps will still exist across the economy. Resources will be under-allocated in ways nobody has recognised yet. But instead of requiring a visionary founder to spot them, AI systems deployed within companies will stumble onto them as a byproduct of their daily operations. A particular combination of data, a specific market condition, a conjunction of variables that nobody thought to examine together, and the AI surfaces an insight: here’s a gap, here’s a better allocation, here’s value that wasn’t being captured.

These discoveries aren’t deliberate. Nobody commissioned them. No R&D budget was allocated. They’re a byproduct of having an intelligent system embedded in the operational fabric of the company. They’re stochastic: a particular conjunction of data, timing, context, and model output produces an insight that wasn’t predicted or planned. Just as with human insight, sometimes it takes years for someone to stumble onto a particular discovery. Sometimes it happens the next day. The rarer the conjunction of conditions required to see it, the longer it stays undiscovered, and the more valuable it is when someone finally does.

This is a fundamentally different kind of innovation from the one Aghion and Howitt modelled. The old kind required upfront investment, which required the expectation of capturing returns, which required protection. This new kind requires no upfront investment at all. The AI is already deployed. The discoveries fall out as a side effect. You don’t need to recoup anything because you didn’t spend anything finding it. It just happened.

The old innovation engine is breaking. But a new one is emerging, one that doesn’t depend on protection because it doesn’t depend on investment.

Why this doesn’t save the solo founder

It would be tempting to think this rescues the solo unicorn prediction. If innovation is now stochastic, and resource allocation gaps are being discovered as a byproduct of having AI deployed, then surely a solo founder can participate in that process too? They can. A founder with AI tools will stumble onto genuine insights, real gaps, real value waiting to be captured. The stochastic mechanism works for them just as it works for anyone else.

But the solo founder still can’t build a billion-dollar company from it. To understand why, you need to understand what makes an established company’s baseline actually work.

An established company is load-bearing. Its customers don’t stay because of something the company discovered last Tuesday. They stay because the company is embedded in their operations: contracts, integrations, workflows, habits, switching costs. Revenue arrives tomorrow regardless of whether the company innovates today. This is the baseline. It’s not exciting money. It’s subsistence. But it’s unconditional. It doesn’t depend on the company being smarter or more innovative than anyone else. It depends on the entirely mundane fact that ripping out infrastructure is expensive and annoying.

This baseline is what allows an established company to survive the stochastic innovation cycle. You discover a resource allocation improvement. You exploit it. Competitors catch up in days or weeks and your temporary margin disappears. But you fall back to the baseline. The load-bearing relationships persist. Then your AI surfaces the next discovery, and the cycle repeats. Each individual discovery is small and fast-decaying, but they arrive as a continuous stream on top of an existing foundation. The company doesn’t need any single discovery to be transformational. It needs the rate of discovery to exceed the rate of decay. That’s a viable model.

A founder has no baseline. No load-bearing relationships. No infrastructure that anyone depends on. When they find a resource allocation gap and exploit it, they make some money for days or weeks. But when replication arrives, there is nothing underneath. The discovery was the company. Replicate it and the entire reason anyone was paying the founder, rather than someone else, evaporates.

Before AI, this didn’t matter, because execution difficulty gave founders exactly the time they needed. You spotted a gap, and then you spent years building the thing that exploited it: hiring engineers, developing the product, onboarding customers, signing contracts. That execution phase was slow and expensive, and that slowness was the founder’s protection. Not because nobody else could see the opportunity (Salesforce wasn’t a secret), but because replicating the execution took the same years and the same millions. And during those years of protected exploitation, something crucial happened. Customers built workflows around you. They integrated your product into their stack. They developed the kind of dependency that persists even when a competitor eventually shows up with something similar. The founder used the exploitation window to become load-bearing. By the time competition arrived, the company had a baseline to fall back on.

The execution period wasn’t just a moat around the insight. It was the bridge between being a person with a discovery and being a company with a baseline. And once you had a baseline, you could charge anything up to the cost of switching you out and your customers would rationally stay. That’s subsistence revenue, unconditional and durable. From that foundation, you could start participating in the continuous stream of stochastic discovery that sustains established companies.

AI removes that bridge. Execution no longer takes years. It takes hours. Which means the founder exploits their insight almost instantly, but so does the first replicator, and the second, and the third. The exploitation window collapses below the threshold needed to become load-bearing. The founder makes some money, the gap closes, and they’re back to zero with nothing underneath them.

So the stochastic discovery model isn’t a counterargument to the solo founder problem. It’s actually a precise explanation of why the solo founder model breaks. The same force that makes continuous discovery possible for established companies (fast, cheap, ubiquitous AI) is the force that compresses exploitation windows below the threshold needed to bootstrap a company into existence. Established companies need frequency of discovery. Founders need duration of exploitation. AI dramatically increases frequency and destroys duration in the same stroke.

The solo founder will find real insights. They’ll capture real value, briefly. But a billion-dollar company requires a baseline, a baseline requires becoming load-bearing, and becoming load-bearing requires an exploitation window that no longer exists.

Rarity is the only value

The stochastic innovation model raises an immediate question: if every company has AI deployed, and discoveries are falling out everywhere, doesn’t everything just cancel out?

Partly, yes. Most of what any company’s AI discovers will be discovered by every other company’s AI within days or weeks. A generic supply chain optimisation, a common pricing inefficiency, an obvious process improvement: these will be found by everyone almost simultaneously. And when everyone finds the same thing at the same time, competitive pressure immediately passes the benefit through to customers. No company captures any lasting profit from it.

This is the critical insight: in a world of commoditised execution and ubiquitous AI, the absolute value of a discovery is irrelevant. The only thing that constitutes real economic value is relative advantage. An insight is worth something only for as long as you have it and your competitors don’t. The moment they find it too, its value goes to zero.

Which means rarity is proportional to relative advantage, and relative advantage is the only source of profit. A common discovery has zero competitive value regardless of how much money it saves in absolute terms. A rare discovery, one that only your company finds because of some specificity in your operational context, can be enormously valuable, for as long as it stays rare.

This creates a new picture of what company profit looks like in the AI era. It’s not sustained margins from a durable product. It’s a continuous, overlapping stream of small temporary advantages, each one fading as competitors catch up, but constantly replenished by new discoveries. The speed of decay is increasing (competitors copy you faster). But the speed of discovery is also increasing (your AI finds new things faster). Whether companies can sustain profit depends on whether the discovery rate keeps pace with the decay rate. That’s genuinely uncertain. But it’s not the simple “everything goes to zero” story that the static view suggests.
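The discovery-versus-decay condition can be made concrete with a small simulation (a toy model of my own, not anything from the economics literature): advantages arrive at some rate, each one erodes as competitors copy it, and long-run profit per period settles near the ratio of the two rates.

```python
import random

def long_run_profit(discovery_rate, decay_rate, periods=10_000, seed=0):
    """Toy model: each period a new advantage appears with probability
    `discovery_rate`, worth 1.0 at first; every live advantage then
    loses a fraction `decay_rate` of its value as competitors copy it.
    Profit per period converges to roughly discovery_rate / decay_rate."""
    rng = random.Random(seed)
    advantages = []  # remaining value of each live advantage
    total = 0.0
    for _ in range(periods):
        if rng.random() < discovery_rate:
            advantages.append(1.0)          # a fresh discovery
        total += sum(advantages)            # margin captured this period
        advantages = [v * (1 - decay_rate)  # competitors erode every edge
                      for v in advantages if v > 1e-6]
    return total / periods

# Faster copying and faster discovery can cancel out:
slow_world = long_run_profit(discovery_rate=0.1, decay_rate=0.05)
fast_world = long_run_profit(discovery_rate=0.4, decay_rate=0.20)
```

Both worlds converge to roughly the same profit per period, because quadrupling the discovery rate offsets quadrupling the decay rate. Whether real companies stay on the right side of that ratio is exactly the open question.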

Specialisation as a statistical edge

This is where the case for specialised AI, which I argued in my previous piece, takes on a new dimension.

If every company runs the same generic frontier model, their discovery distributions are roughly identical. They all find the same things at roughly the same time. The valuable tail of rare, company-specific insights is thin because their systems are all looking at the world through the same lens.

But a company with a specialised system, even one that’s only marginally better at seeing patterns within its specific domain, shifts its entire distribution. It doesn’t just find slightly better recommendations. It finds different ones. Ones that a generic model wouldn’t surface because they require domain-specific context to even recognise as significant.

In a fat-tailed distribution, a small shift in the mean produces a disproportionately large increase in the probability mass at the extremes. If your specialised system shifts the quality of your discoveries by 5%, you don’t get 5% more profit. You might double or triple the amount of time you spend in the profitable tail, finding things nobody else is finding, days or weeks before anyone else stumbles on them by chance.
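That disproportion is easy to demonstrate with a quick simulation (an illustrative sketch that assumes discovery values follow a heavy-tailed lognormal distribution, which is my assumption rather than an empirical fact): shift the distribution’s log-mean by 5% and the extreme tail grows by far more than 5%.

```python
import random

def tail_fraction(mu_shift, threshold=20.0, n=200_000, seed=1):
    """Fraction of simulated discoveries whose value exceeds
    `threshold`, drawing values from a lognormal whose log-mean is
    shifted by mu_shift. Heavy-tailed values are an illustrative
    assumption, not an empirical claim."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.lognormvariate(mu_shift, 1.0) > threshold)
    return hits / n

generic = tail_fraction(0.0)       # everyone running the same model
specialised = tail_fraction(0.05)  # a 5% shift from domain specificity
tail_lift = specialised / generic  # well above 1.05: the tail grows
                                   # much faster than the shift itself
```

Because both runs use the same random seed, the specialised run sees the same underlying draws scaled up slightly, so the comparison isolates the effect of the shift: the share of extreme-value discoveries grows by well over 5%.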

In my previous piece, I argued that specialised AI is a defence against value extraction by big tech: cheaper to run, more accurate, with structural advantages over general-purpose models. That argument still holds. But in the stochastic framework, there’s an additional dimension. Specialisation doesn’t just protect you. It makes you systematically luckier. It shifts the odds in a game that is fundamentally about who discovers what first.

Who captures this value?

If the stochastic model is right, and companies can sustain profit through continuous discovery, the question shifts from “can companies survive?” to “who benefits?”

Under current ownership structures, the answer is: the same small group of shareholders who benefit now. The AI does the discovering, a skeleton crew of humans oversees operations, and the profits flow to capital. Which recreates the concentration problem from my previous piece at a different level. The companies survive, but the workers don’t, and the wealth concentrates just the same.

One possible response is cooperative ownership. If a company is structured as a cooperative, the operational picture is identical: fully automated, AI-driven, globally competitive, continuously discovering. The only thing that changes is where the profit goes. Instead of flowing to a small group of external shareholders, it’s distributed among the cooperative’s members.

The crucial point is that cooperative ownership doesn’t make a company less competitive. The AI is the same. The discovery rate is the same. The specialised systems are the same. Only the distribution of proceeds changes. This means it doesn’t trigger the cross-border prisoner’s dilemma that kills most policy interventions. A cooperative and a traditionally owned competitor are operationally indistinguishable. One just shares the proceeds more broadly.

I don’t want to overstate how easy this would be to implement. Capital flight is a real concern. Political resistance would be significant. The transition mechanics are genuinely hard. But structurally, it’s an elegant solution to the distribution problem because it changes the ownership layer without touching the competitiveness layer. That’s rare among economic policy proposals in this space.

What we can and can’t conclude

I want to be precise about where I think this analysis lands.

I’m fairly confident that AI is structurally undermining the protection side of the Aghion-Howitt balance, the mechanism that has sustained innovation and growth for two centuries. The solo unicorn logic demonstrates this clearly: when execution is commoditised, temporary monopoly rents collapse, and the incentive to invest in deliberate innovation erodes. This is serious, and I don’t think it’s widely enough understood.

I’m moderately confident that a new form of innovation is emerging, one driven by stochastic discovery rather than deliberate investment, and that this creates a mechanism by which companies can continue to generate profit even as the old innovation engine breaks down. The key uncertainty is whether the discovery rate can keep pace with the decay rate. If it can, company profits persist. If it can’t, we’re back to the value extraction story with no counter-force.

I’m hopeful, but not confident, that specialised systems provide a meaningful statistical edge in this new landscape by shifting the discovery distribution toward rarer, more valuable insights. And I think cooperative ownership is the most structurally sound response to the distribution problem, though the implementation challenges are real.

What I can’t conclude is that any of this is enough. Even in the best case, where companies survive through continuous discovery, specialisation provides an edge, and cooperatives distribute the proceeds, we’re still looking at an economy where class mobility is essentially frozen. You can’t out-contribute the next person when AI is doing the discovering. You can’t climb a ladder when the rungs are allocated by luck rather than effort. Everyone is locked in place, not because of oppression in the traditional sense, but because there’s simply no mechanism by which individual human contribution translates into differential economic outcome. The meaning that people have historically derived from work, from the sense that effort leads to advancement, would need to come from somewhere else entirely. That’s a cultural and psychological challenge at least as large as the economic one.

And the honest worst case is that the old innovation engine breaks before the new one fully materialises, that the discovery rate can’t keep pace with decay, and that we end up in exactly the concentration scenario I described in my previous piece, with no functioning counter-force and no political mechanism to address it.

The answer is probably somewhere between those two outcomes. Where it lands depends on choices being made right now: by companies deciding whether to build specialised AI, by governments deciding whether to rethink ownership structures, and by all of us deciding whether to take these questions seriously before the window closes.

The betting pool in Sam Altman’s group chat is focused on when the first solo unicorn appears. I think the more important bet is on what kind of economy it appears in, and whether that economy has any mechanism left to sustain the innovation that built it.
