Industries follow patterns that are remarkably predictable in aggregate and almost entirely useless at the individual level.

This is the central finding of organizational ecology — a body of research that studies how populations of organizations are born, compete, and die. Not individual companies. Populations. The field has been documenting these dynamics across industries for decades, from American breweries to European automobile manufacturers, and the patterns are eerily consistent. The specific companies that survive are unpredictable. The shape of the wave is not.

I encountered these ideas in university, and they changed how I see. I’ve written about that elsewhere. What I haven’t done is show what the lens reveals when you point it at something specific. Whether the dynamics I’m about to describe explain what’s happening in AI or merely describe it is a distinction worth maintaining.

The Legitimacy Desert

In the early 2000s, nobody wanted to be an AI company.

This sounds odd now, but the term carried real stigma. AI had been through two “winters” — periods where the field’s promises dramatically outran its capabilities and funding collapsed. By the mid-2000s, artificial intelligence was widely viewed as a dead end, or at best a research curiosity decades away from practical relevance. The organizations working on what we’d now call AI avoided the label entirely. They were “analytics” companies, “machine learning” startups, “data mining” firms. The technology was real. The category wasn’t.

Organizational ecology has a framework for this. When a new organizational form — a recognizable type of company with characteristic structural features — first appears, it faces a legitimacy problem. Investors don’t know how to evaluate it. Customers don’t trust it. Talent doesn’t see a career path. The form lacks what ecologists call cognitive legitimacy — it isn’t yet taken for granted as a real, viable category. And because it isn’t taken for granted, fewer organizations adopt the form, which keeps density low, which prevents the form from becoming taken for granted. The spiral reinforces itself downward.

This is the core of density dependence theory, formalized by Michael Hannan and Glenn Carroll across a series of studies from the late 1980s onward. Two forces act on any population of organizations simultaneously. Legitimation: every new entrant makes the form more credible, which makes it easier for the next entrant to adopt it. Competition: every new entrant competes for the same finite resources. At low density, legitimation dominates — the rising tide lifts all boats. At high density, competition dominates — the form is established but the space is crowded. Plot founding rates against density and you get an inverted U: rising as legitimation takes hold, falling as competition intensifies.
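
The inverted U has a standard formalization in this literature: the founding rate is modeled as log-quadratic in density, with a positive first-order coefficient carrying legitimation and a negative second-order coefficient carrying competition. A minimal sketch in Python, using the Hannan-Carroll functional form but with made-up coefficients rather than estimates from any dataset:

```python
import math

def founding_rate(density: float, b0: float = 0.5,
                  b1: float = 0.06, b2: float = -0.0004) -> float:
    """Log-quadratic founding rate in the Hannan-Carroll spirit.

    b1 > 0: legitimation, each entrant makes the form more credible.
    b2 < 0: competition, each entrant crowds the resource space.
    Coefficients are invented for illustration only.
    """
    return math.exp(b0 + b1 * density + b2 * density ** 2)

# The rate rises, peaks (here at density = -b1 / (2 * b2) = 75),
# and falls: the inverted U.
for n in (5, 25, 75, 150, 250):
    print(f"density={n:>3}  founding_rate={founding_rate(n):6.2f}")
```

The mortality side of the model is the mirror image: a U shape that starts high with the liability of newness, falls as legitimation takes hold, and rises again as crowding sets in.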

The AI industry before 2012 was stuck at the bottom of this curve. Density was low. Legitimacy was absent. A few specialist firms — Nuance in speech recognition, Autonomy in enterprise search — survived in niches that the larger technology generalists didn’t bother to serve. IBM, Microsoft, and early Google occupied the center of the market, treating machine intelligence as a feature within broader product offerings rather than the core of an organizational identity. The specialists persisted in their shadow — a dynamic Glenn Carroll called resource partitioning, where generalist dominance at the market center paradoxically creates space for specialists at the periphery.

The category avoidance is the telling detail. When organizations actively dodge a label — when being called an “AI company” is a liability rather than an asset — it suppresses apparent density below even the already-low actual level. The form looks even less legitimate than it is. Each organization that avoids the label reinforces the stigma for every other organization considering the form. It’s density dependence with an additional feedback loop running through the category itself.

The Surge

Then AlexNet happened.

In 2012, a neural network built by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet competition by a margin that shocked the field. The victory was dramatic enough to function as a proof of concept — not just for image recognition, but for the entire deep learning approach that had been languishing in relative obscurity. Within a few years, the implications had propagated through the research community and into the investment community.

What followed was a textbook legitimation surge. The “AI startup” went from stigmatized label to powerful signal within the span of a few years. Venture capital flooded in. Talent migrated from adjacent fields. Every conference added an AI track. Every corporation added an AI strategy. Each new entrant made the form more credible, which attracted more entrants, which made it more credible still. The spiral that had been reinforcing downward reversed direction.

The density dependence model predicts that founding rates should trace an inverted U against density — rising as legitimation takes hold, then falling as competition intensifies. The AI industry from 2012 to 2017 was firmly on the left side of that curve. Foundings accelerated. The form was gaining legitimacy at a pace that felt extraordinary, even by the standards of the technology industry — though I don’t have the equivalent of Hannan and Carroll’s brewing industry dataset to test this properly. The broad shape is consistent with the theory, but “consistent with” is a much weaker claim than “explained by.”

But the expected mortality spike didn’t arrive. The model predicts that as density rises, mortality should rise with it — competition intensifying as the space gets crowded. AI startup mortality didn’t follow that pattern in this period. Two dynamics were suppressing it.

First, the carrying capacity of the niche was expanding faster than the population. Density dependence operates relative to carrying capacity — how many organizations the environment can sustain. If the addressable market is growing faster than the number of firms, the competitive effects that normally accompany rising density get delayed. The AI market from 2012 to 2017 was doing exactly this. New application domains — autonomous vehicles, healthcare diagnostics, financial modeling, natural language processing — were opening faster than firms could fill them.
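
One way to make the mechanism concrete is to let competitive pressure depend on how full the niche is rather than on raw density. A toy sketch with entirely invented growth rates; the point is only that pressure can fall while the population grows:

```python
def competitive_pressure(n_firms: int, carrying_capacity: int) -> float:
    """Pressure scales with niche fullness (N / K), not raw density (N)."""
    return n_firms / carrying_capacity

# Invented numbers: the population grows 40% a year, but the niche's
# carrying capacity grows 60% a year, so pressure falls as density rises.
firms, capacity = 100, 400
for year in range(2012, 2018):
    print(f"{year}: firms={firms:>4}  capacity={capacity:>5}  "
          f"pressure={competitive_pressure(firms, capacity):.2f}")
    firms = int(firms * 1.4)
    capacity = int(capacity * 1.6)
```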

Second, acquisition was removing organizations from the independent population before market selection could act on them. Google acquired DeepMind in 2014. Intel acquired Nervana Systems and Movidius. Apple, Facebook, Amazon — everyone was buying AI startups. In population ecology terms, these firms weren’t dying. They were being absorbed. The number of independent organizations fell, but the capabilities were preserved and concentrated within larger hosts.

The New Form

Around 2017, something structurally different began to emerge.

The transformer architecture, published that year, set in motion a series of developments that would culminate in large language models capable of things nobody — including their creators — had fully anticipated. GPT-2, GPT-3, GPT-4 — each iteration dramatically expanded the envelope of what AI could do, and each required dramatically more capital, compute, and specialized talent to build.

The organizations that formed around these models looked like nothing the AI industry had produced before. OpenAI, originally a non-profit research lab, restructured into a capped-profit entity and then pursued further corporate reorganization. Anthropic was founded by researchers who left OpenAI over disagreements about safety and commercial direction. Google merged its Brain and DeepMind divisions into a single unit. Each of these organizations combined structural features that don’t normally coexist in a single firm: the open inquiry norms of a research lab, the revenue ambitions of a commercial platform, the mission-driven governance of a safety institution, and the growth imperatives of a venture-backed startup.

This is a new organizational form. The frontier AI lab shares characteristic properties across its members: massive compute investment, dedicated safety and alignment teams, API-based distribution, an uneasy mix of publication and secrecy, and governance structures that attempt to balance commercial incentives with broader commitments. These properties aren’t accidental. They reflect the specific conditions prevailing when these organizations were founded.

Arthur Stinchcombe, the sociologist whose 1965 work laid much of the groundwork for organizational ecology, argued that organizations bear the stamp of the era of their founding — that the institutional templates and ambient assumptions available at the moment of birth leave a lasting structural imprint. OpenAI’s non-profit origins still echo in its governance battles. Anthropic’s founding charter reflects the safety concerns of researchers who watched OpenAI’s original mission drift under commercial pressure. Google DeepMind carries the imprint of both a British academic AI lab and a Silicon Valley platform company. These founding conditions don’t determine everything, but they constrain what each organization can become in ways that are visible from the outside and felt acutely from within.

The frontier lab form also disrupted one of organizational ecology’s more reliable predictions. Carroll’s resource partitioning theory holds that as markets mature and generalists concentrate at the center, specialists proliferate at the periphery — finding niches the generalists can’t efficiently serve. This has held across dozens of industries. But foundation models broke the pattern. A single model, accessed through an API, could serve a startling range of peripheral niches — legal analysis, medical question-answering, code generation, creative writing — at near-zero marginal cost. The generalist wasn’t just occupying the center. It was reaching into the periphery.

The population itself is inherently tiny. Building a frontier model requires hundreds of millions to billions of dollars, access to a supply chain dominated by a single GPU manufacturer, and a talent pool so narrow that individual researcher departures make headlines. At any given moment, perhaps five to ten organizations worldwide are genuinely operating at the frontier. This is a strange situation for population ecology — a form that’s highly legitimate but extremely sparse. The density is low not because the form lacks legitimacy but because the barriers to entry are extraordinary.

And the form is already showing signs of strain. Each frontier lab faces evaluation from multiple audiences — investors want growth and returns, regulators want accountability and caution, researchers want intellectual freedom, the safety community wants governance and restraint — and satisfying one increasingly conflicts with satisfying the others. Organizational ecologists call this category spanning, and it carries a known penalty: organizations that straddle multiple categories are harder for any single audience to assess, and they get discounted for it. OpenAI’s governance crisis in late 2023 was a visible eruption of these tensions. It won’t be the last.

What Acquisition Does

Organizational ecology was built for a simpler world. In the classical framework, organizations are born and they die. The environment selects among them. The populations that survive are those whose forms fit the conditions. It’s clean.

Acquisition fits nowhere in this framework, yet in the AI industry it has arguably been the dominant population dynamic since 2014.

When an organization dies — goes bankrupt, dissolves, shuts down — its capabilities disperse. Talent scatters. Knowledge fragments. The competitive pressure it exerted on others evaporates. For the remaining organizations, a competitor’s death is a net relief. Space opens up.

Acquisition does something different.

First, it concentrates capabilities rather than dispersing them. When Google acquired DeepMind, it didn’t remove a competitor from the field. It armed one. DeepMind’s research capabilities were preserved and enhanced within Google’s infrastructure — more compute, more data, more resources than DeepMind could have accessed independently. For the remaining independent labs, this was worse than a competitor dying. The acquiring generalist became more formidable while the independent population shrank.

Second, acquisition blurs the boundaries of the population in ways the theory doesn’t handle well. Is DeepMind still a frontier AI lab? It retains many structural features of the form — research teams, publication norms, safety commitments. But it no longer faces market selection in the same way an independent organization does. Its survival depends on internal politics within Alphabet, not on whether customers buy its products or investors fund its work. There’s a useful distinction here between organizational identity — the structural features that make an organization recognizable as a type — and organizational independence. Acquisition eliminates independence while partially preserving identity. But identity without independence erodes gradually, as the host organization’s priorities reshape the acquired entity from within.

Third — and this is the part I find most counterintuitive — acquisition reverses the direction of selection. In normal market competition, the weakest organizations fail. They run out of money, customers, or relevance, and the environment removes them. Acquisition does the opposite. It targets the strongest — precisely because capability, talent, and strategic position are what make an organization an attractive target.

Market selection kills the weakest. Acquisition removes the strongest.

This inverted selection reduces diversity faster than natural competition would. The population doesn’t just shrink. It loses its upper tier.

If you wanted to formalize this, you might modify the density dependence model so that “effective density” accounts not just for the number of independent organizations but for the capabilities sitting inside generalist hosts. This would predict that industries with high acquisition rates feel competitive pressure earlier than simple headcounts suggest, because the capabilities are still in play — they’re just concentrated in fewer, larger entities. The AI industry’s competitive intensity feels disproportionate to its small number of independent players. This might be part of why.
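
Here is roughly what that modification could look like. Everything in this sketch is an assumption: the linear weighting, the parameter value, the example counts. It illustrates the idea, not a calibrated model:

```python
def effective_density(independent: int, absorbed: int,
                      weight: float = 0.7) -> float:
    """Density that counts absorbed capabilities as partial competitors.

    independent: independent organizations still facing market selection.
    absorbed: capability units sitting inside generalist hosts after
        acquisition, still exerting competitive pressure.
    weight: how much an absorbed unit counts relative to an independent
        firm (0 recovers the classical model). All values are assumptions.
    """
    return independent + weight * absorbed

# A headcount of 8 independent frontier labs understates the pressure
# if 5 acquired labs are still competing from inside larger hosts.
print(effective_density(independent=8, absorbed=5))  # 11.5
```

Feeding effective density rather than headcount into the founding-rate model above would shift the predicted onset of the competitive phase earlier, which is the informal claim of this section.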

The Concentration Thesis

There is a version of the AI story where none of this population-level analysis matters, because one organization wins and everyone else becomes irrelevant.

The argument goes like this. A lead in AI capabilities creates a self-reinforcing loop. The best models attract the best researchers, who want to work where the most powerful systems exist. The best models attract the most customers, generating revenue. Revenue funds more compute. More compute and better talent produce the next capability jump. And the recursive element — the AI itself accelerates the research, writing code, analyzing experiments, suggesting architectures. The output improves the production function. The leader doesn’t just advance. It accelerates away from the field at an increasing rate, and the gap becomes structurally unbridgeable.

This is the strongest challenge to the ecological view, because if it’s right, there is no meaningful population. There’s a monopolist.

I don’t think it holds, for reasons that are structural rather than optimistic.

Capability leads in AI have not persisted. GPT-4 was widely regarded as the clear frontier when it launched in early 2023. Within eighteen months, models from Anthropic, Google, Meta, and DeepSeek had substantially closed the gap. The key inputs to frontier research — algorithmic ideas, compute, training data — are not proprietary in the way the concentration thesis requires. Ideas diffuse through publication and researcher mobility. Compute is expensive but purchasable. Training data is largely drawn from the same public internet. The niche overlap between frontier labs is enormous — they depend on the same resources, and no single player can monopolize them. And scaling itself is hitting diminishing returns. If doubling your compute budget yields only incremental capability gains, the advantage of resource dominance shrinks. Capability turns out to be multi-dimensional in a way that resists single-track ranking — one model leads on coding, another on creative writing, a third on multilingual performance. This differentiation space sustains an oligopoly, not a monopoly.
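
The diminishing-returns point can be made stylized with a generic power-law scaling curve. The constants below are placeholders rather than measured scaling laws, and loss stands in loosely for capability:

```python
def loss(compute: float, l0: float = 4.0, alpha: float = 0.05) -> float:
    """Stylized power-law scaling: loss falls slowly as compute grows.

    l0 and alpha are illustrative placeholders, not measured values.
    """
    return l0 * compute ** (-alpha)

# Each doubling of compute buys a smaller absolute improvement than the
# last, so a resource lead converts into a shrinking capability lead.
prev = loss(1.0)
for d in range(1, 6):
    cur = loss(2.0 ** d)
    print(f"{d} doublings: loss={cur:.3f}  improvement={prev - cur:.3f}")
    prev = cur
```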

The talent loop is leaky. Researcher motivations are heterogeneous — intellectual freedom, mission alignment, publication norms, equity upside, the desire to build something rather than optimize something. Dominance creates the same organizational pathologies it creates everywhere: bureaucracy expands, politics intensify, individual agency contracts. The pattern is old. Xerox PARC. Bell Labs in its later years. Brilliant people leaving a dominant institution because the institution can no longer use them well. Anthropic was founded by researchers who left OpenAI. DeepSeek’s architectural innovations came from outside the Western frontier labs entirely. As frontier labs grow and succeed, they accumulate what Hannan and Freeman identified as the signature of successful organizations — reliable structure, accountable process, reproducible output — which are also the properties that make them rigid when the paradigm shifts and unattractive to talent that thrives on agility.

And if AI tools accelerate AI research, they do so for every well-resourced group, not just the leader — recursive improvement compresses catch-up time rather than extending the leader’s advantage. Meanwhile, dominance triggers its own counterpressures: antitrust scrutiny across multiple continents, regulatory overhead that scales with market position, and the paradoxical opening of niches — regulated markets, sovereign AI initiatives, vertical applications — that a dominant organization cannot efficiently serve.

There is one version of this argument I can’t fully rebut. If a frontier lab’s internal research AI generates insights that are genuinely non-obvious, hard to reverse-engineer from published results, and that compound across model generations, the knowledge production process itself could become opaque to outsiders. Diffusion would slow not because of deliberate secrecy but because the knowledge doesn’t decompose into transferable units — you can’t learn it from a paper because the paper can’t describe it.

This is the scenario where I’d be wrong about everything else. If internal recursive improvement sustainably outpaces knowledge diffusion — something that has not occurred in any previous technology industry — the ecological framework breaks.

I don’t think we’re there. The bottleneck in frontier research remains human-directed: choosing which experiments to run, interpreting surprises, making bets about where to invest. AI accelerates the execution of research but adds least value to the judgment calls that determine whether you’re working on the right problem. And internal toolchains optimized for the current paradigm may deepen your commitment to an approach rather than help you escape it when it plateaus.

The difference between the ecological prediction — oligopoly — and the concentration prediction — monopoly — depends on whether this threshold gets crossed. I suspect it won’t. I don’t know that it won’t.

The most likely outcome, in my assessment, is a tight oligopoly of three to five frontier organizations, partially differentiated, competing intensely on shared resources. This is the pattern observed in other capital-intensive, technology-driven industries: aviation, semiconductors, cloud computing. The dynamics specific to AI — recursive improvement, extreme capital requirements, narrow talent pools — compress the timeline and raise the stakes, but they don’t change the structural logic.

What Comes Next

If the frontier AI lab is a transitional form — and the category-spanning tensions, accumulating inertia, and structural contradictions suggest it is — then the question is what succeeds it.

The hybrid identity that made the frontier lab viable in its founding conditions becomes increasingly unstable as those conditions change. Research labs want to publish. Commercial platforms want to ship. Safety institutions want to slow down. Startups want to grow. The frontier lab tries to do all four, and for a while it can — when the technology is advancing fast enough and the capital is flowing freely enough, the tensions are manageable. But each structural commitment the lab accumulates — API contracts, safety review processes, revenue targets, governance obligations — constrains its future flexibility. When an organization eventually attempts fundamental restructuring, it temporarily loses the competencies and audience trust that sustained it, elevating its mortality risk during the transition. The longer it delays, the deeper the commitments and the more dangerous the attempt.

My expectation is that the form splits into clearer successor types, and that the split is already beginning.

The first is the AI infrastructure utility — an organization focused on reliable, scalable, regulated delivery of AI capabilities. This is what happens when the commercial platform function wins the internal tug-of-war. Utility-style governance, predictable pricing, reliability commitments, regulatory compliance as a core competency. The most likely path to this form is acquisition by adjacent incumbents who already possess the template: Amazon, Microsoft, and Google all operate cloud infrastructure businesses with the governance and operational DNA that an AI utility requires. The current partial-dependency arrangements — Microsoft’s investment in OpenAI, Amazon’s investment in Anthropic — look, from this angle, like transitional structures. They resolve into full acquisitions or they fracture. The ecological logic favors resolution, because the acquirer provides the structural template that the frontier lab cannot build internally without triggering the very instability it’s trying to avoid.

The second is the pure research institute — an organization focused on fundamental capability research and safety work, freed from the pressure of market selection by state patronage. This has historical precedent. After World War II, the United States created a network of federally funded research and development centers — RAND, Sandia, the Jet Propulsion Laboratory — that pursued long-horizon research without commercial revenue pressure, sustained by defense budgets and justified by national security imperatives. International science institutions like CERN serve a similar function through multilateral funding. If governments decide that frontier AI research is a strategic priority — and several already have — the creation of state-sponsored AI research institutes follows naturally. These would absorb the research function that the frontier lab currently houses alongside its commercial operations.

This form carries its own pathologies. State patronage insulates from market selection but exposes to political selection — funding depends on legislative priorities, which shift. Bureaucratic ossification sets in as accountability structures multiply. Secrecy norms can conflict with the open publication culture that drives scientific progress. The postwar FFRDCs experienced all of these.

The most likely transition mechanism is what the acquisition section already described — frontier labs absorbed into larger firms that provide the structural template. The partial-dependency arrangements already in place push toward this resolution. But two other paths are worth naming.

The first is new entry by organizations designed from the ground up for the emerging conditions. Purpose-built AI utilities or research institutes, founded without the hybrid baggage. Initially unimpressive compared to frontier labs — Stinchcombe’s liability of newness applies, since new organizations lack established routines and audience trust — but structurally better adapted to the environment that’s forming. When the environment shifts to favor a new form, organizations born into it have an advantage over incumbents attempting to retrofit.

The second — least likely but most celebrated when it occurs — is successful internal restructuring. A frontier lab manages genuine transformation into one of the successor forms without the transition itself proving fatal. If it happens, it most likely involves a lab with strong external institutional support and a clear adjacent template. Google DeepMind within Alphabet’s cloud infrastructure is the most plausible candidate. This would be the kind of exception that business school case studies are written about — noteworthy precisely because organizational ecology predicts it to be rare.

There is a speculative possibility that I find interesting but can’t defend with evidence. Every major coordination technology has eventually produced organizational forms that would have been structurally impossible without it. Bureaucracy requires written records. The multidivisional corporation requires telecommunications. Open-source projects require the internet. If AI is a genuinely new coordination technology, it would be surprising if it didn’t eventually produce a genuinely new organizational form — not an organization that uses AI, but one whose coordination structure is constitutively shaped by it. An organization where AI doesn’t just route human decisions but makes routine coordination decisions, adapts its own processes, generates its own routines. The difference between a road network and an autonomous transit system.

I’m not sure this distinction is real yet. Current AI doesn’t coordinate in a way that’s qualitatively different from the algorithmic coordination already present in quantitative trading firms or platform companies. The Skunk Works, the quant trading desk, the open-source community — each was a small team with powerful tools and less bureaucracy, enabled by its era’s coordination technology. What I’ve described might just be the next iteration of the same pattern. It would be competitively important but not a new organizational form in the ecological sense. Claiming more outruns the evidence.

This is where the framework reaches its boundary. Organizational ecology tells us that new forms emerge from variation and selection. It doesn’t tell us what they look like before they exist.


The researchers who built organizational ecology were not trying to help companies survive. They were trying to describe what actually happens — trading prescriptive comfort for descriptive honesty. The framework doesn’t tell you what to do. It tells you where the structural pressures are, where the inertia lies, and which scenarios are more or less plausible given decades of evidence across industries that look nothing like each other and yet behave, at the population level, in strikingly similar ways.

Applied to AI, the lens suggests that the organizations dominating this era will most likely not dominate the next — not because they lack talent or resources or awareness, but because the structural commitments that made them successful in this paradigm will make them rigid in the face of the next one. This is the most counterintuitive and the most empirically robust prediction the field makes. It has held across brewing, newspapers, automobiles, telecommunications. Whether AI is the exception remains genuinely open.

I’ve been carrying these ideas for a long time. They haven’t given me any ability to predict which companies will thrive or fail — that’s not what population-level thinking does. What they’ve given me is a vocabulary for dynamics that are otherwise invisible, and a persistent skepticism toward anyone claiming to know how this ends. The patterns are legible. The specifics never are.

The story is the same. The clock is faster. And the organizations that see it most clearly will, as always, mostly fail to get out of the way.