The New Lab Proliferation: SSI, Thinking Machines, and the Post-OpenAI Diaspora

As foundation models became strategically important, the ecosystem diversified: new labs formed with different governance structures, safety philosophies, and product goals. This post summarizes the structural reasons new organizations keep appearing.

Why new labs are inevitable

Frontier training combines scarce inputs: top talent, massive compute, proprietary data, and operational know-how. When talented groups disagree on strategy—or simply want autonomy—the easiest path is a new organization.

Different optimization targets

In AI, “what you optimize” defines “what you get.” Different labs may optimize for:

- Fast capability progress.
- Product reliability and integration.
- Safety and controlled deployment.
- Open research and publication (to varying degrees).

Safety narratives vs incentives

Safety language can be principled, strategic, or both. The hard part is governance: designing incentives that keep safety work robust even as competitive pressure rises.

Connections

For the technical side of “shaping behavior,” see RLHF and Post-Training. For scaling forces that make frontier work expensive and concentrated, see Scaling Laws. For one prominent split, see Anthropic.