Anthropic and the Split from OpenAI

The modern AI landscape is shaped by people moving between labs and by differing views on safety, governance, and product pacing. Anthropic, founded in 2021 by former OpenAI researchers including siblings Dario and Daniela Amodei, is one of the most visible examples of a major split in the frontier ecosystem.

Why splits happen

As labs scale, disagreements can arise about risk tolerance, deployment strategy, governance, and which research bets to prioritize. Because frontier training is expensive, these disagreements are amplified: you can’t easily “agree to disagree” while sharing one compute budget.

Safety framing as a product and research strategy

Safety work spans data curation, post-training, evaluation, and deployment policies. Some labs emphasize formalized principles and systematic preference shaping (e.g., Anthropic's Constitutional AI); others emphasize iterative deployment and user feedback. In practice, most successful deployments blend both.

Connections

For the post-training mechanics that underlie many safety approaches, see RLHF and Post-Training. For the broader ecosystem of new labs, see New Labs.