Briefing: Mythos, Muse, and the Opportunity Cost of Compute
Published: April 13, 2026 | Source: Stratechery by Ben Thompson | Original article: https://stratechery.com/2026/mythos-muse-and-the-opportunity-cost-of-compute/ | Briefing by: ejsays.com
Core claim: AI has shifted the fundamental economic constraint of tech from marginal cost to opportunity cost. The question is no longer how much does it cost to serve one more user, but what do you give up by serving this user instead of that one. This reframes competition among frontier labs, hyperscalers, and Meta.
Marginal cost vs. opportunity cost: Tech's historic advantage was zero marginal cost — digital output, fixed infrastructure, every additional user essentially free to serve. AI breaks this. Compute is finite. Serving one workload means not serving another. Microsoft CFO Amy Hood confirmed this explicitly: Azure growth was deliberately constrained because Microsoft chose to allocate GPUs to its own Copilot and M365 products first. Had those GPUs gone to Azure, growth would have exceeded 40%.
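The Azure example above can be sketched as a toy calculation. This is a deliberate simplification, not Microsoft's actual math: only the ">40% growth" figure comes from the article, and the fleet sizes, the reserved-GPU count, and the assumption that growth scales linearly with Azure's GPU share are all hypothetical.

```python
# Toy illustration of compute opportunity cost, based on the Azure example.
# Only the 40% unconstrained-growth figure is from the article; every other
# number, and the linear growth assumption, is hypothetical.

def azure_growth(gpus_to_azure: int, gpus_total: int,
                 unconstrained_growth: float = 0.40) -> float:
    """Assume growth scales with the share of the GPU pool Azure receives.
    (Real growth is not linear in GPUs; this is a simplification.)"""
    return unconstrained_growth * gpus_to_azure / gpus_total

total_gpus = 100_000    # hypothetical fleet size
first_party = 30_000    # hypothetical GPUs reserved for Copilot / M365 first

constrained = azure_growth(total_gpus - first_party, total_gpus)
unconstrained = azure_growth(total_gpus, total_gpus)

# The opportunity cost of first-party allocation is the foregone growth:
print(f"constrained growth:   {constrained:.0%}")    # 28%
print(f"unconstrained growth: {unconstrained:.0%}")  # 40%
print(f"opportunity cost:     {unconstrained - constrained:.0%}")  # 12 points
```

The point is structural, not numerical: with a fixed GPU pool, every first-party allocation has a measurable cost denominated in foregone cloud growth.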
Anthropic's Mythos — scarcity as strategy: Anthropic announced Claude Mythos, describing it as capable of surpassing nearly all humans at finding and exploiting software vulnerabilities. Access was restricted to a small number of high-capacity partners via Project Glasswing. Thompson identifies two reasons beyond safety: (1) Anthropic is already compute-constrained serving existing models; (2) limiting access protects pricing power against open-source distillation. DeepSeek, Moonshot, and MiniMax ran 16 million exchanges through ~24,000 fraudulent accounts to distill Claude's capabilities — a confirmed industrial-scale operation.
Meta's structural advantage: Meta Superintelligence Labs released Muse Spark, a multimodal reasoning model. Not yet state of the art, but competitive. Meta's unique position: no enterprise cloud business, no compute opportunity cost when serving consumers, and an at-scale advertising business to monetize usage. Thompson argues Meta should open-source Muse — the entities most hurt by a freely available frontier model are other frontier labs, whose pricing power and compute access would be eroded.
Aggregation Theory — still alive? Thompson's original framework held that owning the customer relationship wins in tech. He argues this principle survives the AI transition — owning demand will ultimately trump owning supply. But the compute constraint is real and may last longer than expected.
Compute Allocation: Where Each Player Prioritizes
| Company | First priority | Second priority | Opportunity cost |
|---|---|---|---|
| Microsoft | M365 Copilot, GitHub Copilot | Azure customers | Azure growth deliberately capped |
| Anthropic | Existing Claude models | Mythos (restricted) | Consumer access constrained |
| OpenAI | Enterprise agentic workloads | ChatGPT consumer | Consumer product deprioritized |
| Meta | Consumer (no enterprise) | Open source | Minimal — advertising monetizes usage |
| Google | GCP + Gemini consumer | Anthropic investment | Balanced across both |
Distillation Attack: Scale
| Metric | Figure |
|---|---|
| Fraudulent accounts used | ~24,000 |
| Exchanges generated | 16 million+ |
| Labs identified | DeepSeek, Moonshot, MiniMax |
| Method | Training smaller models on Claude outputs |
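The figures in the table above imply substantial per-account throughput, which is a quick back-of-envelope check; the per-account average is derived here, not stated in the source, and treats the 16 million and ~24,000 figures as exact.

```python
# Back-of-envelope scale of the distillation operation, using the figures
# in the table above (16M+ exchanges over ~24,000 fraudulent accounts).
# Both inputs are the source's approximate figures.

exchanges = 16_000_000
accounts = 24_000

per_account = exchanges / accounts
print(f"average exchanges per account: {per_account:,.0f}")  # ~667
```

Several hundred exchanges per account is well beyond ordinary usage, consistent with the article's characterization of the operation as industrial-scale rather than incidental scraping.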
Editorial Note (ejsays.com):
Thompson's opportunity cost framework is a genuine insight. The shift from marginal cost to opportunity cost is real and documented — Microsoft's GPU allocation decision is evidence, not theory. His core argument that owning demand ultimately trumps owning supply is a defensible position worth taking seriously.
Two gaps worth noting.
First, compute and intelligence are not the same thing. Thompson treats compute allocation as a proxy for AI capability allocation, but more compute does not linearly produce more intelligence. The relationship between the two is still poorly understood. His framework is precise about the resource; it is less precise about what the resource actually produces.
Second, Thompson's demand assumption is closed. His "demand" refers to enterprise and consumer willingness to pay for AI services — and he assumes it remains robust. But that demand sits inside a larger economy. AI systematically eliminating truck driver income, junior analyst roles, and software testing headcount also eliminates consumers. The enterprise logic of using AI to reduce labor costs and the macroeconomic logic of maintaining a consumer base with purchasing power are in structural tension. Thompson does not walk outside the AI industry boundary to see this contradiction. The 90-Nvidia math from this publication's prior analysis suggests the numbers do not close at the macro level. Whether that reckoning arrives slowly or quickly is uncertain. That it does not arrive at all is an assumption, not a conclusion.
The current model is supply-driven. Capital is the fuel. The real question Thompson does not ask: if capital cools, can genuine demand sustain the supply that has been built?