Noah Smith: Pentagon's Case Against Anthropic Is Right — AI May Become a Superweapon That Only Nation-States Should Control
Economist Noah Smith published an essay on March 6, 2026 arguing that the Pentagon's designation of Anthropic as a supply chain risk is morally and strategically defensible: if frontier AI becomes a genuine superweapon, nation-states have a legitimate interest in monopolizing access so that its use remains subject to the laws of war and democratic accountability. Smith concedes that Anthropic's position is sympathetic, but contends that letting private companies independently deploy weapon-class technology without government oversight would set a more dangerous precedent than the designation itself. The essay drew significant Techmeme attention and adds a prominent dissenting voice to the prevailing tech-community framing of the dispute as purely political overreach.
Key Takeaways
- Smith's central argument: if frontier AI models can enable mass-casualty superweapons (engineered bioweapons, cyberattacks on critical infrastructure), then the existing laws-of-war framework requires that access be monopolized by nation-states, as it is with nuclear, chemical, and biological weapons
- Essay explicitly supports the Pentagon's supply chain risk mechanism as a legitimate tool for that monopolization, while acknowledging Anthropic's good-faith position and its track record on safety
- Published at Noahpinion (noahpinion.blog) on March 6, 2026; featured on Techmeme as a counterweight to coverage framing the Anthropic-Pentagon dispute as pure political intimidation
Original source: Noahpinion