The Digester

Anthropic rejects Pentagon demand to remove AI guardrails over safety concerns

Feb 27th 2026

Anthropic says it will not strip safety guardrails from its Claude AI for Pentagon use, arguing that current systems could enable mass surveillance and are not safe enough for autonomous weapons. The company has offered R&D collaboration instead, even as the Pentagon threatens sanctions.

  • Anthropic refused a Pentagon request to allow unrestricted military use of its Claude models by removing safety protections.
  • CEO Dario Amodei said AI now enables mass domestic surveillance and is not reliable enough to power fully autonomous weapons.
  • Anthropic offered to collaborate on R&D to improve system reliability but says the Pentagon has not accepted that offer.
  • The Pentagon has threatened contract cancellations and penalties, and has imposed a compliance deadline on Anthropic.
  • Anthropic says it wants to continue supplying the Pentagon but will not remove guardrails that it believes would put troops and civilians at risk.