US military reportedly used Anthropic's Claude in Iran strikes despite Trump ban
Mar 2nd 2026
Reports say US forces relied on Anthropic's Claude for intelligence and targeting in the Iran strikes, hours after President Trump ordered a federal ban on the company's AI tools.
- Wall Street Journal and Axios reported the US military used Anthropic's Claude for intelligence, target selection, and battlefield simulations during the Iran strikes.
- President Trump ordered all federal agencies to stop using Anthropic's tools hours before the strikes and criticized the company on Truth Social.
- Anthropic objected to the military's use of its models after Claude was used in the January raid to capture Venezuela's president, citing usage terms that forbid violent and surveillance applications.
- Defense Secretary Pete Hegseth accused Anthropic of 'arrogance and betrayal' and demanded full access to its models, while allowing the military up to six months to transition off them.
- OpenAI has reached an agreement with the Pentagon to provide its tools, including ChatGPT, for use on the classified network.
- The episode highlights how deeply AI tools are embedded in military systems and the difficulty of rapidly detaching them.