US Military Deployed Claude AI in Iran Strikes Despite Trump’s Ban

Update: 2026-03-02 09:43 IST

In a striking development that underscores the complexities of AI integration within national defense systems, the US Military reportedly relied on Anthropic’s Claude AI model during its recent strikes on Iran—just hours after US President Donald Trump designated the company a “supply chain risk.”

On February 28, American and Israeli forces carried out a coordinated offensive on Iranian targets under the codename Operation Epic Fury. The joint military action reportedly focused on key government installations, including nuclear facilities and strategic military infrastructure. According to reports from The Wall Street Journal and Axios, the US Department of Defense utilized Anthropic’s Claude AI model to assist with intelligence gathering, target identification, and battlefield simulations during the operation.

The timing of Claude’s use has drawn particular attention. President Trump had instructed all US government agencies to cease use of Anthropic’s AI systems, citing national security concerns. Defense officials, however, reportedly indicated that an immediate withdrawal was not feasible.

Claude remains the only AI system currently embedded within certain classified US government networks. As a result, a complete transition away from the platform is expected to take up to six months. While the administration has announced plans to phase out the model, operational realities appear to have necessitated its continued short-term use.

The decision to label Anthropic a supply chain risk stems from policy disagreements between the Pentagon and the AI startup. Anthropic CEO Dario Amodei has publicly resisted allowing Claude to be used for domestic mass surveillance or the development of autonomous weapons systems. Following the government’s designation, Amodei stated that the company would challenge the “supply chain risk” label in court.

As the Pentagon moves to replace Claude, OpenAI has reportedly signed an agreement to provide its own AI models for US government use. Still, transitioning advanced AI systems into classified defense environments is a complex and time-intensive process. Officials suggest that OpenAI’s models will require significant testing and integration before they can be deployed effectively.

OpenAI CEO Sam Altman has emphasized that his company also maintains firm boundaries. He insists that OpenAI will not permit its models to be used for domestic mass surveillance or the creation of autonomous weapons—positions that echo Anthropic’s earlier stance.

The episode highlights the growing tension between national security demands and the ethical frameworks established by leading AI companies. It also reflects how deeply AI tools have become embedded in modern defense operations, making rapid policy shifts difficult to implement in practice.

As the US government works through its transition plan, Operation Epic Fury may become a case study in how geopolitical strategy, AI ethics, and executive directives intersect in an era increasingly shaped by artificial intelligence.
