US Military Used Claude AI in Iran Strikes Despite Trump’s Ban: What We Know

According to recent reporting, U.S. military forces used Anthropic’s Claude artificial intelligence during strikes against Iran in early March 2026, even though President Donald Trump had just ordered federal agencies to stop using Anthropic’s technology. The AI tool reportedly supported intelligence analysis and target planning in the operation. The episode highlights the tension between government AI policy and military operational needs.

What the Reports Say

Multiple news outlets, citing The Wall Street Journal and other reporting, state that U.S. Central Command used Claude AI in the operation against Iran within hours of President Trump’s directive that federal agencies cease using AI tools from Anthropic. The deployment came amid heightened tensions with Iran and joint U.S.–Israeli military action.

Reports suggest Claude’s role included:

  • Analyzing intelligence data
  • Assisting with identification of potential targets
  • Supporting tactical planning and simulated scenarios

Exactly how the AI was used, including whether it directly influenced decisions, has not been independently confirmed by the Pentagon.

Background: Trump’s Anthropic AI Ban

On February 27, 2026, the Trump administration ordered all federal agencies to stop using AI technology developed by Anthropic, the company behind Claude, citing security concerns. The directive followed disagreements over military access and ethical limits on how the AI could be used, including restrictions Anthropic placed on autonomous weapons and mass surveillance.

Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk to national security,” which could bar future defense contracts and compel vendors to cut ties. The order also set a six‑month phase‑out period, meaning some military systems could legally continue to use Claude during that transition as they adopt alternative AI solutions.

Why Claude Was Still Used

Even though the ban was publicly announced, the military’s continued use of Claude in the Iran strikes may reflect:

  1. Operational Dependencies: Claude was already embedded in U.S. military systems, particularly through defense technology partnerships, and could not be removed instantly without disrupting ongoing operations.
  2. Phase‑Out Window: The six‑month phase‑out period allows the Department of Defense and its contractors to gradually transition away from Anthropic’s tools while integrating replacements. 
  3. Timing of Actions: The Iran strikes were underway just hours after the ban, likely before an alternative AI solution was ready to support similar tasks.

Reports indicate that the Pentagon has since engaged with other AI providers, including a new agreement with OpenAI to supply its models for classified military functions while Claude is phased out.

What Claude Is and How It Was Used

Claude is a large language model developed by Anthropic that can assist with data analysis, reasoning, and generating insights from complex inputs. Prior reporting has linked Claude to intelligence and planning tasks in military contexts, including an earlier operation to capture Venezuelan President Nicolás Maduro.

In the reported Iran strikes, Claude was described as supporting:

  • Intelligence assessment: Helping analyze vast amounts of surveillance and signals data. 
  • Target identification: Assisting in weighing and prioritizing potential targets. 
  • Battle simulations: Running scenarios to anticipate outcomes of planned actions.

However, none of the coverage includes official Pentagon confirmation of these specific functions, and some reporting notes that use of the AI at this scale has not been publicly verified.

Legal and Policy Context

The clash between the government and Anthropic centers on how advanced AI should be used by federal agencies and the military:

  • Anthropic’s leadership insisted on ethical constraints, including prohibiting use of its AI for autonomous weapons or mass domestic surveillance.
  • The Pentagon sought broader rights to use Claude for any lawful defense purpose, leading to conflict with the company.
  • As a result, Trump’s directive aimed to sever ties, but the transition period allows continued use while alternatives are adopted. 

Experts note that government contracts and AI integration in classified systems are complex and cannot be reversed instantly upon a public ban. Switching AI systems in secure networks often requires thorough testing, certification, and compliance reviews, which take time.

What This Means Now

As of early March 2026:

  • The use of Claude by the military in Iran strikes, if confirmed, shows how difficult it is to enforce a technology ban immediately across federal systems.
  • Claude’s integration into classified defense workflows means the Pentagon must balance operational continuity with political directives. 
  • New agreements with other AI providers like OpenAI signal a shift toward alternative models that may align better with government requirements. 

Frequently Asked Questions

Was Claude AI actually used in a U.S. military strike against Iran?

Multiple reports, citing The Wall Street Journal, say U.S. Central Command employed Claude for intelligence and targeting support during Iran strikes shortly after President Trump’s ban. 

What did President Trump ban?

Trump ordered all U.S. federal agencies to stop using Anthropic’s AI technology, citing national security concerns and a clash over ethical limits on autonomous weapons and surveillance.

Why could Claude still be used?

The government set a six‑month phase‑out period, allowing continued use in some military systems while replacements are integrated. 

What functions did Claude perform?

Reports suggest Claude assisted with intelligence assessment, target selection, and battlefield simulations, though official confirmation is limited.

Is Anthropic completely banned from government work?

Anthropic has been designated a supply chain risk, which could bar future contracts, but existing systems may continue to use its technology during the transition period. 

Is Claude being replaced?

The Pentagon has made deals with other AI providers, including OpenAI, to take over classified applications as Claude is phased out. 

Last Updated: March 2, 2026

This article is for informational purposes only and does not constitute legal advice. It summarizes current reporting and public statements regarding the use of Claude AI in U.S. military operations.

About the Author

Sarah Klein, JD

Sarah Klein, JD, is a licensed attorney and legal content strategist with over 12 years of experience across civil, criminal, family, and regulatory law. At All About Lawyer, she covers a wide range of legal topics — from high-profile lawsuits and courtroom stories to state traffic laws and everyday legal questions — all with a focus on accuracy, clarity, and public understanding.
Her writing blends real legal insight with plain-English explanations, helping readers stay informed and legally aware.
