Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute

Anthropic was designated a "supply chain risk" by the Department of War after negotiations collapsed over the company's request to prohibit its AI model Claude from being used for mass domestic surveillance and fully autonomous weapons. The designation triggered federal orders to phase out Anthropic technology and drew public support from employees at other AI firms; Anthropic calls the move legally unsound, while OpenAI reports a separate agreement with the DoD. #Anthropic #Claude

Key Points

  • The Department of War directed a supply chain risk designation for Anthropic after failed talks with the company.
  • Anthropic refused exceptions that would allow its model Claude to be used for mass domestic surveillance or fully autonomous weapons.
  • President Trump ordered federal agencies to phase out Anthropic technology and the Pentagon told contractors to cease commercial activity with Anthropic.
  • Anthropic called the designation legally unsound, arguing that any supply chain risk finding under 10 USC 3252 would apply only to DoW contracts.
  • OpenAI says it reached an agreement with the DoD while hundreds of Google and OpenAI employees urged their companies to back Anthropic.

Read More: https://thehackernews.com/2026/02/pentagon-designates-anthropic-supply.html