President Donald Trump on Friday directed all U.S. federal agencies to phase out the use of artificial intelligence technology developed by Anthropic, escalating a public clash over AI safety and national security.
The order follows a standoff between the Pentagon and Anthropic, whose CEO, Dario Amodei, refused to give the military unrestricted access to the company’s AI systems, citing concerns that they could be misused for mass domestic surveillance or fully autonomous weapons. Trump, Defense Secretary Pete Hegseth, and other officials criticized Anthropic on social media, labeling the company a potential “supply chain risk” and accusing it of jeopardizing military operations.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote. He gave the Pentagon six months to phase out Anthropic’s AI from existing military platforms.
Anthropic, founded by former OpenAI leaders in 2021, said the government’s actions were unprecedented and legally questionable. “No amount of intimidation or punishment… will change our position on mass domestic surveillance or fully autonomous weapons,” the company said, adding that it plans to challenge the supply chain risk designation in court.
The dispute highlights broader tensions over AI’s role in national security. While Anthropic seeks to enforce ethical safeguards on its AI, the Pentagon demands unrestricted access to deploy the technology in lawful defense operations. The confrontation comes amid increasing government interest in running advanced AI on classified military networks.
Hours after Trump’s order, OpenAI announced a deal to supply its AI to the Pentagon under agreements that enshrine similar safety restrictions. CEO Sam Altman emphasized prohibitions on domestic mass surveillance and guarantees of human oversight over lethal force, pointing to a potential pathway for balancing AI deployment with ethical safeguards.
The controversy has drawn attention across Silicon Valley, with AI experts, venture capitalists, and executives weighing in. Elon Musk expressed support for Trump’s stance, while Altman voiced solidarity with Anthropic’s safety principles. Retired Air Force Gen. Jack Shanahan warned that escalating conflicts over AI access risk undermining national security, noting that large language models like Anthropic’s Claude are not yet reliable enough for critical military applications.
As the standoff continues, Anthropic faces potential disruptions to government contracts, while the debate underscores the challenges of integrating advanced AI into sensitive defense operations without compromising ethical or legal standards.