In brief

  • Anthropic sued federal agencies after being labeled a national security “supply chain risk.”
  • The dispute stems from the company’s refusal to allow unrestricted military use of its AI.
  • The designation bars Pentagon contractors from doing business with the firm.

Anthropic has turned to the federal courts to fight a sweeping blacklist by the Donald Trump administration, claiming the government branded the AI startup a national security threat in retaliation for its refusal to relax safety protocols.

The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, challenges actions taken after President Trump directed federal agencies in February to stop using Anthropic’s technology. That directive followed public comments from Anthropic CEO Dario Amodei, who said the company would not comply with the Pentagon’s request for unrestricted access to Claude. The complaint names multiple federal agencies and senior officials as defendants, including Defense Secretary Pete Hegseth, Treasury Secretary Scott Bessent, and Secretary of State Marco Rubio.

“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” attorneys for Anthropic said in the lawsuit. “No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

The dispute began in January when Pentagon officials demanded AI contractors allow their systems to be used for “any lawful use,” including military applications. While Anthropic had by then already entered into a $200 million contract with the Department of Defense, it refused to remove two safeguards prohibiting the use of Claude for mass domestic surveillance of Americans or for fully autonomous lethal weapons systems.

“The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance,” attorneys for Anthropic stated in the lawsuit.

To some AI developers, including SingularityNET CEO Ben Goertzel, the designation is an odd fit: a “supply chain risk” label is typically reserved for software from adversaries that could contain hidden malware, viruses, or spyware.

“Anthropic not being willing to have their software used for autonomous killing or mass surveillance doesn't seem to pose a risk of that nature,” Goertzel told Decrypt. “That just means if you want to use software for autonomous killing or mass surveillance, then buy somebody else's software. So the logic of making it a supply chain risk eludes me.”

Goertzel said differences among leading AI models may limit the practical impact of the decision.

“In the end, Claude, ChatGPT, and Gemini are not that far off from each other,” he said. “As long as one of these top systems is being used by the U.S. government, it's all about the same thing. And the intelligence agencies, under the cloak of top secret clearance, would use the software however they wanted.”

Anthropic is asking the court to declare the government’s actions unlawful and block enforcement of the “supply chain risk” designation that prevents federal agencies and Pentagon contractors from doing business with the company.

“There is no valid justification for the Challenged Actions,” the lawsuit said. “The Court should declare them unlawful and enjoin Defendants from taking any steps to implement them.”

Anthropic did not immediately respond to requests for comment from Decrypt.

Even after the government designated Anthropic a risk to national security, Claude has been used in ongoing military operations, including by U.S. Central Command to help analyze intelligence and identify targets during strikes on Iran.

Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, said in a statement shared with Decrypt that the case raises concerns about constitutional protections when national security claims are used to justify government action.

“While the courts have been hesitant in the past to question the government’s claims of national security concerns, the circumstances of this case certainly highlight the real risk to the First Amendment rights of Americans if the underlying considerations of such claims are not thoroughly scrutinized,” she said.
