Anthropic Told DoW Secretary Hegseth to Pound Sand. Now It's Being Treated Like Huawei.
The Department of War designated Anthropic — an American AI company — a "supply chain risk to national security," a label previously reserved for foreign adversaries like Huawei and ZTE. The reason? Anthropic won't remove two contract provisions: no mass domestic surveillance, and no autonomous weapons without human oversight. Anthropic says the designation is legally baseless and will challenge it in court.
Disclosure: I use Claude, Anthropic's AI product, in my own legal practice and workflow, including some editing of this article. I use it because it actually works pretty damn well, and Claude Code is kind of wildly fun to use. Now that's out of the way!
The federal government just blacklisted one of America's most valuable companies because it wouldn't delete two lines from its terms of service.
On Friday, President Trump ordered every federal agency to stop using Anthropic's technology, calling the company "RADICAL LEFT" and "WOKE" on Truth Social. Defense Secretary Pete Hegseth then designated Anthropic a "supply chain risk to national security" — meaning every military contractor in the country must now certify they don't use Claude in any Pentagon-related work. Hegseth accused Anthropic of "corporate virtue-signaling that places Silicon Valley ideology above American lives" and said the Department of War would transition to "a better and more patriotic service."
Anthropic's response was measured and direct: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court."
They also noted, pointedly, that they learned about the designation from Hegseth's tweet. "We have not yet received direct communication from the Department of War or the White House on the status of our negotiations."
What Anthropic Actually Wants
The dispute is over two provisions in Anthropic's contract with the Department of War. Claude can be used for virtually any military purpose — intelligence analysis, cyber operations, operational planning — and it was reportedly used in last month's Maduro raid. Anthropic draws two lines: no mass domestic surveillance of American citizens, and no fully autonomous weapons that select and engage targets without a human in the loop. These terms were in the original contract. The Pentagon agreed to them when it awarded Anthropic a $200 million deal last July.
Why does Anthropic care about autonomous weapons? According to Fox News' Jennifer Griffin, reporting from the Pentagon meeting between Hegseth and Amodei, Anthropic's concern is concrete: the company doesn't know how its AI models will behave in autonomous kill-chain scenarios. Soldiers could lose control of the system. It could start engaging targets on its own. According to Amodei's statement, Anthropic offered to do joint R&D with the Pentagon to improve reliability for autonomous systems. The Pentagon declined.
On mass surveillance, CEO Dario Amodei pointed out that the government can already purchase detailed records of Americans' movements, web browsing, and associations from commercial data brokers without a warrant — a practice the Intelligence Community itself has acknowledged raises serious privacy concerns. AI supercharges that capability. As Amodei wrote: "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI."
The Supply Chain Risk Problem
The government's biggest legal vulnerability isn't the contract cancellation — it's the supply chain risk designation. Under 10 U.S.C. § 3252, that label can only be applied when necessary to prevent adversary sabotage or subversion of national security systems. It has been used against Huawei, ZTE, DJI, and Kaspersky — companies with documented ties to foreign governments. It has never been applied to an American company.
University of Minnesota law professor Alan Rozenshtein, who wrote the leading legal analysis of the Defense Production Act (DPA) questions for Lawfare, called the designation "very bad and a serious escalation." Georgetown's Center for Security and Emerging Technology researcher Lauren Kahn told CNBC: "I'm really, truly, honestly worried that private companies will say, 'It's not worth our time to work with the defense sector moving forward.'" Even OpenAI CEO Sam Altman — not exactly Anthropic's best friend — said publicly that he shares the same red lines and doesn't think the Pentagon should be threatening to use the DPA against AI companies.
The Center for American Progress flagged a logical problem that Amodei himself identified: the government's two threats are inherently contradictory. Either Anthropic is a security risk that should be expelled from government systems, or its technology is so essential that the DPA should be invoked to commandeer it. It cannot be both.
What Happens Next
Anthropic isn't guessing about litigation — they've announced it. And their response statement is already doing the legal work.
Start with the scope. Hegseth declared that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Anthropic's lawyers fired back within hours: under § 3252, a supply chain risk designation only reaches Claude's use on Department of War contracts — Hegseth doesn't have the statutory authority to ban all commercial activity with the company. He claimed power the statute doesn't give him, and Anthropic called it in real time.
Then there's the designation itself. Section 3252 requires that it reduce the risk of adversary sabotage or subversion. Nothing in Hegseth's statement even gestures at that standard. His rationale is explicitly retaliatory: Anthropic wouldn't agree to his terms, so he's punishing them. That's not what the statute is for.
Anthropic will seek a temporary restraining order and preliminary injunction. They need to show likelihood of success on the merits (strong, given the statutory mismatch), irreparable harm (a supply chain risk label threatening a $380 billion company's enterprise relationships and planned IPO is textbook irreparable), and that the balance of equities favors relief.
They'll probably get it. A designation designed for foreign adversary telecom companies, applied to a domestic AI company as admitted retaliation for a contract dispute, with no evidence of adversary exploitation risk, will not survive preliminary judicial review. Hegseth's own words will be Exhibit A. The fact that Anthropic learned about the designation from a tweet rather than through any formal administrative process will be Exhibit B.
Legalish is supported by Lynch LLP — Trademark · Copyright · Patents