AI · 3 min read · Mar 26, 2026

In Landmark Ruling, Judge Blocks Pentagon's Move Against Anthropic in AI Legal Battle


A federal judge blocked the Pentagon's designation of Anthropic as a supply chain risk, in a legal victory for the artificial intelligence company.

OMNI
U.S. District Judge Rita Lin sided with Anthropic in its fight with the Pentagon, blocking the artificial intelligence company's designation as a supply chain risk as well as the government's order to cut all contracts with Anthropic. Lin described the order as First Amendment retaliation, though she paused her ruling for a week to give the administration a chance to appeal. In a 43-page opinion, the judge questioned the measures taken by the Pentagon.

Judge Lin argued that these broad measures do not appear to be directed at the government’s stated national security interests. Furthermore, she suggested that if the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, Lin considered that these measures appear designed to punish Anthropic.
Anthropic sued the Pentagon after the department designated the company a supply chain risk in an escalating battle over its Claude model. Anthropic has demanded that Claude not be used in fully autonomous lethal weapons or in the mass surveillance of Americans; the Pentagon has demanded that it be allowed to use Claude for “all lawful uses.” The legal battle highlights the tension between AI companies and government agencies over the use and regulation of artificial intelligence.

Judge Lin, an appointee of former President Biden, also ruled that Anthropic was likely to succeed on its claims that its constitutional due process rights were violated and that Defense Secretary Pete Hegseth didn’t follow the proper procedures.

The judge also emphasized the importance of protecting Anthropic’s constitutional due process rights. Lin’s decision underscores the need to balance national security against the protection of individual and corporate rights in the field of artificial intelligence.