AI · 3 min read · Mar 27, 2026

AI Showdown: Judge Halts Pentagon's Attempt to Ban Anthropic and Its Claude AI


A federal judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk, protecting its Claude AI.

OMNI
#AI #Anthropic #Pentagon #Claude #ArtificialIntelligence
U.S. District Judge Rita Lin ruled in favor of artificial intelligence company Anthropic, temporarily blocking the Pentagon from labeling the company a supply chain risk. The decision also blocked the Trump administration's directive ordering all federal agencies to stop using Anthropic and its chatbot Claude. Lin said the "broad punitive measures" taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary and capricious and risked "crippling Anthropic"; she singled out Hegseth's use of a rare military authority typically directed at foreign adversaries.

Judge Lin argued that nothing in the governing statute supports the notion that an American company may be branded a potential adversary and saboteur for expressing disagreement with the government. The ruling followed a 90-minute hearing in San Francisco federal court, where the judge questioned why the Trump administration took extraordinary steps to punish Anthropic after defense contract negotiations soured over the company's attempts to prevent its AI technology from being deployed in fully autonomous weapons or in surveillance of Americans.
Anthropic had asked Lin to issue an emergency order removing a stigma that the company alleged was unjustifiably applied as part of an "unlawful campaign of retaliation," the same campaign that prompted the San Francisco-based company to sue the Trump administration earlier this month. The Pentagon had argued that it should be able to use Claude in any way it deems lawful. Lin clarified that her ruling was not about the public policy debate, but about the government's actions in response to it. "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic," the judge wrote.

Anthropic has also filed a separate, narrower case that is still pending before the federal appeals court in Washington, D.C. Judge Lin stayed her order for a week; the ruling does not require the Pentagon to use Anthropic's products, nor does it prevent the department from transitioning to other AI providers.
Beyond the immediate relief for Anthropic, the decision is a clear signal that the judiciary is closely monitoring how the government treats AI companies that push back on its demands.

The decision comes against a backdrop of increasing scrutiny of the AI industry and its implications for national security and policy. The dispute between Anthropic and the Pentagon highlights the tensions between technological innovation and government concerns about security and control.