
Pentagon ban on Anthropic faces judge as Claude AI maker seeks injunction

The Defense Department designated Anthropic as a risk to U.S. national security, marking a first for an American company.

By editorial staff

Summary

The U.S. Department of Defense has designated Anthropic as a risk to national security, a first for an American AI company. This classification raises questions about the operational viability of AI firms under government scrutiny.

The designation could affect Anthropic's capacity to operate and innovate in the U.S. market, and could introduce regulatory hurdles for its AI development pipeline.

As the situation unfolds, stakeholders across the AI sector will need to assess the broader impact on infrastructure and compliance obligations, particularly around national security reviews and operating protocols.

Updates

Update at 01:25 UTC on 2026-03-25

AI News reported that Federal News Network is covering the scrutiny the Pentagon faces over its classification of Anthropic.

Sources: AI News

Update at 23:19 UTC on 2026-03-26

AI News reported that a federal judge has ruled in favor of Anthropic in its legal battle against the Pentagon, a significant moment in the ongoing debate over AI regulation.

Sources: AI News

Update at 23:44 UTC on 2026-03-26

Financial Times reported that the judge said the classification of the group as a ‘supply chain risk’ was not aligned with stated US national security interests.

Sources: Financial Times

Update at 23:47 UTC on 2026-03-26

AI News reported that a federal judge has temporarily halted the Pentagon's classification of Anthropic as a supply chain risk, a decision affecting national security assessments.

Sources: AI News

Update at 00:04 UTC on 2026-03-27

AI News reported that a federal judge has issued a temporary injunction preventing the Pentagon from labeling Anthropic a supply chain risk.

Sources: AI News

Update at 00:14 UTC on 2026-03-27

AI News reported that a recent ruling prevents the Pentagon from designating Anthropic as a threat to supply chain security.

Sources: AI News

Update at 00:17 UTC on 2026-03-27

NPR News reported that a federal judge halted the Pentagon's labeling of Anthropic as a supply chain risk, citing First Amendment concerns.

Sources: NPR News

Update at 00:33 UTC on 2026-03-27

The Verge reported that a judge granted Anthropic a preliminary injunction in its lawsuit against the Pentagon's blacklisting.

Sources: The Verge

Update at 00:59 UTC on 2026-03-27

Politico reported that a federal judge has temporarily halted the Pentagon's disciplinary actions against the AI company Anthropic.

Sources: Politico

Update at 02:25 UTC on 2026-03-27

Le Monde reported that a federal judge in California granted a preliminary injunction halting a presidential order that restricted federal agencies from using Anthropic's technology.

Sources: Le Monde

Update at 02:28 UTC on 2026-03-27

France 24 reported that a federal judge's ruling suggests the sanctions imposed on Anthropic may have violated legal standards.

Sources: France 24

Update at 17:59 UTC on 2026-03-27

AI News reported that a federal judge has ruled against the ban on Anthropic's AI models, criticizing the security-risk label as Orwellian.

Sources: AI News

Update at 17:59 UTC on 2026-03-27

AI News reported that a federal judge has ruled against the Trump administration's efforts to restrict Anthropic's federal contracts.

Sources: AI News