Anthropic sues Trump admin over Pentagon blacklisting
Photo: Brendan SMIALOWSKI - AFP

Anthropic filed suit Monday against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.

In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.

In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible."

"Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.

Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei.

The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.

"The consequences of this case are enormous," the lawsuit states, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies."

The suit names more than a dozen federal agencies and cabinet officials as defendants.

The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.

President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology.

Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security" and ordered that no military contractor, supplier or partner "may conduct any commercial activity with Anthropic," while allowing a six-month transition period for the Pentagon itself.

The row erupted days before the US military strike on Iran. Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems.

- Arbitrary? -

In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment.

"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states.

More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday.

Saying they were expressing their opinions as professionals who build, train or study AI and did not represent their companies, they urged the court to side with Anthropic.

"We are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," they said in the brief.

Current AI models are not reliable enough to be trusted with making lethal targeting decisions, and combining powerful AI with the vast data available about people threatens to change the fabric of public life in this country, the filing argued.

"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," the brief contended.

Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race.

P.Serra--PC