Anthropic, an artificial intelligence company, filed lawsuits on Monday against the Trump administration after being designated a 'supply chain risk' by the Pentagon. The designation came after Anthropic refused to allow unrestricted military use of its AI technology, particularly in autonomous weapons and mass surveillance.
Key Takeaways
Anthropic argues that the designation violates its First and Fifth Amendment rights and is asking a court to declare it unlawful.
- Anthropic was labeled a supply chain risk for refusing to allow Claude's use in autonomous weapons or mass surveillance
- The lawsuit claims the Pentagon violated the company's First Amendment free speech rights and deprived it of due process under the Fifth Amendment
- More than 30 OpenAI and Google DeepMind employees filed a statement supporting Anthropic's lawsuit
The 48-page complaint, filed in federal court in San Francisco, seeks to have the designation declared unlawful and blocked. According to Anthropic, the company was founded on the principle that its AI should be used to maximize positive outcomes for humanity and should be as safe and responsible as possible.
The Pentagon's decision blocks Anthropic from working with defense vendors and contractors, potentially destroying economic value created by one of the world's fastest-growing private companies. The suit names more than a dozen federal agencies and cabinet officials as defendants.
Anthropic argues that the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment. The company also filed a separate lawsuit in the U.S. Court of Appeals for the D.C. Circuit.
Anthropic has sought to convince businesses and other government agencies that the Trump administration's penalty is narrow and only affects military contractors using its AI chatbot Claude for work with the Department of Defense. The company projects $14 billion in revenue this year, with more than 500 customers paying at least $1 million annually.
More than 30 OpenAI and Google DeepMind employees filed a statement on Monday supporting Anthropic's lawsuit against the U.S. Defense Department. The brief, signed by Google DeepMind chief scientist Jeff Dean, states that 'The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.'
The amicus brief in support of Anthropic appeared on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies.
How this summary was created
This summary synthesizes reporting from 18 independent publishers using AI. All sources are cited and linked below. NewsBalance is a news aggregator and media literacy tool, not a news publisher. AI-generated content may contain errors or inaccuracies — always verify important information with the original sources.
