Microsoft has thrown its support behind artificial intelligence company Anthropic in a legal battle against the Trump administration after the Pentagon blacklisted the firm. The dispute stems from Anthropic's refusal to allow its Claude AI model to be used for autonomous lethal warfare and mass surveillance of Americans, leading to a months-long standoff with the Pentagon.
Key Takeaways
- Pentagon blacklisted Anthropic after it refused to allow its AI model Claude to be used for autonomous lethal warfare and mass surveillance of Americans
- Microsoft warns Pentagon's action could disrupt U.S. military operations and undermine national AI leadership
- Anthropic sued federal government, calling designation unlawful and seeking to block it
- More than three dozen AI industry insiders from OpenAI and Google support Anthropic in amicus brief
- Pentagon Chief Technology Officer Emil Michael rules out negotiations with Anthropic
The Pentagon retaliated by designating Anthropic a national security supply-chain risk, a label typically reserved for firms from foreign adversary countries, such as China's Huawei. In court filings, Microsoft warned that the designation could disrupt U.S. warfighters and imperil the country's drive to lead in AI. The company argued that the blacklisting was an unprecedented response to a contract dispute and could disrupt the American military's ongoing use of advanced AI.
Anthropic filed suit in federal court seeking to have the designation declared unlawful and blocked, according to enca.com. More than three dozen AI industry insiders from OpenAI and Google also backed Anthropic in an amicus brief filed with the court. Writing as professionals who build, train, or study AI, they urged the court to side with the company.
The row erupted days before U.S. military strikes on Iran, operations in which warfighters used advanced AI tools, according to aljazeera.com. The Pentagon confirmed using various AI tools to help process data and make decisions faster than the enemy can react. However, concerns have been raised over the use of AI in war, particularly after Israel's heavy reliance on AI in its war in Gaza.
Pentagon Chief Technology Officer Emil Michael ruled out negotiations with Anthropic after labeling it a supply-chain risk, according to reuters.com. 'There's no chance,' Michael said. He accused Anthropic of negotiating in bad faith and leaking information, saying its leadership does not want to reach an agreement.
In its complaint, Anthropic explained the limits and hesitations behind its own technology. The company stated it did not have confidence that Claude would function reliably or safely if used to support lethal autonomous warfare according to theguardian.com. These usage restrictions are rooted in Anthropic's unique understanding of Claude’s risks and limitations.
How this summary was created
This summary synthesizes reporting from 11 independent publishers using AI. All sources are cited and linked below. NewsBalance is a news aggregator and media literacy tool, not a news publisher. AI-generated content may contain errors or inaccuracies — always verify important information with the original sources.
