US Expands AI Security Reviews with Google, Microsoft, xAI

  • May 5, 2026 at 3:20 PM ET

Key Takeaways

The US government has reached agreements with Google DeepMind, Microsoft, and xAI to review early versions of their AI models for national security risks before public release. This initiative builds on previous deals with OpenAI and Anthropic under the Biden administration.

  • The Center for AI Standards and Innovation (CAISI) will conduct these reviews as part of its mission to assess frontier AI capabilities.
  • CAISI has already completed over 40 evaluations, including unreleased models.
  • Anthropic's latest model, Mythos, identified thousands of vulnerabilities in operating systems and web browsers during testing.
  • Treasury Secretary Scott Bessent warned about the growing risks of AI-driven bank account hacks.

The US government has struck deals with Google DeepMind, Microsoft, and xAI to review early versions of their new artificial intelligence models before public release. The Center for AI Standards and Innovation (CAISI), part of the Department of Commerce, announced these agreements on Tuesday.

According to Reuters, CAISI will test AI tools from Google DeepMind, Microsoft, and xAI before they are released to the public. The initiative aims to identify national security risks tied to cybersecurity, biosecurity, and chemical weapons. As reported by The Guardian and BBC, CAISI director Chris Fall said: "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications." The expanded industry collaborations are described as crucial to scaling this work in the public interest.

This move builds on previous agreements with OpenAI and Anthropic under the Biden administration. According to The Guardian, CAISI has already completed more than 40 evaluations, including evaluations of unreleased models. UPI noted that these deals have been renegotiated to fit Trump administration directives.

Anthropic's latest AI model, Mythos, has been a focal point of recent discussions. As reported by The Conversation, Anthropic voluntarily postponed the release of Mythos over concerns about vulnerabilities the model had uncovered. During testing, Mythos found thousands of flaws in operating systems and web browsers; according to Fox News, it identified more than 2,000 previously unknown software vulnerabilities in just seven weeks.

Meanwhile, Treasury Secretary Scott Bessent warned about the growing risks of AI-driven bank account hacks. As reported by Fox News, Bessent said the US government is now involved and that AI companies are working with it to address these threats.

How this summary was created

This summary synthesizes reporting from 8 independent publishers using AI. All sources are cited and linked below. NewsBalance is a news aggregator and media literacy tool, not a news publisher. AI-generated content may contain errors or inaccuracies — always verify important information with the original sources.

Read our full methodology →

Read the original reporting ↓