Google, Microsoft and xAI to let the US review AI models before release

Google DeepMind, Microsoft, and xAI have agreed to let the US government review new AI models before their public release. Pre-release government review is thus shifting from an experimental program to a standard step in the development cycle for the largest companies.
How the verification works
The Center for AI Standards and Innovation (CAISI) at the US Department of Commerce conducts preliminary assessments of new AI models well before their public release. The center analyzes a system's capabilities in coding, mathematics, and scientific tasks, identifies potential risks, and works with companies to improve safety before a model goes public.

The program began operating in 2024 with the participation of OpenAI and Anthropic, and 40 evaluations have been completed over its first year and a half. Both companies recently renegotiated their partnerships to reflect the new priorities of the Trump administration. With Google DeepMind, Microsoft, and xAI joining under the new terms, practically every major developer of advanced AI models now coordinates its releases with the government.
Why the government controls AI
The US administration sets several tasks for CAISI: understand what advanced AI systems are truly capable of, identify potential risks before market launch, and control the export of AI technologies in the interest of national security. CAISI conducts targeted research alongside the companies, checking models for release readiness. This includes analyzing how a system handles potentially dangerous scenarios and what new capabilities it has gained compared to the previous generation. The administration's priorities for AI development:
- Ensuring that systems remain controllable
- Complete understanding of advanced models' capabilities
- Transparency before regulators
- Protection of national security
- Control over technology exports
What this means for the industry
With the largest companies agreeing to pre-release government review, shipping a new advanced model in the US without coordinating with CAISI is effectively no longer an option. For the companies, this adds several weeks to the development cycle before release. For the government, it provides levers of influence over the direction of development. For users, it potentially means higher safety standards in the models they use every day.
What this means
The era of complete freedom for large AI companies in deciding when and how to release models is coming to an end. The state is taking a more active role in overseeing technologies that affect millions of people. This may accelerate the development of safety standards, but it could also slow the pace of innovation.