Trump administration prepares federal oversight of AI model development
The Trump administration is considering an ambitious executive order that would establish federal oversight of the development of new AI models.

The Trump administration is considering an ambitious executive order that would establish federal oversight of the development of new AI models. According to recent reports, the initiative could fundamentally reshape the American approach to regulating advanced technology and extend federal control over one of the fastest-growing industries.
A Shift in Position
Until recently, the Trump administration criticized strict AI regulation, arguing it would slow American innovation and erode the country's technological edge over competitors. The position was clear: minimal government intervention, free markets, and competition were the path to leadership. In recent months, however, the situation has changed. Pressure from Congress, growing concerns about the safety of AI systems (from disinformation to potential military use), and intensifying competition with China are pushing the administration to reconsider. The executive order could mark the beginning of a more coordinated federal policy on advanced AI models capable of affecting national security, critical infrastructure, and the economy as a whole.
Oversight Mechanism
The proposed federal oversight could include a number of key elements aimed at balancing control and innovation:
- Mandatory registration of new large AI models that cross a defined threshold of computational power and capability
- Requirements for safety, pre-deployment testing, and documentation of potential risks before broad rollout
- Coordination among federal agencies (NIST, NSF, DoD) to unify their approach
- Monitoring of potential risks to national security and critical infrastructure
It remains unclear whether the oversight would follow an "approval" model (as in pharmaceuticals) or a softer "notify and disclose" regime. A middle ground seems most likely: a mandatory registration procedure that stops short of blocking development outright.
Balancing Innovation and Control
The administration faces a difficult trade-off: it must effectively oversee powerful systems that may pose real risks, without imposing bureaucracy heavy enough to slow development and weaken the competitive position of the U.S. in the global market. The European AI Act has shown how overly strict regulation can slow innovation and drive investment elsewhere; European companies lag behind American and Chinese leaders in part because they faced a stricter regulatory framework earlier than others. The American approach could be more flexible, targeting high-level risks rather than procedural minutiae.
What This Means
If the executive order is signed, it will mark a decisive shift from a hands-off policy to a more active federal role in AI regulation. For startups and major research labs, it means adapting development processes to comply with new requirements, possibly including additional testing and documentation procedures. But it could also bring the long-awaited regulatory certainty that investors and corporations planning long-term AI bets have been waiting for. Capital is harder to attract amid uncertainty; clear rules, even demanding ones, are often preferable to no structure at all.