AI Architects Alarmed: Where the Cracks in the Industry's Foundation Lie

At the Milken Global Conference in Beverly Hills, five architects of the AI economy convened: people who view the industry from every angle, from processor design to model deployment. Together, they acknowledged that the AI industry faces a set of systemic problems that may demand decisive changes to its foundation.
Who Sat at the Table
The panel brought together experts from every critical level of the supply chain: chip manufacturing, software development, cloud infrastructure management, and academic researchers engaged in fundamental AI work. That diversity is what made the conversation so frank: each participant could point to the bottlenecks visible from their own level.
Chips: A Bottomless Shortage
The first and most obvious problem is the shortage of advanced processors. Despite the efforts of NVIDIA, Intel, Samsung, and other manufacturers, demand for high-performance GPUs still exceeds supply many times over. Companies that want to launch their own AI projects cannot, for lack of access to the chips they need. Participants stressed that this is not merely a manufacturing problem but a geopolitical and technological trap: export controls restrict the distribution of equipment, chip prices remain prohibitively high, and delivery times are measured in months.
Energy: Data Centers at the Limit
The second critical problem is electricity. Training large language models consumes staggering amounts of energy: megawatts of power running around the clock. Traditional data centers are approaching the physical limits of cooling, and electrical grids in many regions were never designed for such loads. Hence the idea of orbital data centers, which participants raised: deploy the servers in space, where, they argued, the cooling and energy constraints ease, since solar panels operate more efficiently there and energy costs fall.
- Cooling demands for overheating servers are approaching gigawatt scale
- Local electrical grids are not ready for the load
- Traditional cooling solutions are no longer efficient enough
Architecture in Question
But the most alarming note was the explicit acknowledgment that the current AI paradigm may be flawed, or simply exhausted. The industry has grown in recent years through scaling: more parameters in models, more data, more computing power. That worked, but panelists noted that returns on scaling are already diminishing. A fundamentally different architecture may be required: not just larger transformers, but something new at the foundations.
What This Means
When five leading architects of the AI ecosystem simultaneously voice concerns about chip shortages, energy, and fundamental flaws in the approach, that is a signal. The era when one could scale indefinitely and count on exponential progress is ending; an era of rethinking lies ahead. The industry must outgrow its adolescence and become more systematic and efficient.