Barry Diller defends Altman, but warns of the danger of AGI
Media mogul Barry Diller both defends Sam Altman and worries about AGI. In his view, personal trust in the OpenAI chief matters little once AGI is on the horizon.

Barry Diller defended Sam Altman, but with a caveat that sounds like a warning: trust in humans is irrelevant when AGI is on the horizon.
Defense and Alarm Simultaneously
The media mogul and IAC founder did not criticize OpenAI's CEO. Diller acknowledged Altman's competence, but added a hard caveat: as AGI approaches, leadership competence is not the solution. Trust in humans becomes irrelevant in the face of a technology that may be difficult or impossible to fully control.
Why Trust Is Irrelevant
AGI, or artificial general intelligence, is a hypothetical system that would surpass human intelligence across all domains. If AGI truly arrives, the motives, competence, and honesty of any individual would be insufficient protection: a system smarter than its creators could spiral out of control regardless of their intentions.
- Even a conscientious leader cannot guarantee AGI safety
- The problem lies in the very nature of AGI, not in the quality of management
- Technical constraints are required, built into the system itself
"Trust is irrelevant when it comes to AGI." That is the essence of Diller's position, though he does not deny Altman's professionalism.
Guardrails as the Only Path
Diller emphasizes the necessity of "guardrails": control and restriction mechanisms embedded in AGI at the level of architecture and algorithms. This is not a matter of management or company culture; it is a matter of technical safety. Such mechanisms must prevent the system from deviating from its intended goals regardless of who controls it, operating automatically in code rather than through human decisions.
What This Means
Diller's statement reflects a growing consensus in the AI community: preparing for AGI cannot rest on selecting good people for leadership positions, but on building technology that is safe by default. Protection from AGI is not a question of trust in Sam Altman or any other leader. It is a question of engineering.