
Billionaire media mogul Barry Diller believes OpenAI CEO Sam Altman can be trusted, despite reports suggesting otherwise. On stage at the Wall Street Journal’s ‘Future of Everything’ conference this week, Diller vouched for the AI executive, who has at times been accused by former colleagues and board members of being manipulative and deceptive.
Diller, who is close to Altman, was asked whether people should trust Altman to steer artificial intelligence toward benefiting humanity.
In particular, he was asked about a theoretical form of AI known as artificial general intelligence (AGI) that could one day surpass humans at any task.
The media executive, a co-founder of Fox Broadcasting and chairman of IAC and Expedia Group, said he believes Altman is sincere in his pursuits, but that trust isn’t really what people should focus on. The real concern, he argued, is the unknown consequences that will arise from AI.
“One of the biggest problems with AI is over trust,” Diller said. “Trust may not be important because the things that happen are amazing to the people who make them happen. And I’ve spent a lot of time with a lot of different people who were in the creation mode of AI and they were amazed themselves. So…it’s the great unknown. We don’t know. They don’t know,” he explained.
“We’re embarking on something that’s going to change almost everything. That’s not an understatement. Now, I don’t really care whether these massive investments will be made or not. We’re not investing, but we’re going to make progress,” Diller added.
Nonetheless, the media mogul said he believes Altman is a person of integrity and “a decent person with good values,” and that most of those leading the charge are good stewards. (Diller didn’t say which of the AI leaders he considers insincere. We’re left to wonder.)
“But the problem isn’t their management. The problem is…it’s really dealing with the unknown. They don’t know what could happen once they get AGI, and we’re getting closer to that. We’re not there yet, but we’re getting closer, and it’s getting faster and faster. And we have to think about guardrails,” Diller said.
Furthermore, he warned that if humans don’t think about guardrails, the alternative is “other forces, namely AGI forces, will do it themselves.” And once that happens, once you let it go, there’s no going back, Diller said.