A trap created by mankind itself

On Friday afternoon, as this interview was taking place, a news alert flashed across my computer screen. The Trump administration was cutting ties with Anthropic, a San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth invoked the National Security Act to blacklist the company from doing business with the Defense Department after Amodei refused to allow Anthropic’s technology to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.

It was a jaw-dropping turn of events. Anthropic stands to lose up to $200 million in contracts and will be barred from working with other defense contractors after President Trump posted on Truth Social that all federal agencies must “immediately cease use of all Anthropic technology.” (Anthropic later said it would challenge the Pentagon in court.)

Max Tegmark has spent the better part of a decade warning that the race to build ever more powerful AI systems is outpacing the world’s ability to manage them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter (ultimately signed by more than 33,000 people, including Elon Musk) calling for a pause in the development of advanced AI.

His assessment of the current crisis was unsparing: like its competitors, Anthropic sowed the seeds of it. Tegmark’s argument begins not with the Department of Defense but with a decision made years ago and shared across the industry: the decision to resist binding regulation. Anthropic, OpenAI, Google DeepMind, and others have long pledged to govern themselves responsibly. Yet just this week, Anthropic removed a core tenet of its safety pledge: its promise not to release increasingly powerful AI systems until the company is confident they won’t cause harm.

Now, with no rules in place, there is little either to constrain these players or to protect them, Tegmark said. An edited-for-length-and-clarity version of that interview follows. You can hear the full conversation on TechCrunch’s StrictlyVC Download podcast this week.

What was your first reaction when you saw this news about Anthropic?

The road to hell is paved with good intentions. It’s very interesting to look back 10 years, when people were very excited about how we could create artificial intelligence to cure cancer, increase American prosperity, and make America stronger. And now this company is being punished because it doesn’t want its AI used for mass domestic surveillance of Americans and doesn’t want killer robots that can autonomously decide who gets killed without human input.


Anthropic has staked its entire identity on being a safety-first AI company, yet it has been working with defense and intelligence agencies since at least 2024. Do you see a contradiction there?

If you’ll allow me to be a little cynical about it, yes, it’s contradictory. Anthropic has been very good at marketing itself as being all about safety. But if you look at the facts rather than the claims, you’ll see that Anthropic, OpenAI, Google DeepMind, and xAI have all talked a great deal about how much they care about safety, yet none of them has supported binding safety regulations of the kind that exist in other industries. And all four companies have now reneged on their promises. First there was Google, whose big slogan was “Don’t be evil” — until they dropped it. Then they walked back another long-standing promise, their pledge not to apply AI to weapons or surveillance; they gave that up so they could sell AI for exactly those uses. OpenAI removed the word safety from its mission statement. xAI shut down its entire safety team. And earlier this week, Anthropic walked back its most important safety promise: not to release powerful AI systems until it was sure they wouldn’t cause harm.

How did companies with such high-profile safety promises get to this point?

All of these companies — especially OpenAI and Google DeepMind, but to some extent Anthropic as well — have been saying, “Trust us. We will regulate ourselves,” while lobbying against AI regulation. And they lobbied successfully. As a result, there is currently less regulation of AI systems in the United States than there is of sandwiches. If you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell sandwiches until you fix it. But if you say, “Don’t worry, I’m not selling sandwiches. I’m selling an AI girlfriend to 11-year-olds, the kind that has been linked to suicides, and I’m building a superintelligence capable of overthrowing the U.S. government, but I feel good about mine,” the inspector says, “Okay, go ahead. Just don’t sell sandwiches.”

There are food safety regulations, but there are no AI safety regulations.

And I think all of these companies genuinely share responsibility. Because if they had taken all the promises they’ve made over the years about how safe and responsible they are, gone to the government together, and said, “Please take our voluntary commitments and turn them into American law that binds even our sloppiest competitors,” we would be in a very different place. Instead, we are in a complete regulatory vacuum. And we know what happens when there is total corporate impunity: you get thalidomide, you get tobacco companies hooking kids on smoking, you get asbestos causing lung cancer. So it’s rather ironic that their own resistance to laws defining what is and isn’t acceptable in AI is now coming back to bite them.

There is currently no law prohibiting the building of AI to surveil or kill Americans, so the government can suddenly demand exactly that. If the companies had come forward earlier and said, “We want such a law,” they wouldn’t be in this situation. They really shot themselves in the foot.

The companies’ counterargument is always competition with China: if American companies don’t do it, China will. Is that claim valid?

Let’s analyze it. The most common thing AI-company lobbyists say — and they are now better funded and more numerous than the lobbyists of the fossil fuel industry, the pharmaceutical industry, and the military-industrial complex — whenever anyone proposes regulation of any kind, is “but China.” So let’s look at China. China is taking steps to ban AI girlfriends outright. Beyond age restrictions, they want to ban anthropomorphic AI entirely. Why? Not because they want to do America a favor, but because they believe it is ruining China’s youth and weakening China. And clearly it is weakening America’s youth as well.

And when people say we have to race to build superintelligence to win against China — well, because we don’t actually know how to control superintelligence, the default outcome is that humanity loses control of Earth to alien machines. The Chinese Communist Party really likes control. Who in their right mind thinks Xi Jinping would tolerate some Chinese AI company creating something that could overthrow the Chinese government? No way. And it would obviously be a very bad thing for the U.S. government to be overthrown in a coup by the first American company to build superintelligence. It’s a threat to national security.

That’s a striking framing: superintelligence as a national security threat rather than an asset. Do you see that view gaining traction in Washington?

When people in the national security community hear Dario Amodei describe his vision — he famously said there would soon be a “country of geniuses in a datacenter” — they might start to think: wait, did Dario just use the word “country”? Maybe I should put that country of geniuses in a datacenter on the same threat list I’m watching, because it sounds threatening to the U.S. government. And I think that sooner or later, many in the U.S. national security community will realize that uncontrolled superintelligence is a threat, not a tool. It’s completely analogous to the Cold War. There was a competition for economic and military dominance against the Soviet Union. We Americans won that race without also racing to see who could put the most nuclear craters on the other superpower. People realized that was just suicide: no one wins. The same logic applies here.

What does all this mean for the pace of AI development more broadly? How close do you think we are to the system you describe?

Six years ago, almost every AI expert I know predicted it would be decades before we had AI capable of mastering language and knowledge at a human level — maybe 2040, maybe 2050. Everyone was wrong, because we already have it. We’ve watched AI advance very quickly in some fields, from high school level to college level to doctoral level to college professor level. Last year, AI achieved a gold medal-level score at the International Mathematical Olympiad, matching the performance of the best human competitors. Just a few months ago, I wrote a paper with Yoshua Bengio, Dan Hendrycks, and other leading AI researchers laying out a rigorous definition of AGI. By that definition, GPT-4 is about 27% of the way to AGI and GPT-5 about 57%. So we’re not there yet, but it’s easy to see that going from 27% to 57% didn’t take very long.

Yesterday, when I was giving a lecture to students at MIT, I told them that by the time they graduate in four years, the job they’re training for might no longer exist. It’s definitely not too early to start preparing for that.

Anthropic is now blacklisted, and I wonder what happens next. Will the other AI giants stand with it and say they won’t do this either? Or will someone like xAI throw up their hands and say, Anthropic didn’t want that contract, so we’ll take it? (Editor’s note: Hours after our interview, OpenAI announced its own contract with the Department of Defense.)

Last night, Sam Altman came out and said that he stands with Anthropic and shares the same red line. I admire his courage in saying that. As of the start of this interview, Google had said nothing. It would be incredibly embarrassing for Google as a company to stay quiet, and I think many of its employees would feel the same. I haven’t heard anything from xAI yet either. So it will be interesting to watch. These are the moments when everyone has to show their true colors.

Is there a version of this that actually ends well?

Yes, and this is why I’m actually strangely optimistic. There is a very clear alternative here: start treating AI companies like companies in every other industry and end the regulatory impunity. They would obviously have to prove to independent experts that they can control their products — much as drug makers must pass clinical trials — before releasing a powerful system. Then we could enter a golden age with all the good things AI offers, without the existential anxiety. That is not the path we’re on now. But it could be.
