Will the Pentagon’s Anthropic fight drive startups away from defense work?

After just a week, negotiations over the Pentagon’s use of Anthropic’s Claude collapsed, the Trump administration designated Anthropic a supply chain risk, and the AI company said it would fight that designation in court.

Meanwhile, OpenAI quickly announced its own deal, sparking a backlash that prompted users to uninstall ChatGPT and push Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive resigned over concerns that the announcement was made hastily and without proper guardrails.

In the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups looking to work with the federal government, especially the Department of Defense. Kirsten wondered, “Will we see some change?”

Sean pointed out that this is an unusual situation in many ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And, crucially, this is a debate about “how their technology is or is not being used to kill people,” which naturally invites extra scrutiny.

Nonetheless, Kirsten argued that this is a situation that “should give all startups pause.”

Read a preview of the conversation below, edited for length and clarity.

Kirsten: I wonder if other startups are starting to look at what’s happened with the federal government, especially the Pentagon and Anthropic, and are debating, wrestling with, and pausing over whether to pursue (and accept) federal dollars. We might see a little change in tune.

Sean: I’m curious about that too. I think in the short term, to some extent, no. Because when you really think about the different companies that work with the government, whether they’re startups or Fortune 500 companies, especially with the Department of Defense, most of that work isn’t on anyone’s radar.

General Motors (GM) makes defense vehicles for the Army and has been at it for a long time, developing all-electric and autonomous versions of those vehicles. That kind of thing goes on all the time, but it never breaks into the zeitgeist. I think the problem OpenAI and Anthropic faced last week is that these are companies that make products a lot of people use. And, more importantly, products no one can shut up about.

So there’s a spotlight on them, which naturally highlights their involvement at a level that most other companies contracting with the federal government, especially with the war-fighting parts of the federal government, don’t have to deal with.

The only caveat I’d add is that the discussions between Anthropic, OpenAI, and the US Department of Defense are very specifically about how their technology may or may not be used to kill people, or used as part of missions to kill people. It’s not just our interest in them and our familiarity with their brands; there’s an additional element that I think stays more abstract when you think of General Motors as a defense contractor.

I don’t think we’ll see Applied Intuition or other companies that have structured themselves as dual-use take a big step back. They just don’t have a spotlight on them, and there isn’t a shared understanding of what the impact of their work is.

Anthony: This story is in many ways very unique and specific to these companies and these people. I mean, it raised a lot of really interesting questions, like: What is the role of tech in government? Of AI in government? And I think those are all good, worthwhile questions to ask and explore.

But I think this is a very strange lens through which to look at those things, because Anthropic and OpenAI aren’t actually all that different in the positions they’re taking. It’s not as if one company is saying, “We don’t want to work with the government,” and the other is saying, “Yes, we do.” Or one saying, “You can do whatever you want,” and the other saying, “No, we want to set limits.” Both are saying, at least publicly, “We want limits on how our AI is used.” It just seems like Anthropic has done more to hold the line: you cannot change the terms this way.

And on top of that, there seems to be a layer of personality conflict between Anthropic CEO Dario Amodei and Emil Michael, whom many TechCrunch readers may remember from his Uber days and who is now the chief technology officer of the Department of Defense. Apparently they don’t really like each other, according to reports.

Sean: Yes, there’s a very big “the girls are fighting” element here that we shouldn’t overlook.

Kirsten: Yeah, a little bit. But the stakes are a bit higher than that. Stepping back again, what we’re talking about here is a fight between the Pentagon and Anthropic, and Anthropic appears to have lost this round. Although it must be said that its technology is still heavily used by the military and is considered important. But OpenAI is now involved, the situation is evolving, and it will likely have changed by the time this episode comes out.

The backlash has been interesting for OpenAI: Claude downloads reportedly skyrocketed 295% after OpenAI signed its contract with the Department of Defense.

To me, all of this is noise around something really important and dangerous: the Department of Defense was trying to change the terms of an existing contract. That really matters, and it should give all of these startups pause, especially because the political machinations happening at the DoD right now seem different. This is not normal. Contracts take a long time to materialize at the government level, and the fact that officials are trying to change those terms is problematic.