Five architects of the AI economy explain where the wheels fall off.

Earlier this week, five people involved in every layer of the AI supply chain sat down with TechCrunch at the Milken Global Conference in Beverly Hills to talk about everything from chip shortages to orbital data centers to the possibility that the entire architecture underpinning the technology is flawed.

On stage with TechCrunch: Christophe Fouquet, CEO of ASML, the Dutch company with a monopoly on the extreme ultraviolet lithography machines without which no cutting-edge chips can be made; Francis deSouza, COO of Google Cloud, who is overseeing one of the largest infrastructure investments in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and moved into defense; Dimitry Shevelenko, chief business officer of the AI search and agent company Perplexity; and Eve Bodnia, a quantum physicist who left academia to challenge, at her startup Logical Intelligence, the basic architecture that most of the AI industry takes for granted. (Yann LeCun, Meta’s former chief AI scientist, signed on as founding chair of Logical Intelligence’s technical research committee earlier this year.)

Here’s what the five had to say:

Bottlenecks are real

The AI boom is hitting physical limits, and the constraints start lower in the stack than many people realize. Fouquet described a “massive acceleration of chip manufacturing” and his “strong belief” that “the market will be supply-constrained for the next two, three, maybe five years.” In practice, that means the hyperscalers (Google, Microsoft, Amazon, Meta) won’t get all the chips they are paying for anytime soon.

DeSouza emphasized just how big this is and how fast it’s growing, reminding the audience that Google Cloud’s revenue grew 63% last quarter, surpassing $20 billion, and its backlog (committed but not yet delivered revenue) nearly doubled from $250 billion to $460 billion in one quarter. “The demand is real,” he said with impressive calm.

For Younis, the constraints come primarily from elsewhere. Applied Intuition builds autonomous systems for cars, trucks, drones, mining equipment, and defense vehicles, and its bottleneck isn’t silicon. It’s data that can only be collected by sending a machine into the real world and observing what happens. “You have to look for it in the real world,” he said, and no amount of synthetic simulation can completely close that gap. “It will be a long time before we can fully train a model that runs comprehensively in the real world.”


Energy problems are also real

If chips are the first bottleneck, energy lurks right behind them. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “It gives us access to more abundant energy,” he said. Of course, it’s not easy even in orbit. Space, DeSouza observed, is a vacuum, which eliminates convection and leaves radiation as the only way to dissipate heat into the surroundings, a process far slower and harder to engineer than the air and liquid cooling systems today’s data centers rely on. Even so, the company considers it a legitimate route.

Somewhat unsurprisingly, the deeper argument deSouza made was about efficiency through integration. Google’s strategy of co-engineering the entire AI stack, from custom TPU chips to models and agents, pays dividends in flops per watt (more computation per unit of energy) that companies buying off-the-shelf components can’t replicate, he suggested. “Running Gemini on TPUs is much more energy efficient than any other configuration,” he said, because the chip designers know what the model will look like before it ships.

Fouquet made a similar point later in the discussion. “Nothing can be precious,” he said. The industry is in a strange moment, investing massive amounts of capital out of strategic necessity. But more computing means more energy, and more energy comes at a price.

A different kind of intelligence

While the rest of the industry debates scale, architecture, and inference efficiency within the large language model paradigm, Bodnia is building something very different.

Her company, Logical Intelligence, is built on energy-based models (EBMs), a class of AI that doesn’t predict the next token in a sequence but instead tries to learn the underlying rules of the data, which she claims is closer to how the human brain actually works. “Language is the user interface between my brain and your brain,” she said. “Inference itself is not attached to any language.”
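As a rough illustration of the idea (a toy sketch, not Logical Intelligence’s actual system), an energy-based model assigns an “energy” score to whole candidate outputs and prefers the lowest-energy one, the candidate most compatible with the learned rules, rather than generating an answer token by token. The energy function here is a made-up stand-in:

```python
def energy(candidate, target):
    """Toy energy function: lower energy means the candidate
    better satisfies the constraint (here, summing to a target)."""
    return abs(sum(candidate) - target)

def pick_lowest_energy(candidates, target):
    """An EBM 'infers' by searching for the lowest-energy candidate,
    rather than predicting the next token in a sequence."""
    return min(candidates, key=lambda c: energy(c, target))

candidates = [(1, 2), (3, 4), (5, 5)]
print(pick_lowest_energy(candidates, target=7))  # (3, 4): sums to 7, energy 0
```

In a real EBM the energy function is learned from data and the candidate space is searched by optimization, but the selection principle is the same.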

Her largest model runs on 200 million parameters, compared to the hundreds of billions in leading LLMs, and she claims it runs thousands of times faster. More importantly, it is designed to update its knowledge as data changes rather than requiring retraining from scratch.

She argues that EBM is better suited to chip design, robotics, and other areas where systems must figure out physical rules rather than linguistic patterns. “When you drive a car, you’re not looking for patterns in any language. You look around, understand the rules of the world around you, and make decisions.” This is an interesting claim, and one that is likely to attract more attention in the coming months, given that the field of AI is beginning to ask whether scale alone is enough.

Agents, Guardrails, and Trust

Shevelenko spent much of his time explaining how Perplexity has evolved from a search product into what he now calls a “digital worker.” Its latest product, Perplexity Computer, is designed to be an employee that knowledge workers direct rather than a tool they use. “I wake up every day and have 100 people on my team,” he said of the opportunity. “What are you going to do to make the most of this?”

It’s a compelling pitch, but it also raises obvious questions about control, which I put to him. His answer was granularity. Enterprise administrators can specify which connectors and tools agents can access, and whether those permissions are read-only or read-write, a crucial distinction when agents operate inside enterprise systems. When Comet, Perplexity’s agentic browser, performs tasks on your behalf, it first presents a plan and asks for approval. Shevelenko said that while some users find the friction annoying, he believes it is essential, especially since joining Lazard’s board, where he said he found himself unexpectedly sympathetic to the conservative instincts of a CISO protecting a 180-year-old brand built entirely on customer trust. “Granularity is the foundation of good security hygiene,” he said.

Sovereignty, not just safety

Younis offered the panel’s most geopolitically significant observation: physical AI and national sovereignty are intertwined in ways purely digital AI never was.

The internet spread first as an American technology and, once the offline consequences became visible, faced backlash only at the application layer, the Ubers and DoorDashes. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, agricultural machinery, and more are showing up in the real world in ways governments cannot ignore, raising questions about safety, data collection, and who ultimately controls the systems operating within national borders. “Almost consistently, every country says this: We don’t want this information sitting within our borders in a physical form controlled by another country.” He told the crowd that fewer countries can currently deploy robotaxis than have nuclear weapons.

Fouquet framed it a little differently. China’s AI advances are real; DeepSeek’s launch earlier this year sent some in the industry into a panic. But that progress is constrained below the model layer. Without access to EUV lithography, Chinese chipmakers cannot manufacture the most advanced semiconductors, and models built on older hardware face a compounding disadvantage no matter how good the software gets. “Today, the United States has the data, the computing access, the chips, the talent. China does a very good job at the top of the stack, but it lacks some things below that,” Fouquet said.

The generation question

Toward the end of the panel, an audience member asked the clearly uncomfortable question: will all of this erode the critical thinking skills of the next generation?

As you might expect from people who have staked their careers on this technology, the answers were optimistic. DeSouza was quick to point to the scale of the problems humanity could finally solve with more powerful tools: neurological diseases whose biological mechanisms are not yet understood, greenhouse gas removal, grid infrastructure delayed for decades. “This will take us to the next level of creativity,” he said.

Shevelenko made a more practical point. Entry-level jobs may be disappearing, but the ability to start something independently is more accessible than ever. “[For] anyone with a Perplexity Computer… the constraint is your own curiosity and agency.”

Younis drew the clearest distinction between knowledge work and manual labor. He noted that the average American farmer is 58 years old, and that labor shortages in mining, long-haul trucking, and agriculture are chronic and growing, not because wages are too low but because people don’t want those jobs. In those areas, physical AI won’t be replacing willing workers; it will be filling a void that already exists and only seems likely to deepen.
