Fei-Fei Li has selected Google Cloud, where she once led AI, as the main computing provider for World Labs.

Cloud providers are chasing AI unicorns, the most recent being Fei-Fei Li’s World Labs. The startup chose Google Cloud as its primary compute provider for training its AI models, in a deal worth hundreds of millions of dollars. However, the fact that Li once served as chief AI scientist at Google Cloud was not a factor, the company said.

At the Google Cloud Startup Summit on Tuesday, the companies announced that World Labs will use a significant portion of its funding to rent GPU servers on Google Cloud Platform and, ultimately, to train “spatial intelligence” AI models.

A handful of well-funded startups building AI models are making a splash in the world of cloud services. The biggest deals include OpenAI, which trains and runs AI models exclusively on Microsoft Azure, and Anthropic, which uses AWS and Google Cloud. These companies regularly pay millions of dollars for computing services, and as their AI models scale, they may one day need to pay even more. This makes these startups valuable customers for Google, Microsoft, and AWS to build relationships with early on.

World Labs is building a unique multimodal AI model with substantial computational requirements. The startup raised $230 million at a valuation of more than $1 billion, in a deal led by a16z, to build its AI world model. James Lee, general manager of startups and AI at Google Cloud, told TechCrunch that World Labs’ AI models could one day process, generate, and interact with video and geospatial data. World Labs calls these AI models “spatial intelligence.”

Li has deep ties to Google Cloud, having led the company’s AI efforts until 2018. But Google denies that this deal is a product of that relationship, and it rejects the idea that its cloud services are just a commodity. Instead, Lee said, the abundant supply of AI chips and services such as high-performance toolkits for scaling AI workloads were the bigger factors.

“Fei-Fei is clearly a friend of GCP,” Lee said in an interview. “GCP wasn’t the only option they considered. But in the end, they chose us for all the reasons we talked about: our AI-optimized infrastructure and our ability to meet their scalability requirements.”

Google Cloud gives AI startups the option to choose between its proprietary AI chips, tensor processing units (TPUs), and Nvidia GPUs, which Google must purchase and has in more limited supply. Google Cloud is working to get more startups to train AI models on TPUs, primarily as a means of reducing its dependence on Nvidia. All cloud providers today are constrained by the shortage of Nvidia GPUs, which is why many are building their own AI chips to meet demand. Although some startups run training and inference solely on TPUs, GPUs remain the industry’s preferred AI training chip, according to Google Cloud.

In this deal, World Labs chose to train its AI models on GPUs, though Google Cloud did not reveal what influenced that decision.

“We have been working with Fei-Fei and her product team, and at this stage of the product roadmap, it made more sense to work with us on the GPU platform,” Lee said in an interview. “But that doesn’t necessarily mean it’s a permanent decision. Sometimes (startups) move to other platforms like TPU.”

Lee did not disclose the size of World Labs’ GPU cluster, but cloud providers often dedicate large supercomputers to startups that train AI models. Google Cloud has promised Magic, another startup building AI models, a cluster of “tens of thousands of Blackwell GPUs,” each more powerful than a high-end gaming PC.

These clusters are easier to promise than to fulfill. Google’s cloud services rival Microsoft is reportedly struggling to meet OpenAI’s massive computing requirements, forcing the startup to leverage other options for computing power.

World Labs’ agreement with Google Cloud is not exclusive, meaning the startup can still sign deals with other cloud providers. But Google Cloud said it expects to handle the majority of World Labs’ compute business going forward.