This week in AI news: VCs (and developers) are excited about AI coding tools.

Hello, everyone, and welcome to TechCrunch’s regular AI newsletter. Sign up here to receive this newsletter every Wednesday.

This week in AI, Magic and Codeium, two startups that develop tools to generate and suggest code, raised a combined total of nearly $500 million, a hefty sum even by AI standards, especially considering that Magic has yet to launch a product or generate revenue.

So why are investors so excited? Well, coding is not an easy business, nor is it cheap. And there is a demand for ways to streamline the more arduous processes surrounding coding, both among companies and individual developers.

According to one survey, the average developer spends nearly 20 percent of their time maintaining existing code instead of writing new code. A separate study found that excessive code maintenance (including addressing technical debt and fixing poorly performing code) costs $85 billion a year in lost opportunities.

Many developers and companies believe that AI tools can help here. And, for what it's worth, consultants agree. In a 2023 report, McKinsey analysts wrote that AI coding tools could enable developers to write new code in half the time and optimize existing code in about two-thirds the time.

Now, coding AI is not a panacea. The McKinsey report also found that certain, more complex workloads, such as those requiring familiarity with a particular programming framework, do not necessarily benefit from AI. In fact, according to the report's co-authors, junior developers took longer to complete some tasks with AI tools than without them.

“Participant feedback suggests that developers actively iterated with the tool to achieve (high) quality, suggesting that the technology is best used to augment developers rather than replace them,” the co-authors wrote, underscoring that AI cannot replace experience. “Ultimately, to maintain code quality, developers need to understand the attributes that constitute quality code and prompt the tool for the correct output.”

AI coding tools also have unresolved security and IP-related issues. Some analyses suggest that these tools have resulted in more buggy code being pushed to codebases over the past few years. Meanwhile, code-generating tools trained on copyrighted code have been caught spitting out that code when prompted in certain ways, creating liability risks for the developers who use them.

But that won't dampen developers' or their employers' enthusiasm for AI coding.

A 2024 GitHub poll found that the vast majority of developers (over 97%) have adopted AI tools in some form. The same poll found that between 59% and 88% of companies encourage or at least allow the use of assistive programming tools.

So it’s no surprise that the AI coding tools market could reach $27 billion by 2032 (according to Polaris Research), especially considering Gartner predicts that 75% of enterprise software developers will use AI coding assistants by 2028.

The market is already hot. Generative AI coding startups Cognition, Poolside, and Anysphere closed huge rounds last year, and GitHub’s AI coding tool Copilot has over 1.8 million paid users. The productivity gains these tools can provide have been enough to convince investors and customers to ignore the flaws. But it remains to be seen whether this trend will continue, and for how long.

News

Attracting investment in “emotion AI”: Julie explains how some VCs and corporations are gravitating toward “emotion AI,” the more sophisticated sibling of sentiment analysis, and how this could be problematic.

Why Home Robots Are Still a Bad Thing: Brian explores why so many attempts at home robots have failed so spectacularly. He says it comes down to price, features and effectiveness.

Amazon hires Covariant founders: On the topic of robots, Amazon last week hired the founders of robotics startup Covariant and “about a quarter” of the company’s staff. It also signed a non-exclusive license to use Covariant’s AI robot models.

NightCafe, the OG image generator: I introduced NightCafe, one of the first image generators and a marketplace for AI-generated content. It's still alive and kicking.

Midjourney moves into hardware: Midjourney, a competitor to NightCafe, is jumping into hardware. The company announced the news in a post on X, saying its new hardware team will be based in San Francisco.

SB 1047 passed: The California legislature just passed AI bill SB 1047. Max writes about why some people hope the governor won’t sign it.

Google launches election security tools: Google is gearing up for the U.S. presidential election by rolling out safeguards for more generative AI apps and services. As part of the restrictions, most of the company’s AI products will not respond to election-related topics.

Apple and Nvidia may invest in OpenAI: Nvidia and Apple are said to be in talks to contribute to OpenAI’s next funding round, which could value the company behind ChatGPT at $100 billion.

Research Paper of the Week

Do we need a game engine when we have AI?

Researchers at Tel Aviv University and Google’s AI R&D arm DeepMind last week previewed GameNGen, an AI system that can simulate the game Doom at up to 20 frames per second. Trained on massive amounts of Doom gameplay footage, the model effectively predicts the next “game state” as a player “controls” a character in the simulation, generating the game in real time.

Image: AI-generated Doom-like levels (Source: Google)

GameNGen isn’t the first model to do this. OpenAI’s Sora can simulate games including Minecraft, and a group of university researchers unveiled an Atari game-simulating AI earlier this year. (Other models like this range from World Models to GameGAN to Google’s Genie.)

However, GameNGen is one of the most impressive attempts at game simulation to date in terms of performance. The model is not without major limitations, namely graphical glitches and an inability to “remember” more than about 3 seconds of gameplay (i.e., GameNGen can’t actually create a functional game). Still, it could be a step toward a completely new kind of game: procedurally generated games on steroids.
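The core loop GameNGen relies on (predict the next frame from a short window of recent frames plus the player's action, then feed that prediction back in) can be sketched in a few lines. This is a toy stand-in, not the paper's method: the real system runs a trained diffusion model where the `hash` placeholder sits, and the class and method names here are invented for illustration.

```python
from collections import deque

class ToyFrameSimulator:
    """Toy sketch of autoregressive game simulation: the next frame is
    derived from a rolling context of past frames plus the current
    player action. A real model would run a neural network forward
    pass; we substitute a deterministic placeholder computation."""

    def __init__(self, context_frames=60):
        # Short rolling context, mirroring the model's ~3 seconds of
        # "memory" (e.g. 60 frames at 20 fps). Older frames fall off.
        self.context = deque([0], maxlen=context_frames)

    def step(self, action: str) -> int:
        # Placeholder for the model: combine context + action into a
        # pseudo-frame value, then append it so future predictions
        # condition on it (this feedback loop is the key idea).
        frame = hash((tuple(self.context), action)) % 256
        self.context.append(frame)
        return frame

sim = ToyFrameSimulator()
frames = [sim.step(a) for a in ["forward", "forward", "turn_left", "fire"]]
```

Because each output is appended to the context, identical action sequences replay identically, which is also why the bounded context length caps how far back the simulation can "remember."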

Model of the Week

As my colleague Devin Coldewey has written before, AI is taking over weather forecasting, from simple questions like “How long will this rain last?” to 10-day forecasts and century-long predictions.

One of the latest models to hit the scene is Aurora, a product of Microsoft’s AI research lab. Trained on a variety of weather and climate data sets, Aurora can be fine-tuned for specific forecasting tasks with relatively little data, Microsoft claims.

Image: Microsoft’s Aurora (Source: Microsoft)

“Aurora is a machine learning model that can predict atmospheric variables such as temperature,” Microsoft explains on the model’s GitHub page. “We offer three specialized versions: one for medium-resolution weather forecasting, one for high-resolution weather forecasting, and one for air pollution forecasting.”

Aurora’s performance appears to be quite good compared to other atmospheric tracking models (it can generate a five-day global air pollution forecast or a 10-day high-resolution weather forecast in less than a minute). But it’s not immune to the hallucinatory tendencies of other AI models. Aurora is prone to mistakes, and Microsoft warns that it “should not be used by people or businesses to plan operations.”
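Microsoft's claim that Aurora can be fine-tuned "with relatively little data" follows a standard transfer-learning pattern: keep the pretrained backbone frozen and fit only a small task-specific head on the new data. The sketch below illustrates that pattern generically; every name and function here is hypothetical, and this is not Aurora's actual API or architecture.

```python
def frozen_backbone(x: float) -> list[float]:
    # Stand-in for a pretrained feature extractor whose weights are
    # never updated during fine-tuning.
    return [1.0, x, x * x]

def fit_head(samples):
    """Fit a linear head w over frozen backbone features via the
    normal equations (F^T F) w = F^T y; samples is a small list of
    (input, target) pairs, since tiny data is the whole point."""
    feats = [frozen_backbone(x) for x, _ in samples]
    ys = [y for _, y in samples]
    n = len(feats[0])
    A = [[sum(f[i] * f[j] for f in feats) for j in range(n)] for i in range(n)]
    b = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n)]
    for col in range(n):                       # naive Gaussian elimination
        piv = A[col][col]
        for r in range(col + 1, n):
            m = A[r][col] / piv
            A[r] = [a - m * c for a, c in zip(A[r], A[col])]
            b[r] -= m * b[col]
    w = [0.0] * n
    for i in reversed(range(n)):               # back-substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict(w, x):
    # Frozen features, learned head: only w carries task knowledge.
    return sum(wi * fi for wi, fi in zip(w, frozen_backbone(x)))

# "Fine-tune" on just four task-specific observations (y = x^2 + x + 1).
head = fit_head([(0.0, 1.0), (1.0, 3.0), (2.0, 7.0), (3.0, 13.0)])
```

The design choice worth noting is that all generalization lives in the frozen backbone; the head only adapts its features to the new task, which is why a handful of samples can suffice.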

Grab bag

Last week, Inc. reported that Scale AI, an AI data labeling startup, was laying off a number of annotators (those who label the training data sets used to develop AI models).

There has been no official announcement as of press time, but one former employee told Inc. that hundreds of layoffs have occurred. (Scale AI denies this.)

Most annotators working for Scale AI are not directly employed by the company. Instead, they are hired by Scale subsidiaries or third-party companies, which often results in less stable employment. Labelers sometimes go long stretches without receiving work. Or, as recently happened to contractors in Thailand, Vietnam, Poland, and Pakistan, they are unceremoniously kicked off Scale’s platform.

Regarding last week’s layoffs, a Scale spokesperson told TechCrunch that the company hires contractors through a firm called HireArt. “These individuals were employees of HireArt and received severance pay and COBRA benefits from HireArt through the end of the month. Fewer than 65 were laid off last week. We’ve built out this contract workforce and right-sized it as our operating model has evolved over the last nine months, and the layoffs in the U.S. are fewer than 500.”

It’s a little hard to parse out exactly what Scale AI means from this carefully worded statement, but we’re looking into it. If you’re a former employee of Scale AI or a recently laid-off contractor, please contact us in a way that feels comfortable to you.