California's AI bill, SB 1047, aims to prevent AI disasters, but Silicon Valley warns it will cause them.

UPDATE: California's Appropriations Committee passed SB 1047 on Thursday, August 15, with significant amendments that change the bill. You can read about them here.

Outside of science fiction, there’s no precedent for AI systems killing people or being used in large-scale cyberattacks. But some lawmakers want to put safeguards in place before bad actors can make that dystopian future a reality. A California bill known as SB 1047 seeks to stop real-world disasters caused by AI systems before they happen, and it’s set for a final vote in the state Senate in late August.

This seems like a goal we can all agree on, yet SB 1047 has drawn the ire of Silicon Valley players big and small, including venture capitalists, big tech trade groups, researchers, and startup founders. Many AI bills are flying around the country right now, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here's why.

What would SB 1047 do?

SB 1047 seeks to prevent large AI models from being used to cause "critical harms" against humanity.

The bill gives examples of "critical harms," such as a bad actor using an AI model to create a weapon that results in mass casualties, or using a model to orchestrate a cyberattack that causes more than $500 million in damage (for comparison, the CrowdStrike outage is estimated to have caused more than $5 billion in damage). The bill would hold developers (that is, the companies that develop the models) liable for implementing sufficient safety protocols to prevent such outcomes.

Which models and companies are affected by this rule?

SB 1047's rules apply only to the world's largest AI models: those that cost at least $100 million to train and use 10^26 FLOPS (floating-point operations) during training. That is an enormous amount of compute, though OpenAI CEO Sam Altman has said GPT-4 cost about that much to train. These thresholds could be raised as needed.
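For a rough sense of how those two criteria combine, here is a minimal sketch in Python; the constants, function name, and example figures are hypothetical illustrations of the thresholds described above, not the bill's legal test.

```python
# Illustrative sketch only: a toy check of whether a model would fall under
# SB 1047's thresholds as described above ($100 million in training cost
# and 10^26 FLOPS). The names and example numbers are hypothetical.

COST_THRESHOLD_USD = 100_000_000      # at least $100 million in training cost
COMPUTE_THRESHOLD_FLOPS = 1e26        # at least 10^26 floating-point operations

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Return True if a model meets both thresholds described in the article."""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= COMPUTE_THRESHOLD_FLOPS)

# Example: a hypothetical frontier model that crosses both thresholds
print(is_covered_model(training_cost_usd=150_000_000, training_flops=2e26))  # True
```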

Few companies today have developed public AI products large enough to meet those requirements, but tech giants like OpenAI, Google, and Microsoft are likely to do so soon. AI models (essentially massive statistical engines that identify and predict patterns in data) have generally become more accurate as they grow larger, and many expect that trend to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10x more compute, which would put it under the purview of SB 1047.

For open source models and their derivatives, the bill holds the original developer liable unless another developer spends three times as much as the original developer did to create a derivative of the model.
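As a simple illustration of that rule, here is a hedged sketch of how liability would shift under the three-times-spend condition; the function name and figures are hypothetical, not drawn from the bill's text.

```python
# Illustrative sketch only: the liability rule for derivative models as the
# article describes it. Names and figures are hypothetical.

def liable_developer(original_training_cost_usd: float,
                     derivative_spend_usd: float) -> str:
    """The original developer stays liable unless the derivative developer
    spends at least three times what the original developer spent."""
    if derivative_spend_usd >= 3 * original_training_cost_usd:
        return "derivative developer"
    return "original developer"

# Example: fine-tuning a $100M model for $20M leaves the original developer liable
print(liable_developer(100_000_000, 20_000_000))  # original developer
```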

The bill also requires safety protocols to prevent misuse of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address the risks posed by their AI models, and must hire third-party auditors annually to assess their AI safety practices.

The safety protocols must provide "reasonable assurance" that following them will prevent critical harms, not absolute certainty, which is of course impossible to provide.

Who will implement this? And how?

A new California agency, the Frontier Model Division (FMD), would oversee the rules. Every new public AI model that meets SB 1047's thresholds would have to be individually certified with a written copy of its safety protocol.

The FMD would be governed by a five-person board appointed by California's Governor and Legislature, including representatives from the AI industry, the open source community, and academia. The board would advise the California Attorney General on potential violations of SB 1047 and issue guidance on safety practices to AI model developers.

The developer's CTO would have to submit an annual certification to the FMD assessing the AI model's potential risks, the effectiveness of its safety protocols, and how the company is complying with SB 1047. Similar to breach notifications, if an "AI safety incident" occurs, the developer must report it to the FMD within 72 hours of learning of the incident.

If a developer fails to comply with these provisions, SB 1047 allows the California Attorney General to bring a civil action against the developer. For a model that costs $100 million to train, penalties could reach $10 million for a first violation and $30 million for subsequent violations. That penalty rate scales as AI models become more expensive.
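As a back-of-the-envelope illustration, those figures imply maximum penalties of roughly 10% of training cost for a first violation and 30% for subsequent ones; the sketch below applies that inferred ratio as an assumption and should not be read as the bill's actual penalty formula.

```python
# Illustrative sketch only: the penalty scaling implied by the article's figures
# (up to $10M / $30M for a $100M model, i.e. roughly 10% and 30% of training
# cost). This ratio is inferred, not the bill's actual formula.

def max_penalty_usd(training_cost_usd: float, first_violation: bool) -> float:
    """Estimate the maximum penalty implied by the article's figures."""
    rate = 0.10 if first_violation else 0.30
    return rate * training_cost_usd

print(max_penalty_usd(100_000_000, first_violation=True))   # 10000000.0
print(max_penalty_usd(100_000_000, first_violation=False))  # 30000000.0
```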

Finally, the bill includes protections for whistleblowers who attempt to disclose information about unsafe AI models to the California Attorney General.

What do supporters say?

California State Senator Scott Wiener, who authored the bill and represents San Francisco, told TechCrunch that SB 1047 is an attempt to learn from past policy failures around social media and data privacy, and to protect citizens before it's too late.

"We have a history with technology of waiting for harms to happen and then wringing our hands," Wiener said. "Let's not wait for something bad to happen. Let's just get out ahead of it."

Even if a company trains a $100 million model in Texas or France, as long as it does business in California, it will be subject to SB 1047. Wiener said Congress has done “surprisingly little to legislate on technology in the last 25 years,” so he thinks California should set a precedent here.

“We’ve met with all the big labs,” Wiener said when asked if he had met with OpenAI and Meta regarding SB 1047.

The bill is backed by two renowned AI researchers sometimes called the "godfathers of AI," Geoffrey Hinton and Yoshua Bengio. Both belong to a faction of the AI community concerned about the dangerous doomsday scenarios that AI technology could bring. These "AI doomers" have existed in the research world for a while, and SB 1047 could enshrine some of their preferred safeguards into law. The Center for AI Safety, a sponsor of SB 1047, penned an open letter in May 2023 asking the world to prioritize "mitigating the risk of extinction from AI" as seriously as pandemics and nuclear war.

"This is in the long-term interest of California industry and American industry more generally, as major safety incidents are likely to be the biggest barriers to further progress," Dan Hendrycks, executive director of the Center for AI Safety, said in an email to TechCrunch.

Hendrycks' motives have been called into question recently. In July, he publicly launched Gray Swan, a startup building "tools to help companies assess the risks of their AI systems," according to a press release. Following criticism that his startup could stand to gain if the bill passes, since SB 1047 requires developers to hire auditors, Hendrycks divested his equity stake in Gray Swan.

"I divested in order to send a clear signal," Hendrycks told TechCrunch in an email. "If the billionaire VC opposition to commonsense AI safety wants to show their motives are pure, let them follow suit."

What do opponents say?

Several prominent figures in Silicon Valley are speaking out against SB 1047.

Hendrycks' "billionaire VC opposition" likely refers to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has been a vocal opponent of SB 1047. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener arguing that the bill would "burden startups because of its arbitrary and shifting thresholds," creating a chilling effect on the AI ecosystem. As AI technology advances, it will get more expensive, meaning more startups will cross the $100 million threshold and become subject to SB 1047; a16z says several of its startups already spend that much training models.

Fei-Fei Li, often called the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill would "harm our budding AI ecosystem." While Li is a renowned Stanford AI researcher and a pioneer of the field, she also founded an AI startup called World Labs in April, valued at $1 billion and backed by a16z.

She joins influential AI academics like Stanford researcher Andrew Ng, who called the bill "an assault on open source" during a speech at a Y Combinator event in July. Open source models can create additional risk for their creators because, like all open source software, they are easily modified and deployed for arbitrary and potentially malicious purposes.

Meta’s Chief AI Scientist Yann LeCun posted on X that SB 1047 would hurt research efforts and is based on the fantasy of “existential risk” promoted by a few delusional think tanks. Meta’s Llama LLM is one of the most prominent examples of an open source LLM.

Startups aren’t happy with the bill either. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco-based AI startup hub, worries that SB 1047 will destroy his ecosystem. He argues that bad actors who cause serious harm should be punished, not AI labs that develop and deploy their technology openly.

"There is a deep confusion at the center of the bill: that LLMs can somehow differ in their levels of hazardous capability," Nixon said. "It's more than likely, in my mind, that all models have hazardous capabilities as defined by the bill."

But Big Tech, the very focus of the bill, is also upset about SB 1047. The Chamber of Progress, a trade group representing Google, Apple, Amazon, and other Big Tech giants, published an open letter opposing the bill, saying it would limit free speech and "crowd out innovation in California." Last year, Google CEO Sundar Pichai and other tech executives backed the idea of federal regulation of AI.

U.S. Rep. Ro Khanna, who represents Silicon Valley, issued a statement Tuesday opposing SB 1047, saying the bill would be “ineffective, penalize individual entrepreneurs and small businesses, and harm California’s spirit of innovation.”

Silicon Valley has traditionally been reluctant to let California set such sweeping tech regulations. In 2019, Big Tech played a similar card when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley opposed the bill, and months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for federal privacy legislation instead.

What happens next?

On August 15, SB 1047 will be sent to the California Senate floor with whatever amendments are approved. According to Wiener, the Senate floor is where the bill "lives or dies." Given the overwhelming support from lawmakers so far, it is expected to pass.

Anthropic submitted several proposed amendments to SB 1047 in late July, which Wiener said he and the California Senate's policy committees are actively considering. Anthropic is the first frontier AI model developer to publicly signal that it is willing to work with Wiener on SB 1047, even though it does not support the bill as it stands. This was largely seen as a win for the bill.

Anthropic's proposed changes include eliminating the FMD, reducing the attorney general's power to sue AI developers before harm occurs, and removing SB 1047's whistleblower protections. Wiener is generally positive about the amendments, but says they need approval from several Senate policy committees before they can be added to the bill.

If SB 1047 passes the Senate, the bill will be sent to California Gov. Gavin Newsom's desk, where he will make the final decision on whether to sign it into law before the end of August. Wiener said he has not spoken to Newsom about the bill and does not know his position.

The bill would not take effect immediately, as the FMD is not set to be formed until 2026. Further, even if the bill does pass, it is very likely to face legal challenges before then, perhaps from some of the same groups speaking up about it now.

Correction: This article originally referenced a previous draft of SB 1047’s language regarding who is responsible for a fine-tuned model. SB 1047 now says that a derivative model developer is only responsible for a model if they spend three times as much as the original model developer spent on training it.