
California’s AI disaster prevention bill, SB 1047, has faced significant pushback from many corners of Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding several amendments suggested by AI company Anthropic and other opponents of the bill.
The bill passed California’s Appropriations Committee on Thursday, a major step toward becoming law, and it did so with several key changes, state Senator Scott Wiener’s office told TechCrunch.
“We have accepted several of the very reasonable amendments proposed, and believe they address the core concerns expressed by Anthropic and many others in the industry,” Senator Wiener said in a statement to TechCrunch. “These amendments build on the significant changes I previously made to SB 1047, which are intended to accommodate the unique needs of the open source community, a critical source of innovation.”
SB 1047 still aims to prevent large AI systems from killing many people, or causing cybersecurity events that inflict more than $500 million in damages, by holding developers liable. But the bill now grants California’s government less power to hold AI labs to account.
What does SB 1047 do now?
Most notably, the bill no longer allows the California attorney general to sue AI companies for negligent safety practices before a catastrophic event occurs, a change Anthropic had lobbied for.
Instead, the California attorney general can seek injunctive relief, asking a court to stop a company from an operation it deems dangerous, and can still sue an AI developer if its model does cause a catastrophic event.
SB 1047 also no longer creates the Frontier Model Division (FMD), a new government agency previously included in the bill. However, the bill still creates the Board of Frontier Models, the core of the FMD, and places it within an existing government agency. In fact, the board is now larger, with nine members instead of five. The board will still set computing thresholds for covered models, issue safety guidance, and issue regulations for auditors.
Senator Wiener also amended SB 1047 so that AI labs are no longer required to submit certifications of their safety test results “under penalty of perjury.” Instead, these labs must simply submit public “statements” outlining their safety practices, and the bill no longer imposes criminal liability.
SB 1047 also now includes softer language around how developers must ensure the safety of their AI models. The bill requires developers to exercise “reasonable care” to ensure their models do not pose a substantial risk of causing a catastrophe, rather than the “reasonable assurance” the bill previously demanded.
Additionally, lawmakers added protections for people who fine-tune open source models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer under SB 1047. Liability still falls on the original, larger developer of the model.
Why are all these changes happening now?
The bill has moved through California’s Legislature with relative ease so far, despite significant opposition from U.S. lawmakers, prominent AI researchers, Big Tech, and venture capitalists. These amendments are likely an attempt to appease SB 1047’s opponents and hand Governor Newsom a less controversial bill he can sign into law without losing support from the AI industry.
Newsom has not publicly commented on SB 1047, but he has previously signaled his commitment to AI innovation in California.
Anthropic told TechCrunch that it was reviewing the changes to SB 1047 before making its position known. Not all of the amendments Anthropic proposed were adopted by Senator Wiener.
“The goal of SB 1047 is, and always has been, to advance AI safety while allowing innovation across the ecosystem,” said Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund. “The new amendments will support that goal.”
Still, these changes are unlikely to appease SB 1047’s staunchest critics. While the bill is notably weaker than it was before these amendments, it still holds developers liable for the dangers of their AI models. That core premise of SB 1047 is exactly what many opponents object to, and these amendments do little to change it.
“The edits are cosmetic,” Martin Casado, a general partner at Andreessen Horowitz, said in a tweet. “They do not address the real issues or criticisms of the bill.”
In fact, shortly after SB 1047 passed out of committee on Thursday, eight U.S. lawmakers representing California wrote a letter to Governor Newsom urging him to veto the bill. They wrote that SB 1047 “would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development.”
What's next?
SB 1047 now heads to the California Assembly floor for a final vote. If it passes there, it will need to be referred back to the state Senate for another vote because of these latest amendments. If it clears both chambers, it will land on Governor Newsom’s desk, where it could be vetoed or signed into law.