California Governor Gavin Newsom vetoes landmark AI safety bill.

The bill would have required the most advanced AI models to undergo safety testing.

Developers would have had to ensure their technology included a so-called “kill switch,” allowing organizations to isolate and effectively shut down AI systems if they posed a threat.

It would also have mandated formal oversight of the development of so-called “frontier models,” or the most powerful AI systems.

“This bill does not consider whether artificial intelligence systems are deployed in high-risk environments, involve critical decision-making, or involve the use of sensitive data,” Newsom said in a statement.

“Instead, this bill imposes strict standards on even the most basic functions as long as large systems deploy them,” he added.

At the same time, Newsom announced plans to protect the public from the risks of AI and asked leading experts to help develop safeguards for the technology.

Over the past few weeks, Newsom has also signed 17 bills, including one aimed at cracking down on misinformation and so-called deepfakes, which include images, video or audio content created using generative AI.

California is home to many of the world’s largest and most advanced AI companies, including OpenAI, maker of ChatGPT.

The state’s role as a hub for many of the world’s largest technology companies means any legislation regulating the sector could have a significant impact on the industry nationally and globally.

State Senator Scott Wiener, the bill’s author, said the decision to reject it leaves AI companies “free from binding restrictions from U.S. policymakers, especially given Congress’s continued paralysis in regulating the technology industry in a meaningful way.”

Efforts in Congress to introduce safeguards for AI have stalled.

OpenAI, Google and Meta are among several major technology companies that have expressed opposition to the bill and warned it would hinder the development of important technologies.

“AI is still in its infancy as a general-purpose technology, so it would be premature to limit the technology itself as proposed,” said Wei Shen, senior analyst at Counterpoint Research.

“Instead, it would be more advantageous to regulate specific application scenarios that could cause harm in the future,” she added.