A controversial bill that would regulate how AI models are developed and trained is inching closer to becoming law in California, frustrating many in the field.
California's Senate Bill 1047 would require AI companies to build robust safety frameworks into models that cost more than $100 million to train.
The technology industry, much of it based in Silicon Valley, is reportedly debating how the bill would affect its operations.
SB 1047 would require AI developers to build in kill switches and undergo annual safety-compliance audits, and it would prohibit the creation, use, or distribution of potentially dangerous models.
Elon Musk, whose AI company xAI develops the chatbot Grok, a platform recently criticized for spreading false information, has voiced his support for the bill.
“This is a tough call and it will upset some people, but all things considered, I think California should probably pass SB1047, the AI Safety Act,” Musk said in a post on X on Monday.
The billionaire tech entrepreneur also said he has advocated for AI regulation for more than two decades and has long called for stricter regulatory oversight.
But there are also strong opponents of the bill, including OpenAI, a company co-founded by Musk.
The San Francisco tech company behind the popular AI chatbot ChatGPT sent a letter last week to the bill's author, state Sen. Scott Wiener (D-San Francisco), arguing that the measure would undermine California's position as a global leader in AI.
Andrew Ng, former head of Google Brain, Google's deep-learning research project, also spoke out against the bill in June, arguing that it would “hold builders of large-scale AI models liable when someone uses their models.”
“I am deeply concerned about California's bill, SB-1047,” Ng tweeted at the time. “This is a lengthy, complicated bill with many sections calling for safety assessments, model shutdown capabilities, and more.”
If the bill becomes law, AI developers would have to follow five key rules, including being able to shut down their models quickly and creating a written safety and security plan. Developers would also have to keep an unredacted copy of this safety plan for as long as the model is available, plus five years, and keep records of any updates.
Beginning Jan. 1, 2026, developers would be required to hire an independent auditor each year to review compliance with the law and to keep a full audit report on file for the same period as the safety plan.
Upon request, developers would have to give the Attorney General access to their safety plans and audit reports. Additionally, developers would be prohibited from using or releasing their models for commercial or public purposes if there is a significant risk of causing serious harm.
The bill has now cleared a key Assembly committee and is expected to go before the full Assembly later this week. The state Senate already passed it with strong support in May.
If the Legislature approves it, the bill will go to Gov. Gavin Newsom, who will have until Sept. 30 to decide whether to veto it or sign it into law.