California Governor Gavin Newsom will soon have to decide whether to sign SB-1047.
Ray Chavez/Mercury News via Getty Images
A controversial bill aimed at enforcing safety standards for large-scale artificial intelligence models has passed the California Assembly by a vote of 45 to 11. Following a 32-1 vote in the state Senate in May, SB-1047 now needs just one more procedural vote in the Senate before it can reach Gov. Gavin Newsom's desk.
As we’ve previously explored in detail, SB-1047 requires creators of AI models to implement a “kill switch” that would be activated if their models begin to pose “emerging threats to public safety and security,” particularly in cases of “limited human oversight, intervention, or supervision.” Some have criticized the bill for focusing on outlandish risks from imagined future AI rather than real, present-day harms from AI use cases like deepfakes and misinformation.
In announcing the bill's passage on Wednesday, bill sponsor and state Sen. Scott Wiener cited support from prominent figures in the AI industry, including Geoffrey Hinton and Yoshua Bengio (both of whom also signed a statement last year warning of an “extinction risk” from rapidly developing AI technologies).
In a recent op-ed published in Fortune magazine, Bengio said the bill “outlines the bare minimum to effectively regulate cutting-edge AI models,” and that by focusing on larger models (those costing more than $100 million to train), it would avoid impacting smaller startups.
“It's not enough for companies to grade their own assignments and simply issue nice-sounding guarantees,” Bengio wrote. “This is unacceptable for other technologies, like pharmaceuticals, aerospace, or food safety. Why should AI be treated any differently?”
But in a separate Fortune op-ed earlier this month, Stanford University computer science professor and AI expert Fei-Fei Li argued that the “well-intentioned” bill “would have significant unintended consequences not just for California but for the entire country.”
Because the bill would hold original developers liable even for versions of their models that others modify, Li argued, “developers will be forced to pull back and act defensively. This would have a major impact on academic research by limiting the open-source sharing of AI weights and models,” she wrote.
What will Newsom do?
A group of California business leaders sent an open letter to Governor Newsom on Wednesday urging him to reject a “fundamentally flawed” bill that “improperly regulates model development, not misuse.” The bill would “impose burdensome compliance costs” and “chill investment and innovation through regulatory ambiguity,” the letter said.
Governor Newsom spoke about AI issues at a symposium in May.
If the Senate approves the Assembly bill as expected, Newsom will have until September 30 to decide whether to sign the bill into law. If the governor vetoes it, the Legislature can override the veto with a two-thirds majority in both houses (which seems likely given the supermajority in favor of the bill).
“If we over-regulate, if we over-indulge, if we chase the hot buttons, we may put ourselves in a dangerous position,” Newsom said at a symposium at the University of California, Berkeley in May.
At the same time, Newsom noted that those worries about overregulation sit alongside calls for regulation coming from AI industry leaders themselves. “It's a very different environment than when the inventors, the godmothers and godfathers of this technology are saying, 'Help us, we want you to regulate this,'” he said at the symposium. “It's an interesting environment when they're scrambling to educate people and basically saying, 'We don't know what we've done, but we have to do something about it.'”