A controversial bill aimed at protecting Californians from an artificial intelligence apocalypse is causing an uproar in the tech industry. The bill passed a key committee this week but was amended to make it more palatable to Silicon Valley.
SB 1047, introduced by state Sen. Scott Wiener (D-San Francisco), is expected to go to a vote in the state Assembly later this month. If it passes the Legislature, Gov. Gavin Newsom would have to decide whether to sign or veto the landmark bill.
Supporters of the bill say it would create guardrails to prevent rapidly advancing AI models from causing catastrophic accidents, like shutting down the power grid without warning. They worry that the technology is developing faster than its human developers can control it.
Lawmakers aim to encourage developers to use the technology responsibly, and the bill gives the state attorney general the power to impose penalties in the event of an imminent threat or harm. It also requires developers to be able to fully shut down AI models under their direct control if things go wrong.
But some tech companies, including Facebook parent Meta Platforms, and influential politicians, such as Rep. Ro Khanna (D-Fremont), argue the bill would stifle innovation. Some critics say it focuses on apocalyptic, far-off scenarios rather than more immediate concerns such as privacy and misinformation, though other pending bills address those issues.
SB 1047 is one of roughly 50 AI-related bills introduced in state legislatures amid growing concerns about the technology's effects on jobs, misinformation and public safety. As politicians work to enact new laws to put guardrails on the burgeoning industry, some businesses and creators are suing AI companies in hopes that courts can set ground rules.
Wiener, who represents San Francisco, home to AI startups OpenAI and Anthropic, is a central figure in this debate.
On Thursday, he made significant changes to the bill that some believe weaken it but improve its chances of passing in the Legislature.
The amendments removed perjury penalties from the bill and changed the legal standard for developers regarding the safety of advanced AI models.
Additionally, plans to create a new government agency called the Frontier Model Division are no longer in the works. The original version of the bill required developers to submit safety measures to the new division; under the amended version, developers will instead submit them to the attorney general.
“I think some of these changes will increase the chances of the bill passing,” said Christian Grose, a professor of political science and public policy at the University of Southern California.
Some tech insiders, including the Center for AI Safety and Geoffrey Hinton, the so-called “godfather of AI,” support the bill, but others worry it could hurt California's burgeoning industry.
Eight California House members — Khanna and Reps. Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cárdenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Díaz Barragán (D-San Pedro) and Lou Correa (D-Santa Ana) — sent a letter to Gov. Newsom on Thursday urging him to veto the bill if it passes the state Legislature.
“[Wiener] is under real pressure in San Francisco between experts in the field who are telling him and others in California that AI is dangerous if it's not regulated, and people who are getting paid to do AI, who are doing cutting-edge research,” Grose said. “This could be a real flashpoint for him, for his career, for better or worse.”
Some big tech companies have said they support regulation but oppose Wiener's approach.
“While we agree with the way Wiener describes the bill and its goals, we remain concerned about the impact this bill will have on AI innovation in California, particularly open source innovation,” Meta state policy manager Kevin McKinley said during a meeting with Los Angeles Times editorial board members last week.
Meta develops a collection of open-source AI models, called Llama, that developers can build their own products on top of. Meta released Llama 3 in April, and it has already been downloaded 20 million times, the tech giant said.
Meta declined to discuss the new amendments. McKinley said last week that SB 1047 “is actually a very difficult bill to redline and amend.”
A spokesperson for Gov. Newsom said his office typically does not comment on pending bills.
“The governor will evaluate this bill on its merits once it reaches his desk,” spokeswoman Izzy Gurdon said in an email.
Anthropic, the San Francisco-based AI startup known for its AI assistant Claude, said it would support the bill if it were amended. Hank Dempsey, Anthropic's state policy lead, suggested changes to the bill, such as shifting the focus from upfront enforcement to holding companies accountable for causing catastrophes.
Wiener said the proposed amendments took into account Anthropic's concerns.
“We can drive both innovation and safety,” Wiener said in a statement. “The two are not mutually exclusive.”
It's unclear whether the amendments will change Anthropic's position on the bill: Anthropic said in a statement Thursday that it would review the new “bill language once it is released.”
Russell Wald, associate director of Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), which aims to advance AI research and policy, said he still opposes the bill.
“The latest amendments appear to prioritize appearance over substance,” Wald said in a statement. “They appear less controversial to appease a few large AI companies, but do little to address the real concerns of academic institutions and the open source community.”
It's a delicate balance for lawmakers trying to consider concerns about AI while also supporting the state's tech sector.
“What many of us are trying to do is find a regulatory environment that allows those guardrails to exist without stifling the innovation and economic growth that comes with AI,” Assemblymember Buffy Wicks (D-Oakland) said after the committee meeting on Thursday.
Times reporter Anabel Sosa contributed to this report.