A California bill that would be the first to establish safeguards for the nation's largest artificial intelligence systems passed a key vote on Wednesday. Aimed at mitigating potential risks posed by AI, the proposal would require companies to test models and publicly disclose safety protocols to prevent models from being manipulated to take down the state's power grid or be used to create chemical weapons. Experts say the industry's rapid advances could make such scenarios possible in the future.
The bill narrowly passed the state Assembly on Wednesday and now faces a final vote in the state Senate, which has already passed it once. If approved there, it will be sent to the governor's desk. Gov. Gavin Newsom will then have until the end of September to decide whether to sign the bill, veto it, or let it become law without his signature. He has not made his position clear; earlier this summer he stayed mum on the measure while warning against overregulating AI.
Supporters said the bill would establish some of the first long-awaited safety standards for large-scale AI models in the U.S. It targets systems that cost more than $100 million to train — a bar that no current AI model has met.
The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and technology companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. Opponents argue that safety regulations should be set by the federal government and that California's bill targets developers rather than those who use and misuse AI systems.
Wiener said his bill took a “light touch” approach.
“Innovation and safety can go hand in hand, and California is leading the way,” he said in a statement after the vote.
Wiener's proposal is one of dozens of AI bills introduced in the California Legislature this year aimed at building public trust, fighting algorithmic discrimination and banning deepfakes related to elections and pornography. As AI increasingly influences Americans' daily lives, state lawmakers have sought to strike a balance between curbing the technology's potential risks and avoiding stifling a burgeoning local industry.
California, home to 35 of the world's top 50 AI companies, has been an early adopter of AI technology and could soon deploy generative AI tools to address problems like highway congestion and road safety.
Elon Musk, owner of the social platform X, formerly Twitter, and founder of the AI company xAI, voiced his support for the proposal this week but said it was a “tough decision.” xAI runs its own chatbot and image-generating tool, Grok, which has fewer safeguards than other prominent AI models.
“For over 20 years I have advocated for regulating AI, just as we do for any product or technology that poses potential risks to the public,” Musk wrote on X.
Several California members of Congress also opposed the bill, with former House Speaker Nancy Pelosi calling it “well-intentioned but ill-informed.”
The Chamber of Progress, a tech industry trade group funded by Silicon Valley, said the bill was “based on science fiction fantasies of what AI will be like.”
“This bill has more in common with Blade Runner or The Terminator than it does with the real world,” Todd O'Boyle, the group's senior director for technology policy, said in a statement after Wednesday's vote. “A theoretical scenario should not cripple a key economic sector in California.”
The bill also has support from Anthropic, an AI startup backed by Amazon and Google. Wiener amended the bill earlier this month to incorporate some of the company's proposals. The current version removes perjury penalties, limits the state attorney general's power to sue violators, and narrows the responsibilities of a proposed new AI regulator.
In a letter to Gov. Newsom, Anthropic said the bill is crucial to preventing devastating misuse of powerful AI systems, and that its “benefits likely outweigh the costs.”
Wiener also slammed critics who earlier this week dismissed the potential catastrophic risks from powerful AI models as unrealistic, saying: “If they really think the risks are bogus, then they should have no problem with this bill.”