A bill aimed at regulating powerful artificial intelligence models is moving through the California state legislature, sparking an outcry from critics who warn it could kill the very technology it targets.
“With Congress at an impasse over AI regulation, California must act to get ahead of the foreseeable risks posed by rapidly evolving AI and promote innovation,” said Sen. Scott Wiener, D-San Francisco, the bill's sponsor.
But critics, including Democrats in the U.S. Congress, argue that threatening punitive measures against developers in the emerging sector could stifle innovation.
“The view of many in Congress is that SB 1047 is well-intentioned but poorly informed,” Rep. Nancy Pelosi, D-Calif., said in a statement, noting that party leaders had conveyed their concerns to Wiener.
“We want California to lead the way on AI in a way that protects consumers, data, intellectual property and more, but SB 1047 does more harm than good to get there,” Pelosi said.
Pelosi noted that Fei-Fei Li, a computer science professor at Stanford University who has been dubbed the “godmother of AI” for her stature in the field, is among those opposing the bill.
– Harm or Help? –
The bill, formally titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” doesn't fix the problems it is meant to address and “will severely harm AI research institutions, small tech companies, and the open source community,” Li wrote on X earlier this month, referring to startups, small businesses, researchers, and entrepreneurs.
Wiener said the bill aims to ensure the safe development of large-scale AI models by establishing safety standards for developers of systems that cost more than $100 million to train.
The bill would require developers of large-scale, “frontier” AI models to take precautions such as testing them before deployment, simulating hacker attacks, implementing cybersecurity measures, and protecting whistleblowers.
Recent amendments to the bill include replacing criminal penalties for violations with civil penalties, such as fines.
Wiener argues that AI safety and innovation are not mutually exclusive, and that amendments to the bill have addressed some of critics' concerns.
OpenAI, the developer of ChatGPT, also opposes the bill, saying it is concerned about a confusing patchwork of AI regulations across U.S. states and would prefer regulation at the national level.
At least 40 states have introduced bills to regulate AI this year, and six have adopted resolutions or enacted laws targeting the technology, according to the National Conference of State Legislatures.
OpenAI said the California bill could also drive innovators away from the state, home to Silicon Valley.
But Anthropic, another generative AI company that could be affected by the bill, says that after some welcome amendments, the bill has more benefits than drawbacks.
The bill also has prominent supporters in the AI community.
“Powerful AI systems offer incredible possibilities, but they also carry very real risks that need to be taken extremely seriously,” Geoffrey Hinton, the computer scientist known as the “godfather of AI,” said in a Fortune magazine opinion piece cited by Wiener.
“SB 1047 takes a very sensible approach to balancing these concerns.”
Hinton said having “substantive” AI regulation is key, and California is a natural place to start because it has become a launch pad for AI technology.
Meanwhile, professors and students at Caltech are urging people to sign a letter opposing the bill.
“We believe this bill poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development,” Anima Anandkumar, a professor at the California Institute of Technology, wrote on X.