August 8, 2024, 3:00 AM PDT
California is a global leader in artificial intelligence, which means the state is also expected to help figure out how to regulate it. Multiple bills are under consideration to that end, but none has garnered as much attention as Senate Bill 1047. Introduced by Senator Scott Wiener (D-San Francisco), the bill would require companies that produce the largest AI models to test and correct those models to ensure they don't facilitate serious harm. Is this a necessary step to keep AI accountable, or is it going too far? Simon Last, co-founder of an AI-powered company, and Paul Lecas, public policy director at the Software & Information Industry Association, offer their views.
This bill would help keep technology safe without stifling innovation.
Simon Last
As a co-founder of an AI-powered company, I've witnessed incredible advances in artificial intelligence. I design products that use AI every day, and it's clear to me that these systems will only get more powerful in the coming years. Along with advances in science and medicine, we'll see huge gains in creativity and productivity.
But as AI systems become more sophisticated, we must consider the risks they pose: Without proper precautions, AI could cause serious harm on an unprecedented scale, including cyberattacks on critical infrastructure, the development of chemical, nuclear and biological weapons, and automated crime.
California's SB 1047 strikes a balance between protecting public safety from such harms and supporting innovation, focusing commonsense safety requirements on the small number of companies developing the most powerful AI systems. The bill includes whistleblower protections for employees who report safety concerns at AI companies, and, importantly, it is designed to support California's incredible startup ecosystem.
SB 1047 only affects companies building next-generation AI systems that cost more than $100 million to train. Based on industry best practices, the bill requires safety testing and mitigation of foreseeable risks before these systems are released, as well as the ability to turn off the systems in an emergency. If the AI causes mass casualties or at least $500 million in damages, the state attorney general can sue the company to hold it liable.
These safety standards would apply to the AI “foundation models” on which startups develop their specialized products. This approach would more effectively mitigate risk across the industry without placing the burden on smaller developers. As a startup founder, I am confident that this bill will not hinder our ability to develop and grow.
Some critics argue that regulation should focus only on harmful uses of AI, not the underlying technology. But that view misses the point: the harmful uses in question, such as cyberattacks and bioweapon attacks, are already illegal. SB 1047 provides what's missing: a way to prevent harm before it occurs. Product safety testing is standard in many industries, including the manufacture of cars, airplanes and prescription drugs. Builders of the largest AI systems should be held to a similar standard.
Some have argued that this bill will drive companies out of the state. That's nonsense. California's supply of AI talent and capital is unmatched, and SB 1047 will not change the factors that attract companies to do business here. Moreover, the bill applies to foundation model developers doing business in California regardless of where they are headquartered.
Tech leaders including Meta's Mark Zuckerberg and OpenAI's Sam Altman have discussed AI regulation before Congress, warning of the technology's potentially devastating impacts and even calling for regulation, but expectations for congressional action are low.
With 32 of Forbes' top 50 AI companies based in California, our state bears a special responsibility for the industry's future. SB 1047 provides a framework for startups to thrive alongside larger companies while prioritizing public safety. By making smart policy choices now, state lawmakers and Governor Gavin Newsom can solidify California's position as a global leader in the advancement of responsible AI.
Simon Last is the San Francisco-based co-founder of Notion.
These nearly impossible standards will cost California its edge in AI
Paul Lecas
California is the birthplace of American innovation. Over the years, many information technology companies, including those my association represents, have benefited Californians by developing new products for consumers, improving public services, and revitalizing the economy. Unfortunately, legislation pending in the California Assembly threatens to undermine our best innovators by targeting frontier, or highly advanced, AI models.
The bill goes far beyond its stated focus of addressing real concerns about the safety of these models while ensuring that California can benefit from this technology. Rather than targeting foreseeable harms, such as the use of AI for predictive policing based on biased historical data, or holding accountable those who use AI for malicious purposes, SB 1047 would ultimately prohibit developers from releasing AI models that can be adapted to meet the needs of California consumers and businesses.
SB 1047 accomplishes this by effectively forcing those at the forefront of new AI technologies to anticipate, mitigate, and prevent all the ways their models could be misused—something that simply isn’t possible, especially since there are no universally accepted technical standards for measuring and mitigating the risks of cutting-edge models.
If SB 1047 becomes law, California consumers will lose access to AI tools they find helpful, a result akin to halting production of a prescription drug because someone took it illegally or overdosed. They will also lose access to AI tools designed to protect them from malicious activity enabled by other AI.
To be clear, concerns about SB 1047 do not reflect the notion that AI should proliferate without meaningful oversight. There is bipartisan agreement that guardrails around AI are necessary to reduce the risk of misuse and address foreseeable harms to public health and safety, civil rights, and other areas. States have led the way in enacting legislation to thwart the misuse of AI. For example, Indiana, Minnesota, Texas, Washington, and California have enacted laws banning the creation of deepfakes that depict intimate images of identifiable individuals and limiting the use of AI in election advertising.
Congress is also considering guardrails to protect elections, privacy, national security, and other interests while preserving America's technological advantage. Indeed, coordinated oversight at the federal level, without the fear of civil and criminal liability, would be best, as is being pursued through the AI Safety Institute launched at the National Institute of Standards and Technology. This approach recognizes that frontier model safety requires vast resources that even California cannot muster.
So while it is essential that elected leaders take steps to protect consumers, SB 1047 goes too far. The bill would force startups and established companies alike to weigh nearly impossible compliance standards against the value of doing business elsewhere. It could cost California its edge in AI innovation and strengthen the position of non-U.S. AI developers who are not held to the same principles of transparency and accountability, inevitably putting the privacy and security of U.S. consumers at risk.
Paul Lecas is director of international public policy and government relations at the Software & Information Industry Association in Washington.