Few places in the world stand to benefit more from a thriving AI industry than California, and none have as much to lose if public trust in the industry suddenly collapses.
In May, the California Senate passed SB 1047, an AI safety bill aimed at ensuring the safe development of large-scale AI systems through clear, predictable, common-sense safety standards, by a vote of 32-1. The bill is expected to be voted on by the state Assembly this week; if signed into law by Governor Gavin Newsom, it will be a major step toward protecting Californians and the state's burgeoning AI industry from malicious use.
Late Monday, Elon Musk surprised many by expressing his support for the bill in a post on X. “This is a tough call and it will anger some people, but I think that all things considered, California should probably pass SB 1047, the AI Safety Act,” he wrote. “For over 20 years, I have been an advocate for regulating AI, as well as any product or technology that poses a potential risk to the public.”
The post came a few days after I spoke with Musk about SB 1047. Unlike other corporate leaders, who often hesitate and consult their PR teams and lawyers before taking a position on safety legislation, Musk did not waver. After I explained why the bill matters, he asked for a review of the text to ensure it was fair and not open to abuse, and he voiced his support the next day. That swift decision is a testament to his longtime advocacy for responsible AI regulation.
Last winter, Senator Scott Wiener, the bill's author, reached out to the Center for AI Safety (CAIS) Action Fund seeking technical proposals and co-sponsors. As the founder of CAIS, I see a commitment to innovation in technologies that affect public safety as a cornerstone of our mission. To sustain innovation, we must anticipate potential pitfalls, because prevention is better than cure. Recognizing the groundbreaking nature of SB 1047, we were happy to collaborate and have been advocating for its adoption ever since.
For the most advanced AI models, the bill requires large companies to test for dangers, implement safeguards, ensure shutdown capabilities, protect whistleblowers, and manage risk. These measures are intended to prevent cyberattacks on critical infrastructure, the bioengineering of viruses, and other malicious uses that could cause widespread destruction and mass casualties.
Anthropic recently warned that AI risks could emerge in “as little as one to three years,” pushing back against critics who dismiss safety concerns as imaginary. Of course, if these risks really are imaginary, developers need not fear liability. Moreover, developers have already pledged to address these issues, consistent with President Joe Biden's recent executive order and with the commitments reaffirmed at the 2024 AI Seoul Summit.
Enforcement is intentionally simple, allowing the California Attorney General to act only in extreme cases. The bill imposes no licensing requirements, does not punish honest mistakes, and does not criminalize open-sourcing, the practice of making software source code freely available. It was not drafted by big tech companies or by people focused on far-future scenarios. Rather, it is meant to keep cutting-edge labs from neglecting important safeguards in a rush to release their most capable models.
Like most AI safety researchers, I believe in the potential of AI to bring enormous benefits to society and have a deep interest in preserving that potential, as does California, a global leader in AI. This shared concern is why state politicians and AI safety researchers are so enthusiastic about SB 1047. History teaches us that a catastrophe like the Three Mile Island nuclear accident on March 28, 1979, can set back a thriving industry by decades.
Regulators overhauled nuclear safety standards and protocols in response to the partial reactor meltdown. These changes increased the cost and operational complexity of nuclear power plants, as operators invested in new safety systems and complied with rigorous oversight. The regulatory burden made nuclear energy less attractive, and its expansion stalled for the next 30 years.
The Three Mile Island accident led to increased reliance on coal, oil, and natural gas. It is often said to have been a huge missed opportunity to move towards a more sustainable and efficient global energy infrastructure. While it is unclear whether the accident could have been avoided with stricter regulations, it is clear that one accident can have a significant impact on public perception and hinder the long-term potential of an entire industry.
Some may be skeptical, believing that any government action against industry will be inherently detrimental to business, innovation, and the competitiveness of a state or country. The Three Mile Island accident proves this view to be short-sighted, as measures that reduce the likelihood of disaster are often in the long-term interest of an emerging industry. And this is not the only lesson for the AI industry.
When social media platforms first emerged, many reacted with enthusiasm and optimism. In a 2010 Pew Research Center survey, 67% of American adults who used social media said it had a mostly positive impact. Futurist Brian Solis captured this spirit: “Social media is a new way to communicate, a new way to build relationships, a new way to build businesses, a new way to build a better world.”
His prediction was three-quarters correct.
Concerns over privacy violations, misinformation, and effects on mental health have transformed public perception of social media, with 64% of Americans now viewing it negatively. Scandals like Cambridge Analytica have eroded trust, while fake news and divisive content have highlighted social media's role in dividing society. A survey by the Royal Society for Public Health found that 70% of young people have experienced cyberbullying, and 91% of 16-to-24-year-olds say social media damages their mental health. Users and policymakers around the world are increasingly vocal about the need for stricter regulation and greater accountability from social media companies.
This isn't because social media companies are particularly bad. Like any emerging industry, the early days were a “lawless wilderness” where companies raced to corner a rapidly growing market and government regulation was insufficient. Platforms that hosted addictive and often harmful content thrived, and we are all now paying the price. Consumer distrust grew, and these companies became targets of regulators, legislators, and courts.
The optimism around social media was not misplaced. The technology did have the potential to break down geographic barriers, foster a sense of global community, democratize information, and spur positive social movements. Author Erik Qualman warned, “We have no choice whether to use social media; the question is how well we use it.”
The lost potential of social media and nuclear energy is tragic, but it pales in comparison to what we stand to lose with AI. Smart legislation like SB 1047 is our best chance to prevent that loss while protecting innovation and competition.
Our history of technological regulation shows our foresight and ability to adapt. When railroads transformed transportation in the 19th century, governments standardized track gauges, signals, and safety protocols. The spread of electricity brought codes and standards to prevent fires and electrocution. The automobile revolution required traffic laws and safety measures like seat belts and airbags. In aviation, agencies like the FAA established strict safety standards, making air travel the safest mode of transportation.
History only teaches us lessons; it is up to us to heed them.