Since the AI boom began, attention has focused on the technology not just for its world-changing potential but also out of fear of what could go wrong. So-called AI pessimists suggest that artificial intelligence could become powerful enough to start a nuclear war or enable a massive cyberattack. Even top AI industry leaders say the technology is too dangerous and needs to be tightly regulated.
A bill garnering attention in California is poised to do just that. Introduced in February by state Sen. Scott Wiener, Senate Bill 1047 seeks to stave off the worst effects of AI by requiring companies that build powerful models to implement certain safeguards. Wiener rejects the characterization of SB 1047 as a doomsday bill. “AI has the potential to make the world a better place,” he told me yesterday. “But like any powerful technology, AI brings both benefits and risks.”
SB 1047 would impose several safety requirements on the largest AI models, those that cost more than $100 million to train. Under the proposed law, companies that create such models would have to submit plans outlining their risk-management procedures, agree to annual audits by a third party, and be able to turn the technology off at any time (in effect, install a kill switch). AI companies could be fined if their technology causes “significant harm.”
The bill, which is expected to come to a vote in the coming days, has met fierce resistance. Technology companies including Meta, Google and OpenAI have expressed concerns. Opponents argue that the bill will stifle innovation, hold developers liable for user abuse and drive AI businesses out of California. Eight Democratic members of Congress wrote a letter to Gov. Gavin Newsom last week, acknowledging that it was “somewhat unusual” for them to comment on state legislation but saying they felt compelled to do so. In the letter, the lawmakers argue that the bill is overly focused on the most dire effects of AI and “creates unnecessary risks to California's economy with little benefit to public safety.” They called on Newsom to veto the bill if it passes. Former House Speaker Nancy Pelosi weighed in separately on Friday, calling the bill “well-intentioned but poorly informed.”
The debate over this bill gets to the heart of a larger question about AI: Will the technology end the world, or have people just been watching too much science fiction? At the center of it all is Wiener. Because California is home to many of the top AI companies, the bill could have big national ramifications if it passes. I met with the state senator yesterday to discuss what he calls the “hardline politics” surrounding the bill, and whether he truly believes AI is capable of going berserk and launching nuclear weapons.
Our conversation has been condensed and edited for clarity.
Caroline Mimbs Nyce: Why has this bill been so controversial?
Scott Wiener: When you try to regulate any industry, even lightly (and this bill is a light touch), you're going to get pushback, particularly in the technology industry, which is very used to not being regulated in the public interest. And I say this as someone who has supported the technology industry in San Francisco for many years. I'm not anti-technology by any means. But you have to look out for the public interest as well.
The backlash is not surprising at all. And I respect that backlash. That's democracy. I have no respect for the fear-mongering and misinformation that Andreessen Horowitz and others are spreading. (Editor's note: Andreessen Horowitz, also known as a16z, did not respond to a request for comment.)
Nyce: What in particular is bothering you?
Wiener: Startup founders have been hearing that SB 1047 could put them in jail if their models cause unintended harm, which is a complete myth. Aside from the fact that the bill doesn't even apply to startups (you'd have to spend over $100 million training your model to qualify), the bill doesn't send anyone to jail. There have also been some inaccurate statements about open sourcing.
These are just a few examples. There have been a lot of inaccuracies, exaggerations and, at times, outright misstatements about this bill. Listen, I'm not naive. I come from San Francisco politics. I'm used to hardline politics. And this is hardline politics.
Nyce: You've also received backlash from politicians at the national level. What did you make of the letter from the eight members of Congress?
Wiener: I respect the signatories of this letter, but I respectfully and strongly disagree with them.
In an ideal world, all of this would be handled at the federal level. All of it. When I wrote California's net neutrality law in 2018, I made clear that I would happily step aside if Congress passed a strong net neutrality law. California passed its law, but six years later, Congress still has not passed one.
It would be great if Congress could go ahead and pass strong federal AI safety legislation, but given the track record so far, I'm not hopeful.
Nyce: Let's go through some of the common criticisms of this bill. First, that it takes a pessimistic view of AI. Do you really think AI could be involved in the “production and use” of nuclear weapons?
Wiener: Let me be clear: this is not a pessimist's bill. Opponents claim that the bill is focused on “sci-fi risks” and that anyone who supports it is a crazy pessimist. But this bill is not about Terminator-style risks. It is about very specific, very large harms.
We would be doing ourselves a huge disservice if we dismissed the prospect of AI models shutting down power grids, massively disrupting banking systems or making it much easier for bad actors to do those things. We know there are people trying to do that today, and they've sometimes succeeded, albeit in limited ways. Imagine what happens when it becomes much easier and more efficient.
When it comes to chemical, biological, radiological and nuclear weapons, it's not a question of what you can already find with a Google search; it's a question of what AI can help you figure out much more easily and efficiently.
Nyce: Another common criticism is that the bill doesn't address the real harms of AI, like job losses and biased systems.
Wiener: This is classic whataboutism. There is a whole range of risks from AI: deepfakes, algorithmic discrimination, job losses, misinformation and so on. These are all harms we should address and try to prevent. There are bills moving forward to do exactly that. But in addition, we should try to get ahead of these catastrophic risks and reduce the likelihood of them ever happening.
Nyce: This is one of the first major AI regulation bills to attract national attention. I'm curious what the experience has been like and what you've learned.
Wiener: We certainly learned a lot about the AI factions, for lack of a better term: the effective altruists and the effective accelerationists. It's like the Jets and the Sharks.
As is human nature, each side tries to caricature and demonize the other. Effective accelerationists cast effective altruists as crazy pessimists. Some effective altruists cast all effective accelerationists as extreme libertarians. Of course, as with most human opinions, it's a spectrum.
Nyce: Overall, you don't seem too annoyed.
Wiener: Frustrating as some of the inaccurate statements about this bill have been, the legislative process has in many ways been very thoughtful, with many people bringing very thoughtful opinions, whether for or against. I'm honored to be part of a legislative process that so many people care about, because this issue actually matters.
When opponents say the risks of AI are “science fiction,” we know that's not what they really believe, because if they truly thought the risks were science fiction, they wouldn't be opposing the bill. They wouldn't care; it would all just be science fiction. But it's not science fiction. It's real.