The California state legislature has passed SB 1047, a first-of-its-kind AI safety bill that would require developers of next-generation artificial intelligence models to adopt safety plans to prevent large-scale damage and mass casualties. The state has more AI activity than any other, and this is the first step toward regulating AI there. Supporters call the bill common-sense, lightweight regulation. The industry that stands to profit hugely from these models is vehemently opposed and is doing everything in its power to kill it.
Despite this resistance, the bill has moved smoothly through the Assembly. In its initial Assembly floor vote today, the tally stood at 41 in favor and 9 against, with support from at least two Republicans; the final count will be announced tonight. The bill now returns to the Senate for a concurrence vote. It originally passed the Senate 32-1, with 7 abstentions, so it is expected to clear that chamber easily.
In the run-up to the final vote, the California Chamber of Commerce was touting poll results showing that a majority of voters surveyed opposed SB 1047.
But while the Chamber touted the poll in Politico's influential California newsletter, it didn't initially release the full results. (They were quietly added later that afternoon or evening.) Once the results were out, it was easy to see why. Here's how more than 1,000 Californians surveyed between Aug. 9 and 12 were introduced to the bill:
Lawmakers in Sacramento have proposed a new state bill, SB 1047, that would create a new regulatory agency in California that would determine how AI models are developed. The new law could make small startups pay tens of millions of dollars in fines if they don't follow the orders of state government bureaucrats. Some say burdensome regulations like SB 1047 could drive companies out of state or even the country, taking investment and jobs away from California. Given what you just read, do you support or oppose proposals like SB 1047?
After hearing all this, 28 percent of respondents supported the bill, 46 percent opposed it, and 26 percent were neutral.
The poll was conducted by Adam Rosenblatt of Bold Decision, who did not comment by publication time. Dennis Davis of the California Chamber of Commerce told me that the poll questions were drawn directly from the bill's language, pointing to the provisions on fines developers could face. But SB 1047 applies only to unprecedentedly large and expensive AI models: specifically, to developers who spend more than $100 million training a model, more than any publicly known training-cost estimate to date. And the California Attorney General can sue only if a model causes harm or poses an imminent threat to public safety and the developer ignored its own safety plan or fell short of industry best practices.
As state Sen. Scott Wiener, the bill's author, points out, all the major AI companies have already signed on to voluntary safety commitments.
SB 1047 applies to anyone doing business in California, the world's AI capital and the fifth-largest economy on Earth. Several researchers at major AI companies I spoke to scoffed at the idea that companies would leave to avoid the bill. (One called the idea of a talent exodus “bullshit”; another called it “complete nonsense.”)
A Democratic strategist unaffiliated with the bill wrote to me that the Chamber's polling question was “really badly biased… This is bad polling practice, and I'm actually surprised that after such a loaded question, opposition to the bill was only 46 percent.”
In response to the poll, Wiener wrote, “This is the most extreme and manipulative push poll question I've ever seen. It describes a made-up bill that nobody wrote.”
Even Politico couldn't ignore how widely other polls diverged on the bill. A just-released poll from the AI Policy Institute (AIPI), a pro-regulation polling organization, put support for the bill at 70 percent. AIPI had earlier conducted two statewide surveys on SB 1047, finding 59 percent support in July and 65 percent in early August.
The author of the Politico newsletter did not respond to questions at the time of publication.
For comparison, here is the first question from the latest AIPI survey:
Some policymakers have proposed Senate Bill 1047 in California, which would require companies developing advanced AI to conduct safety testing and hold AI model developers liable if their AI models cause catastrophic harm and they fail to take appropriate precautions.
The Democratic strategist wrote that the AIPI poll “at least attempts to present a balanced debate, and while it's debatable which specific pro and con arguments should have been used, it is a much fairer question.”
The lobbying arm of the Center for AI Safety (CAIS), a co-sponsor of SB 1047, conducted its own statewide poll in May and found support for the bill at 77 percent.
Teri Olle, director of Economic Security California Action, one of the bill's co-sponsors, wrote to me that the poll was “questionable at best and malicious at worst,” adding that it “seems more like a flimsy attempt to manipulate public opinion than an honest attempt to gauge it… It is particularly telling that, despite these leading questions, it still failed to garner majority opposition.”
The tech industry isn't the only force taking unusual steps to block the AI safety bill. On August 15, eight California congressional Democrats called on Governor Gavin Newsom to veto it. The following day, former House Speaker Nancy Pelosi issued her own statement against the bill, reportedly the first time in her decades in Congress that she has publicly opposed state legislation authored by a fellow Democrat.
The congressional letter was organized by Rep. Zoe Lofgren, the ranking Democrat on the House Science, Space, and Technology Committee and a senior member of the House Judiciary Committee, and was signed by Reps. Anna Eshoo and Ro Khanna, among others; all three represent Silicon Valley districts.
An analysis of OpenSecrets data on the three lawmakers' top 20 career donors found that, combined, they have received more than $2.7 million from large AI companies and AI investors, plus another $1.5 million from software companies belonging to trade groups that oppose the bill, for a total of $4.2 million (figures include contributions from employees of these companies). That sum accounts for nearly half of the total each lawmaker has received from their top 20 career donors.
Google, which released its own letter opposing SB 1047, was the largest donor to the three lawmakers, giving a combined total of about $1 million. Lofgren's daughter works on Google's legal team, and as The American Prospect and The Intercept have previously reported, Lofgren has been a leading opponent of bills that would regulate big tech companies.
The congressional letter mirrored much of the language and arguments used in letters from lobbyists for big tech companies and industry groups.
According to Nancy Pelosi's 2023 financial disclosure, her family holds between $16 million and $80 million in stock and options in Amazon, Google, Microsoft, and NVIDIA. In June and July 2024, her husband, Paul Pelosi, purchased 20,000 shares of NVIDIA stock for an estimated total of $2.4 million; also in July, he sold 5,000 shares of Microsoft stock for an estimated total of $2.1 million.
Andreessen Horowitz (aka a16z), arguably the richest venture capital firm in the world, is spearheading an all-out campaign to kill SB 1047. Fast Company reported that the firm has hired Jason Kinney to lobby Governor Newsom to veto the bill. Kinney is best known as the friend whose French Laundry birthday dinner Newsom infamously attended in violation of the state's COVID rules.
Y Combinator (YC), the prestigious startup accelerator formerly run by OpenAI CEO Sam Altman, has hired its own lobbyist with close ties to Newsom and launched its first formal lobbying campaign in California.
A16z and YC position themselves as champions of “little tech,” but both have invested in OpenAI, which opposes SB 1047. A16z also invested early in Facebook, and firm co-founder Marc Andreessen sits on the board of Meta, Facebook's parent company.
Because SB 1047 has overwhelming support in the California Legislature, opponents see a veto from Governor Newsom as their last, and best, chance of stopping the regulation.
There's recent precedent for this: a California measure to tax big tech platforms to fund local journalism was watered down in a last-minute backroom deal engineered by Google, which replaced the tax with voluntary contributions from Google and the state, much of which will go to a “national AI accelerator.”
Opposition from the AI industry has been fierce, but not entirely uniform: after SB 1047 was amended to address many of Anthropic's concerns, the leading AI company released a letter of qualified support for the bill, writing that the updated version's “benefits are likely to outweigh the costs.”
And in a last-minute move that surprised many, Elon Musk endorsed SB 1047 on Monday. Musk has publicly worried about the risks of AI for at least a decade (though he also co-founded OpenAI and now leads xAI). Dan Hendrycks, the director of CAIS and a safety advisor to xAI who helped write the bill, wrote in TIME that Musk's endorsement came just days after Hendrycks discussed the bill with him. (The endorsement was evidently not driven by personal loyalty to Wiener, whom Musk called a “pedophile apologist” in July.)
In the US, the technology has largely been left to self-regulate, with mixed results. Over tacos in San Francisco in February, a senior safety researcher at a major AI company told me that the self-governance approach taken by OpenAI and Anthropic works until we get close to human-level AI, at which point competitive pressures win out. And whether or not they're right, the big players say human-level AI could arrive within five years. The leaders of all these companies, along with many pioneers of deep learning, also say that human-level AI could lead to extinction.
A researcher at a major AI company wrote to me that the terms SB 1047 requires actually aren't that bad. But turning non-binding commitments into enforceable ones sets an important precedent that many in the industry want to avoid at all costs.
“It's not surprising that the public doesn't trust big tech companies to self-regulate,” Olle wrote to me. “They don't even have the capacity to ask honest questions.”
A previous version of this article suggested that a16z's partners had early access to the polling data; that was not the case. We regret the error.