The United States' first attempt to codify AI regulation has received strong support at a key juncture.
Elon Musk, CEO of Tesla and founder of xAI, the parent company of the Grok chatbot, has thrown his support behind California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (Senate Bill 1047).
If the bill passes the state Legislature before the session ends this week and is then signed by Gov. Gavin Newsom, it would put the first guardrails on the technology. The bill would require developers to create safety protocols and build in the ability to shut down runaway AI models, mandate the reporting of security incidents, grant rights to whistleblowers within AI companies, require companies to take steps to prevent their AI from being abused by malicious hackers, and hold companies liable if their AI software gets out of control.
But venture capitalists like Marc Andreessen oppose it, and it is hotly debated among AI luminaries: Yann LeCun, chief AI scientist at Meta, opposes the bill, while Geoffrey Hinton, a co-author of AlexNet, supports it.
"This is a tough call and will make some people upset, but, all things considered, I think California should probably pass SB 1047, the AI safety bill," Musk posted on Monday, citing the "potential risk to the public" posed by AI.
This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.
For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.
— Elon Musk (@elonmusk) August 26, 2024
The only regulatory framework that exists so far covers only the very largest models, those trained using more than 10^26 floating-point operations and costing over $100 million to train. And it is not codified federal law but an executive order from the Biden administration, one that could easily be rescinded by his successor next year.
The bill would at least partially remedy that, giving big tech companies like Microsoft-backed OpenAI, Amazon-backed Anthropic, and Google some legal clarity, even if they don't necessarily agree with it.
“SB 1047 is a straightforward, commonsense, lightweight bill that builds on President Biden's executive order,” California Sen. Scott Wiener, the bill's sponsor, said earlier this month.
California's final week to pass legislation before the Legislature's term ends
If any state were to take on that role, California would make the most sense. Its $4 trillion economy is roughly the size of Germany's or Japan's in absolute terms, thanks largely to Silicon Valley's thriving technology sector. Arguably, the state has done a far better job of fostering innovation than either of those G7 nations.
Wiener told Bloomberg Television he was sympathetic to the argument that Washington should take the lead, but cited a range of issues on which Congress has consistently failed to act decisively, including data privacy, social media, and net neutrality.
"I agree that this should be addressed at the federal level," Wiener told the network on Friday. "Congress has a very bad track record on regulating the tech sector, and I don't see that changing, so California should take the lead."
Great discussion, @AndrewYNg. We oppose SB1047, the disgraceful California regulation that will effectively kill open source AI and significantly slow or stop AI innovation. https://t.co/pZuLUXYCLR
— Yann LeCun (@ylecun) July 12, 2024
This month is the last chance for SB 1047 to pass; after this weekend, the Legislature recesses ahead of the November elections. Even if it passes, it would still need Governor Newsom's signature by the end of September. Last week, a group of members of the U.S. House of Representatives urged the governor to veto the bill if it reaches his desk.
But regulating technology can be a futile effort: policy always lags behind the pace of innovation, and interfering in the free market can unintentionally stifle it. That is the central criticism of SB 1047.
Former OpenAI researcher reveals colleagues are giving up
Just a year ago, leaders of the big tech companies were largely able to thwart any attempts at outside intervention in the sector. Many policymakers understood that the U.S. was locked in a high-stakes AI arms race with China that neither side could afford to lose. Any constraints the U.S. placed on the domestic industry could tip the scales in Beijing's favor.
But a wave of senior AI safety experts has left OpenAI, the company that started the AI gold rush, raising concerns that executives, including CEO Sam Altman, are sacrificing caution in their efforts to speed up the commercialization of this incredibly expensive technology.
Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune on Monday that nearly half of the company's AI governance staff, disillusioned with its direction, had decided to leave en masse.
"It's just people individually giving up," Kokotajlo said in an exclusive interview. He chose to forfeit his equity in the company rather than sign an extensive non-disclosure agreement that would have barred him from speaking about his former employer.
Musk himself would likely be personally affected by the bill: last year he founded his own artificial general intelligence startup, xAI, and he has just opened a state-of-the-art supercomputing cluster in Memphis, outfitted with AI training chips and staffed largely with experts plucked from Tesla.
But Musk is no ordinary challenger: he co-founded OpenAI in December 2015 and personally recruited the company's former chief scientist. The Tesla CEO later feuded with Altman and ultimately sued the company not once but twice.