OpenAI has slammed a California bill aimed at ensuring the safe deployment of powerful artificial intelligence, suggesting new regulations would threaten its growth in the state and joining a last-minute lobbying effort by investors and AI groups to block the bill.
In a letter to California state Sen. Scott Wiener, the bill's author, the company's chief strategy officer Jason Kwon wrote that the bill, SB 1047, threatens “California's unique position as a global leader in AI.”
“The pace of innovation could slow and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunities,” he added.
SB 1047 has divided Silicon Valley. While the need to curb the risks of powerful new AI models is widely accepted, critics argue Wiener's proposal would stifle startups, benefit U.S. rivals and undermine California's central position in the AI boom.
OpenAI is the latest AI company to speak out against elements of the bill, and it is the most visible due to the popularity of its ChatGPT chatbot and a $13 billion commitment from partner Microsoft.
OpenAI supports regulations to ensure AI systems are developed and deployed safely, but argues in the letter, first reported by Bloomberg, that the laws should be enacted by the federal government, not states.
In his response Wednesday, Wiener said he agrees the federal government should take the lead but is “skeptical” that Congress will act. He also criticized the “stale argument” that tech startups will relocate if the bill passes, saying companies based out of state would still have to comply with the bill to do business in the state.
The California Legislature is scheduled to vote on the bill by the end of the month. If it passes, Governor Gavin Newsom will then have the choice to sign it into law or veto it.
Silicon Valley tech groups and investors, including Anthropic, Andreessen Horowitz and Y Combinator, have stepped up lobbying against Wiener's proposed strict safety framework. Former House Speaker and California Representative Nancy Pelosi also issued a statement last week opposing the bill, calling it “well-intentioned but ill-informed.”
Among the most controversial elements of the senator's original proposal were requirements that AI companies guarantee to a new government agency that they would not develop models with “dangerous capabilities,” and that they create a “kill switch” to shut down powerful models.
Opponents argued that the bill focuses on hypothetical risks and exposes founders to “extreme” liability.
The bill was amended last week to relax some of its requirements, including limiting the civil liability originally imposed on AI developers and narrowing the scope of those who must comply with the rules.
But critics argue the bill still imposes burdensome and sometimes unrealistic requirements on startups. In a letter to California Assembly Speaker Robert Rivas on Monday, U.S. Representatives Anna Eshoo and Zoe Lofgren said “the bill's underlying structure remains deeply flawed” and urged him to instead focus on “federal regulations governing the physical tools needed to create these physical threats.”
Despite criticism from leading AI figures including Stanford University's Fei-Fei Li and Andrew Ng, who led AI projects at Alphabet's Google and China's Baidu, the bill has won support from some “godfathers of AI,” including the University of Toronto's Geoffrey Hinton and University of Montreal computer science professor Yoshua Bengio.
“In short, SB 1047 is a very reasonable bill that would require large AI labs to do what they've already committed to doing: test large-scale models for catastrophic safety risks,” Wiener wrote Wednesday.