In a new letter, OpenAI chief strategy officer Jason Kwon argues that AI regulation should be left to the federal government. As previously reported by Bloomberg, Kwon said a new AI safety bill under consideration in California could slow progress and cause companies to leave the state.
"A federally led set of AI policies, rather than a patchwork of state laws, would foster innovation and position the United States to lead the development of global standards," Kwon wrote. "That's why we join other AI labs, developers, experts, and members of the California Legislative Delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns."
The letter was addressed to California Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
According to supporters like Wiener, the bill would establish standards before the development of more powerful AI models, require precautions such as pre-deployment safety testing, add whistleblower protections for AI lab employees, give the California Attorney General the authority to take legal action if an AI model causes harm, and call for the creation of a "public cloud computer cluster" called CalCompute.
In a response to the letter published Wednesday evening, Wiener noted that the argument “makes no sense” because the proposed requirements would apply to all companies doing business in California, regardless of whether they're headquartered there. He also said that OpenAI “has not criticized a single provision of the bill,” concluding that “SB 1047 is a very reasonable bill that would require large AI labs to do what they've already committed to doing: test large-scale models for catastrophic safety risks.”
The bill now awaits a final vote before reaching Gov. Gavin Newsom's desk.
Below is the full text of OpenAI's letter.