OpenAI and Anthropic have agreed to make major new AI models available to the U.S. government prior to their release to help improve safety.
The two companies signed a memorandum of understanding with the U.S. AI Safety Institute to provide access to the models before and after they are released, the agency said Thursday. The government said the move will allow the companies to work together to assess safety risks and mitigate potential problems. U.S. officials said they would work with their British counterparts to provide feedback on safety improvements.
Sharing access to AI models is an important step at a time when federal and state legislatures are considering how to put guardrails on the technology without stifling innovation. On Wednesday, the California Legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies in California to take certain safeguards before training advanced foundation models. The bill drew pushback from AI companies, including OpenAI and Anthropic, over possible negative impacts on smaller open-source developers; it has since been amended and is awaiting Governor Gavin Newsom's signature.
Elizabeth Kelly, director of the U.S. AI Safety Institute, said in a statement that the new agreement "is just the beginning, but marks an important milestone as we work to responsibly steward the future of AI."