In the wake of growing concerns within the industry about the safety and ethics of artificial intelligence (AI), two of the most highly valued AI startups, OpenAI and Anthropic, have agreed to have their new models tested by the U.S. AI Safety Institute before releasing them to the public.
The institute, which is housed within the Commerce Department's National Institute of Standards and Technology (NIST), "will have access to key new models from each company both before and after their public release," according to a press release.
The institute was established after the Biden-Harris administration issued the U.S. government's first executive order on artificial intelligence in October 2023, mandating new safety assessments, fairness and civil rights guidelines, and research on AI's impact on the labor market.
"We are pleased to have reached an agreement with the U.S. AI Safety Institute for pre-launch testing of future models," OpenAI CEO Sam Altman said in a post on X. OpenAI also confirmed to CNBC on Thursday that the company has doubled its weekly active users to 200 million over the past year. Axios was first to report the figure.
The news comes a day after reports surfaced that OpenAI was in talks to raise a funding round that would value the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a source familiar with the matter who asked not to be identified because the details are confidential.
Founded by former OpenAI research executives and employees, Anthropic was last valued at $18.4 billion. Anthropic counts Amazon as a lead investor, while OpenAI has strong backing from Microsoft.
According to Thursday's announcement, the agreement between the government, OpenAI and Anthropic “will enable joint research into how to assess functionality and safety risks, and how to mitigate those risks.”
"We strongly support the U.S. AI Safety Institute's mission and look forward to working together to develop best practices and standards for AI model safety," OpenAI chief strategy officer Jason Kwon told CNBC.
Anthropic co-founder Jack Clark said the company's work with the U.S. AI Safety Institute will "leverage their extensive expertise to rigorously test our models before they are widely deployed" and "strengthen our ability to identify and mitigate risks, advancing responsible AI development."
Many AI developers and researchers have raised concerns about the safety and ethics of an increasingly commercialized AI industry. Current and former OpenAI employees published an open letter on June 4 describing potential problems with the rapid advancement of AI and the lack of oversight and whistleblower protections.
“We believe that AI companies have strong economic incentives to avoid effective oversight, and that bespoke structures of corporate governance will not be enough to change this,” they wrote, adding that AI companies “cannot be expected to voluntarily share information” because “they currently have only limited obligations to share some of this information with governments and no obligations to share it with civil society.”
Days after the letter was released, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice plan to launch an antitrust investigation into OpenAI, Microsoft, and Nvidia. FTC Chair Lina Khan described the agency's action as a "market investigation into investments and alliances being formed between AI developers and large cloud service providers."
The California Assembly passed an AI safety bill on Wednesday and sent it to Governor Gavin Newsom, who has until Sept. 30 to decide whether to sign or veto it. The bill would require safety testing and other safeguards for AI models that exceed certain cost or computing-power thresholds, and it has been opposed by some technology companies, which say it could slow innovation.