The state of the art in artificial intelligence is advancing at breakneck speed, and the U.S. government is struggling to keep up. As someone who works on AI policy in Washington, DC, I can say that we need a clear view of cutting-edge AI systems before we can decide how to govern them. Right now, we're navigating in a fog.
My role as an AI Policy Fellow at the Federation of American Scientists (FAS) involves developing bipartisan ideas to improve the government’s ability to analyze current and future systems. In this work, I engage with experts from government, academia, civil society, and the AI industry. What I’ve learned is that there is no broad consensus on how to manage the potential risks of groundbreaking AI systems without stifling innovation. However, there is broad agreement that the U.S. government needs better information about AI companies’ technologies and practices and a greater ability to respond to both catastrophic and more insidious risks as they arise. Without detailed knowledge of modern AI capabilities, policymakers cannot effectively assess whether current regulations are sufficient to prevent misuse or mishaps, or whether companies need to take additional steps to secure their systems.
When it comes to nuclear power and aviation safety, the federal government requires timely information from private companies in those industries to protect the public good. We need similar insight into the emerging field of AI. Without it, this information gap could expose us to unforeseen national security risks or lead to overly restrictive policies that stifle innovation.
Encouragingly, Congress has been gradually working to improve the government's ability to understand and respond to new developments in AI. Since ChatGPT's debut in late 2022, lawmakers in both parties and both chambers of Congress have taken AI more seriously. The House of Representatives formed a bipartisan AI Task Force with instructions to balance innovation, national security, and safety. Senate Majority Leader Chuck Schumer (D-NY) held a series of AI Insight Forums to gather outside input and lay the foundation for AI policy. These events fed into the bipartisan Senate AI Working Group's AI Roadmap, which outlined areas of agreement, such as “developing and standardizing risk testing and assessment methodologies and mechanisms” and an AI-focused Information Sharing and Analysis Center.
Several bills have been introduced to increase information sharing on AI and strengthen the government's ability to respond. The Senate's bipartisan AI Research, Innovation, and Accountability Act would require companies to submit risk assessments to the Department of Commerce before deploying AI systems that could affect critical infrastructure, criminal justice, or biometrics. Another bipartisan bill, the VET AI Act, which FAS has endorsed, proposes a system in which independent assessors would audit and verify AI companies' compliance with established guidelines, similar to existing practice in the financial industry. Both bills passed the Senate Commerce Committee in July and could come up for a vote in the full Senate before the 2024 elections.
Other parts of the world are also seeing promising developments. In May, the governments of the UK and South Korea announced at the AI Seoul Summit that most of the world's leading AI companies had agreed to a new set of voluntary safety commitments. These include building on the responsible scaling policies the companies pioneered last year, which provide a roadmap for mitigating risk as AI capabilities advance, and identifying, assessing, and managing the risks of developing cutting-edge AI models. The developers also agreed to be transparent about their approach to cutting-edge AI safety, including “sharing with trusted parties, including their respective home governments, more detailed information that cannot be shared publicly.”
However, these commitments lack enforcement mechanisms and standardized reporting requirements, making it difficult to assess whether companies are honoring them.
Some industry leaders have spoken out in support of increased government oversight. OpenAI CEO Sam Altman emphasized this point in congressional testimony early last year, saying, “We think there could be some pretty bad outcomes if this technology goes wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.” Anthropic CEO Dario Amodei took this idea a step further, expressing that after the release of Anthropic's responsible scaling policy, he hopes the government will turn elements of the policy into a “well-designed testing and audit regime with accountability and oversight.”
Despite these positive signs from Washington and the private sector, significant gaps remain in the U.S. government's ability to understand and respond to rapid advances in AI technology. Three critical areas require urgent attention: safeguards for independent AI safety research, an early warning system for emerging AI capabilities, and a comprehensive reporting mechanism for real-world AI incidents. Addressing these gaps is key to protecting national security, fostering innovation, and ensuring that AI development advances the public good.
A safe harbor for independent AI safety research
AI companies often threaten to block or ban researchers from using their products if they find safety flaws, creating a chilling effect on important independent research. This leaves the public and policymakers in the dark about the potential dangers of widely used AI systems, including threats to U.S. national security. Independent research is crucial because it serves as an external check on the claims of AI developers and helps identify risks and limitations that may not be apparent to the companies themselves.
One key proposal to address this issue is for companies to provide legal safe harbors and financial incentives for good-faith research into AI safety and reliability. Congress could offer bug bounties to AI safety researchers who identify vulnerabilities and extend legal protections to experts who study AI platforms, similar to the protections proposed for social media researchers in the Platform Accountability and Transparency Act. In an open letter earlier this year, more than 350 leading researchers and advocates called on companies to grant such protections to safety researchers, but none have done so yet.
With these protections and incentives in place, thousands of American researchers would be empowered to stress-test AI systems and evaluate AI products in something close to real time. The U.S. AI Safety Institute included similar protections for AI researchers in its draft guidelines on managing misuse risks of dual-use foundation models, and Congress should consider codifying these best practices.
An early warning system for emerging AI capabilities
The U.S. government has limited means of identifying and addressing potentially dangerous capabilities of cutting-edge AI systems, and it could be overwhelmed if new AI capabilities continue to proliferate rapidly. Knowledge gaps between industry and government leave policymakers and security agencies unprepared to address emerging AI risks. Worse, the effects of this asymmetry will only grow over time as AI systems become riskier and more widely used.
Establishing an AI early warning system would give the government the information it needs to stay ahead of AI threats. Such a system would create a formal channel for AI developers, researchers, and other stakeholders to report to the government AI capabilities with both civilian and military applications, such as those relevant to biological weapons research or enhanced cyberattacks. The Commerce Department's Bureau of Industry and Security would act as a clearinghouse, receiving, triaging, and forwarding these reports to other relevant agencies.
This proactive approach would keep government officials up to date on the latest AI capabilities and help them assess whether current regulations are sufficient or new safeguards are needed. If advances in AI systems increased the risk of a biological attack, for example, the relevant agencies could be alerted promptly and respond in time to protect the public.
Reporting mechanisms for real-world AI incidents
The U.S. government currently lacks a comprehensive understanding of the harmful effects of AI systems, hindering its ability to identify risky usage patterns, evaluate government guidelines, and respond effectively to threats. This blind spot leaves policymakers ill-prepared to develop timely, informed responses.
Establishing a voluntary National AI Incident Reporting Hub would create a standardized channel for companies, researchers, and the public to confidentially report AI incidents, such as system failures, accidents, misuse, and potential hazards. The hub would be housed at the National Institute of Standards and Technology, avoiding new mandates while leveraging the agency's existing expertise in incident reporting and standards setting, an approach that would encourage industry to participate.
This real-world data on adverse AI events, combined with forward-looking capability reporting and researcher protections, would enable the government to develop more informed policy responses to emerging AI issues and help developers better understand the threats they face.
These three proposals balance oversight and innovation in AI development. Encouraging independent research and increasing government visibility into AI capabilities and incidents can support both safety and technological advancement. The government can foster public trust and accelerate AI adoption across sectors while avoiding the regulatory backlash that often follows preventable high-profile incidents. Policymakers can craft targeted regulations that address specific risks, such as AI-enhanced cyber threats and potential misuse in critical infrastructure, while maintaining the flexibility needed for continued innovation in areas like medical diagnostics and climate modeling.
Passing legislation in these areas will require bipartisan cooperation in Congress. Stakeholders from industry, academia, and civil society must engage in the process, advocating for these policies and providing the expertise needed to refine and implement them. There is only a short window left in the 118th Congress to attach AI transparency policies to must-pass legislation such as the National Defense Authorization Act. Time is running short, and swift, decisive action now would set the stage for better AI governance for years to come.
Imagine a future where governments have the tools to understand and responsibly guide AI development, harnessing AI's potential to solve grand challenges while avoiding risks. This future is within reach, but we need to act now to articulate a shared vision for how AI should be developed and used. Improving our shared understanding and oversight of AI will increase the chances of steering this powerful technology in a direction that benefits society.