If the whole AI thing kind of fizzles out, what does that mean for AI safety?
“Is it all hype and no substance?” is a question many people have been asking about generative AI lately, pointing to delayed model releases, the slow emergence of commercial applications, the success of open source models (which makes it harder to profit from proprietary ones), and the enormous cost of the whole enterprise.
Many of the people declaring AI a bust don't seem to have a clear grasp of the bigger picture. Some of them have long argued that generative AI has no value as a technology, a view that is hard to square with the very real users and uses of AI that already exist.
I also think some people have frankly ridiculous expectations about how quickly commercialization should happen. Even incredibly valuable and promising technologies that ultimately prove transformative take time to go from invention to wildly popular consumer products. (Electricity, for example, took decades after its invention to become widely adopted.) It seems true that “the killer app for generative AI hasn't been invented yet,” but that's not a good reason to assure everyone it won't be invented anytime soon.
But I believe there are sobering ways this could play out that don't rely on misunderstanding or underestimating the technology. It seems likely that the next generation of super-expensive models won't solve the hard problems that would justify their billion-dollar training costs. If that happens, we're in for a less exciting period: more iteration and improvement on existing products, fewer earth-shattering new releases, and less obsessive press coverage.
If this were to happen, it would also likely have a significant impact on attitudes toward AI safety, even though the case for AI safety does not, in principle, depend on the AI hype of the past few years.
The basic case for AI safety is one I've been writing about since long before ChatGPT and the current AI boom. The simple version is this: there's no reason to think AI models that can reason as well as humans, and much faster, are impossible, and we know such models would be enormously valuable commercially if developed. We also know it would be extremely dangerous to develop and release powerful systems that can act independently in the world without oversight or supervision, because we don't yet know how to provide that oversight in practice.
Many engineers working on large language models believe we will soon have systems powerful enough that safety concerns move from theory to practice. They may be right, but they could also be wrong. The view I sympathize with most is that of engineer Alex Irpan: “The current paradigm (simply building larger language models) is unlikely to get us there, but the chance that it does is still higher than I'm comfortable with.”
The next generation of large language models probably won't be powerful enough to be dangerous, but many of the people building them believe it will be, and given the enormous stakes of unchecked AI power, the chance is not so small that it can be casually dismissed. Some oversight is warranted.
How AI safety and AI hype became intertwined
In fact, even if the next generation of large language models isn't significantly better than current models, I expect AI will still change our world, just more slowly. Many ill-conceived AI startups will go out of business and many investors will lose money, but people will keep improving the models at a fairly rapid pace, making them cheaper and ironing out their most troubling flaws.
Even the most vocal skeptics of generative AI, like Gary Marcus, will generally grant that superintelligence is possible; they just expect it to require a new technological paradigm, some way to combine the power of large language models with other approaches that compensate for their shortcomings.
Marcus calls himself an AI skeptic, but it's often hard to find much daylight between his views and those of someone like Ajeya Cotra, who believes powerful intelligent systems may be powered by language models in roughly the sense that a car is powered by an engine, but will require many additional processes and systems to turn their output into something reliable and usable.
People I know who are concerned about AI safety often hope that things go this way. It would mean a little more time to better understand the systems we're creating, and to see the consequences of deploying them before they become more powerful than we can comprehend. AI safety is a set of hard problems, but not unsolvable ones. Given time, we can probably solve them all.
However, my impression from the public conversation about AI is that many people believe “AI safety” is a particular worldview, one inseparable from the AI fever of the past few years. The “AI safety” they have in mind is the claim that superintelligent systems will arrive within a few years, the view advanced in Leopold Aschenbrenner's “Situational Awareness” and fairly common among AI researchers at top companies.
If superintelligence doesn't become a reality in the next few years, we will likely hear many people say, “It turns out we didn't need AI safety after all.”
Focus on the bigger picture
For people investing in AI startups today, it matters enormously whether GPT-5 is delayed by six months or whether OpenAI's next fundraise comes at a reduced valuation.
But I think policymakers and concerned citizens should step back further than that and separate the question of whether current investors' bets will pay off from the question of where we are heading as a society.
Whether or not GPT-5 turns out to be a powerful intelligent system, such systems will be commercially valuable, and thousands of people are working from many different angles to build them. We need to think about how we'll approach such systems and how to ensure they are developed safely.
If a company loudly proclaims that it is going to build a powerful, dangerous system and then falls short, the lesson to take away should not be “we have nothing to worry about” but rather “we're glad to have a little more time to figure out the best policy response.”
As long as people are trying to build extremely powerful systems, safety will matter, and the world cannot afford to be so put off by the hype that it reflexively dismisses the issue.