Hello and welcome to Eye on AI.
Anticipation. That’s the title of a great old Carly Simon song. And that’s the vibe of today’s newsletter. There’s a lot of hotly awaited AI news this week.
Nvidia to report earnings
Markets will be watching Nvidia’s earnings announcement tomorrow to gauge the health of the AI boom. According to news reports, the company is expected to say that quarterly sales have more than doubled, even though the year-on-year revenue growth rate has slowed. Investors will be particularly keen to learn whether rumors are true that the company’s next-generation Blackwell AI chip will be delayed due to supply chain issues, and if so, how long the delays will be and how they will affect Nvidia’s revenue forecasts.
With Nvidia’s stock having largely recovered from late July’s big sell-off and now trading back close to record highs, there could be serious stock market trouble if Nvidia announces significant lags in Blackwell shipments or if there are other negative surprises in the company’s earnings. Because Nvidia is now seen as a bellwether for the entire AI boom—skeptics would say “bubble”—the fallout could spread well beyond Nvidia to other Magnificent Seven stocks and perhaps the wider S&P 500, of which Nvidia now constitutes 6.5% due to its $3.1 trillion market cap.
California’s AI bill comes to a vote
The other eagerly anticipated AI news of the week is the fate of California’s proposed AI regulation, SB 1047, which is expected to come to a vote in the State Assembly sometime this week. The bill is designed to head off catastrophic risks from the largest, most powerful AI models—those that would cost more than $100 million to train—but has proved controversial, as my Fortune colleagues Sharon Goldman chronicled last month and Jenn Brice laid out in a short explainer we published yesterday. AI’s biggest names have lined up on opposite sides of the debate. AI godfathers Geoff Hinton and Yoshua Bengio support the bill as “a positive and reasonable step,” while fellow Turing Award winner Yann LeCun opposes it as likely to stifle innovation, as do AI pioneers Andrew Ng and Fei-Fei Li. Elon Musk has come out in support, while most of Silicon Valley’s leading venture capital firms and top AI companies such as OpenAI and Microsoft, as well as Google and Meta, are against.
Thanks to lobbying by technology companies, the bill the California Assembly will vote on has already been watered down significantly from earlier versions. As originally proposed by State Sen. Scott Wiener, the bill would have created a legal duty of care on the part of AI developers to ensure their models do not result in what the bill calls “critical harms”—a term it defines as causing a chemical, biological, or nuclear attack that results in mass casualties, autonomously causing mass casualties in some other way, autonomously committing felonies that result in $500 million in damage, or carrying out a cyberattack that causes that amount of damage. Tech companies building AI systems would have been required to institute safety procedures to prevent their models from causing these harms, and to prevent anyone from modifying the models after training in ways that could cause them. Model developers would also have had to retain the ability to fully shut down a model if it could cause serious problems. A new state agency would have been set up to ensure compliance with the law, and California’s attorney general would have been able to sue companies for negligence if the agency determined the correct protocols were not being followed, even before a model was trained and deployed.
The version coming to a vote this week no longer establishes the new state AI regulator and no longer lets the attorney general act in advance of any actual incident. Instead, the Attorney General’s office will take on much of the compliance-monitoring role that the AI agency was to have performed in the original version. AI developers will have to hire an outside auditing firm to ensure compliance, and that firm will submit annual reports to the AG’s office. But law enforcement can sue AI developers only after a catastrophic incident has occurred.
Still, if SB 1047 passes it will be a watershed moment for AI regulation in the U.S., which has so far lagged the European Union—as well as China—in passing laws governing the training and use of AI. In the absence of Congress passing any AI laws—something which won’t happen until well after the next election—the California law may become a de facto national standard due to the presence of so many tech companies in the state.
If nothing else, the debate over the bill has been clarifying. As an article in the liberal political journal The Nation noted, SB 1047 has been a “mask-off moment” for the AI industry. It is ironic—and telling—to see companies such as OpenAI, whose CEO Sam Altman went before Congress and practically begged for AI regulation, or Microsoft, which has proposed AI model developers institute extensive know-your-customer requirements that are not dissimilar to those contained in SB 1047, line up to oppose the bill. If we ever thought maybe these guys were sincere when they said publicly that they wanted to be regulated, now we know the truth. We should never have given them the benefit of the doubt.
It’s perhaps revelations such as this that have disillusioned many of those working on AI safety inside top AI companies. It turns out that a large portion of the AI safety researchers at OpenAI have departed the company in recent months, according to reporting from my fellow Fortune AI reporter Sharon.
Whether or not we believe that AI models powerful enough to cause significant, large-scale harm are close at hand, the departure of these researchers should trouble us because of what it may say about how cautious and safety-minded OpenAI and other companies are being about the models they are releasing now. To date, some of the best methods for limiting near-term risks from AI models—such as their tendency to spew toxic language or encourage users to harm themselves—have come from AI safety researchers thinking about how to control future superpowerful AI.
As for regulation, I’m generally in favor of steps that would ensure AI doesn’t cause significant harm. But I don’t think state-by-state regulation makes much sense. Instead, we urgently need national rules and, probably, a national AI regulator similar to the state-level one originally proposed in the California bill. But we’ll see if we wind up getting one.
Jeremy Kahn
[email protected]
@jeremyakahn
Before we get to the news: if you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my new book, Mastering AI: A Survival Guide to Our Superpowered Future. It’s out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.
A special digital issue of Fortune
The best stories of July and August from Fortune, including a radical overhaul at a private equity titan, a crisis for the First Family of poultry, and more.
— KKR’s co-CEOs want to reach $1 trillion in assets by 2030. To do so, they’re willing to make big bets and leave the PE firm’s old ways behind. Read more.
— John Randal Tyson was set up to run his family’s $21 billion chicken empire. His erratic behavior could change that. Read more.
— Jeff Bezos’s famed management rules are slowly unraveling inside Amazon. Read more.
— A 25-year-old crypto whiz kid went from intern to president of Jump Trading’s crypto arm. Then he became the fall guy. Read more.
— An inside look at a secretive investment firm that counts some of the wealthiest Americans as clients and some of Silicon Valley’s most powerful figures as advisors. Read more.
— Can you quit Ozempic and stay thin? These startups say you can—but doctors say that’s an unproven claim. Read more.
AI IN THE NEWS
OpenAI reportedly in push to launch new reasoning model this fall. That’s according to a story in tech publication The Information, which cited two unnamed sources it said had been involved in developing the model, codenamed “Strawberry.” The model, the publication said, is the same one mentioned in earlier reports under the name Q* (pronounced Q-star) and is thought to be better able to reason its way through tough problems. OpenAI employees tested its capabilities by having it solve the New York Times’ “Connections” word-association game, which has stumped other AI systems, The Information reported. The Strawberry model could be released, perhaps as part of an update to ChatGPT, as soon as this fall, the publication said. It also said OpenAI was using Strawberry to help train an even larger, more powerful successor to GPT-4, codenamed Orion. The company is also looking to raise new funding, The Information said, and Strawberry’s release could be part of an effort to convince investors that OpenAI can stay at the forefront of the AI race.
A growing number of Fortune 500 companies see AI regulation as a business risk. In recent Securities and Exchange Commission filings, 27% of Fortune 500 companies cited AI regulation as a risk, according to the Wall Street Journal. The paper quoted executives at these large corporations saying that uncertainty about future AI rules was slowing AI adoption and that new regulations could impose significant compliance costs.
Chinese export controls on raw materials used in semiconductors could restrict AI chip supplies. China’s limits on the export of two metals, germanium and gallium, both used in semiconductor manufacturing as well as in military communications equipment, could throttle the production of AI chips, the Financial Times reported. China, which produces 98% of the world’s gallium and 60% of the world’s germanium, imposed the export controls on the two metals last year. While Beijing said the move was for national security reasons, it was widely interpreted as retaliation for U.S. restrictions on the supply of advanced computer chips to China. The export controls have driven the prices of the two metals up sharply and are stoking supply chain fears.
U.K. unions are gearing up to push for AI regulation and reskilling funds. Accord, which represents banking workers, plans to call on employers to fund major reskilling programs for workers as they rush to embrace AI software that could displace jobs, according to a story in the Financial Times. Meanwhile, the newspaper says, Unite, one of Britain’s largest unions—representing employees in construction, manufacturing, logistics, and transport—is also likely to back calls for the British government to enact rules around how employers use AI that would seek to limit any negative effects on employment and inequality from the technology. The unions are likely to have a larger influence on technology policy because the U.K.’s new Labour Party-led government draws much of its support from organized labor.
EYE ON AI RESEARCH
AI could expose people to new possibilities, or make their worlds much narrower. One of the things that most worries me about a future in which AI agents and digital assistants increasingly mediate our interactions with the digital world is whether our reliance on these AI helpers will simply reinforce and exacerbate filter bubbles, narrowing our worldview and our choices. In a recent paper published in the Journal of the Association for Consumer Research, a large group of business school professors looked at how AI is likely to increasingly limit consumer choice. The researchers grouped the causes of this narrowing into three main areas.
One is that people will simply defer product research and buying decisions to future AI agents. This “agency transference,” as the researchers called it, will inherently make it less likely that consumers seek out additional choices. They may, in fact, gradually lose the research skills that today’s digitally savvy shoppers possess.
Another issue is that AI systems, by their very nature, tend to return answers from the median of the available data distribution. This means the tails of the distribution tend to be cut off, narrowing the possible range of choices. The researchers called this problem parametric reductionism (a toy simulation of the narrowing effect appears at the end of this item).
Finally, there is the risk that people’s interactions with AI assistants and chatbots will be limited, either by deficiencies in how the technology works (resulting in people expressing themselves in simpler terms than they would when speaking to a human) or because people generally don’t feel able to communicate their full range of experiences and personalities online. (Although it should be noted that several studies have found people will often confide more truthfully in an AI chatbot than in a person.) The researchers called this problem constrained expression.
It’s possible to envision AI systems that overcome these three challenges, deliberately exposing us to new and different perspectives and experiences. But such systems would have to be very carefully designed to do so. Otherwise, we may find ourselves inhabiting ever smaller worlds of information and ideas. You can read the business professors’ research here.
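To make the “parametric reductionism” idea a bit more concrete, here is a toy simulation in Python. It is not drawn from the paper itself; the catalog size, the Zipf-style popularity weights, and the top-20 cutoff are all invented for illustration. It simply compares how many distinct products shoppers encounter when they browse a long-tailed catalog on their own versus when an assistant only ever surfaces items from the center of the popularity distribution.

```python
# Toy illustration of "parametric reductionism": an assistant that only
# recommends items near the center of the popularity distribution exposes
# users to far fewer distinct choices than unassisted browsing.
# All numbers here are invented for illustration.
import random
from collections import Counter

random.seed(0)

N_ITEMS = 1_000   # size of the hypothetical catalog
N_PICKS = 10_000  # total purchases to simulate

# Long-tailed popularity: a few hits, many niche items (Zipf-like weights).
weights = [1 / (rank + 1) for rank in range(N_ITEMS)]

# Scenario A: shoppers browse the whole catalog on their own, sampling
# in proportion to item popularity (tail items still get picked sometimes).
own_browsing = random.choices(range(N_ITEMS), weights=weights, k=N_PICKS)

# Scenario B: an AI assistant that only ever surfaces the 20 most popular
# items, i.e. it answers from the center of the distribution and cuts off
# the tail.
TOP_K = 20
assistant_picks = random.choices(range(TOP_K), weights=weights[:TOP_K], k=N_PICKS)

print("distinct items seen, own browsing:   ", len(Counter(own_browsing)))
print("distinct items seen, with assistant: ", len(Counter(assistant_picks)))
```

Run as written, the unassisted scenario surfaces hundreds of distinct items while the assistant scenario can never surface more than 20. The point is simply that an answer engine anchored to the center of the distribution mechanically shrinks the choice set, regardless of how helpful each individual answer feels.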
FORTUNE ON AI
Exclusive: Microsoft-backed Space and Time raises $20 million to merge AI and blockchain —by Leo Schwartz
The hidden reason AI costs are soaring—and it’s not because Nvidia chips are more expensive —by Sharon Goldman
Exclusive: Ambience Healthcare’s AI technology launches at John Muir Health —by Allie Garfinkle
$5.3 billion sale of Darktrace to move forward despite tragic yacht death of founding investor Mike Lynch —by Luisa Beltran
AI CALENDAR
Aug. 28: Nvidia earnings
Sept. 10-11: The AI Conference, San Francisco
Sept. 10-12: AI Hardware and AI Edge Summit, San Jose, Calif.
Sept. 17-19: Dreamforce, San Francisco
Sept. 25-26: Meta Connect, Menlo Park, Calif.
Oct. 22-23: TedAI, San Francisco
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago, Ill.
Dec. 2-6: AWS re:Invent, Las Vegas, Nev.
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
BRAIN FOOD
Why fears of ‘model collapse’ are driving demand for data licensing. The New York Times ran an excellent piece looking at the phenomenon of “model collapse.” That’s what happens when an AI model is successively trained only on its own outputs. Because these models tend to draw most heavily from the modes of a data distribution, leaving off the tails, the distribution gets narrower and narrower over several generations of training, eventually collapsing entirely to a single example that the model thinks is paradigmatic but that humans recognize as completely divorced from the intended outputs. The Times article is worth reading because it contains some great visual examples of this using fairly simple datasets. But the newspaper also notes that the same problem will affect much more complex models as more and more synthetic data creeps into their training—something that is already happening as AI-generated content increasingly populates the internet.
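For readers who want to see the dynamic in miniature, here is a rough sketch of that feedback loop. It is a toy one-dimensional analogy, not a reproduction of the Times’ examples or the underlying research; the Gaussian starting data, the 2-sigma cutoff, and the generation counts are all assumptions made for the demo. Each generation, a simple “model” (just a mean and standard deviation) is fit to the previous generation’s outputs and then generates new data that favors typical examples over the tails, so the fitted spread shrinks steadily.

```python
# Minimal sketch of the "model collapse" dynamic: each generation, a toy
# model is fit to the previous generation's outputs and then samples new
# data while leaving off the tails. The fitted distribution narrows
# generation after generation. Illustrative analogy only; all parameters
# are invented for the demo.
import random
import statistics

random.seed(0)

SAMPLES = 1_000   # outputs generated per generation
CUTOFF = 2.0      # the "model" rarely reproduces examples beyond 2 sigma
GENERATIONS = 10

# Generation 0: real, human-made data (standard normal for simplicity).
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]

for gen in range(1, GENERATIONS + 1):
    # "Train" on the previous generation's output.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"generation {gen:2d}: fitted std = {sigma:.3f}")

    # Generate the next generation's training set, drawing mostly from
    # around the mode and discarding rare, tail examples.
    data = []
    while len(data) < SAMPLES:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= CUTOFF * sigma:
            data.append(x)
```

With these made-up settings, the fitted standard deviation drops from roughly 1.0 toward about 0.3 within ten generations; the lost tail examples never come back, which is the essence of why untreated synthetic data worries model builders.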
One thing the paper mentions but doesn’t explore in depth is how fears that synthetic data will lead to model collapse are pushing AI developers to seek new sources of real-world data. And at this point, the best sources of real-world data are often unavailable on the public internet—which has already been mined for AI training and which increasingly contains low-quality, AI-generated examples. Instead, companies are having to look to privately held data, which by its nature means they have to pay to license it. This is creating an interesting new market for data licensing—and interesting new revenue streams for organizations sitting on large amounts of private data.