Hello everyone and welcome to TechCrunch's regular AI newsletter. If you'd like to receive this newsletter in your inbox every Wednesday, sign up here.
Say what you want about generative AI, but it's becoming a commodity. At least, that's how it appears.
In early August, Google and OpenAI both significantly cut the prices of their most budget-friendly text generation models: Google cut the input price (the cost of having the model process text) by 78% and the output price (the cost of having the model generate text) by 71% for Gemini 1.5 Flash, while OpenAI cut the input price of GPT-4o in half and the output price by a third.
By one estimate, the average cost of inference (essentially the cost of running a model) is falling at a rate of 86% per year. So what's driving this?
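To get a feel for how fast an 86% annual decline compounds, here's a quick back-of-the-envelope calculation. The $10 starting price is a made-up illustration, not any vendor's actual rate:

```python
# Illustrative only: compound an 86%/year decline in per-token inference cost.
# The $10.00 starting price is hypothetical, not a real vendor rate.
start_price = 10.00   # cost per million tokens in year 0 (made up)
annual_decline = 0.86

for year in range(4):
    price = start_price * (1 - annual_decline) ** year
    print(f"year {year}: ${price:.2f} per million tokens")
```

At that rate, a $10-per-million-tokens model costs about $1.40 a year later and roughly 20 cents the year after that, which is why today's price cuts look less like promotions and more like a trend line.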
First, feature-wise, there isn't much differentiating the various flagship models.
“Absent a unique differentiator, we expect pricing pressures to continue across all AI models. In the absence of consumption or increased competition, all of these providers will need to price aggressively to retain customers,” said Andy Thurai, principal analyst at Constellation Research.
Gartner VP analyst John Lovelock agrees that commoditization and competition are the cause of the recent downward pressure on model prices. He points out that models have been priced on a cost-plus basis since the beginning, meaning they are priced to recoup the millions spent to train the model (OpenAI's GPT-4 reportedly cost $78.4 million) and the server costs to run it (ChatGPT once cost OpenAI about $700,000 a day). But now data centers have reached a size and scale where they can qualify for discounts.
Vendors such as Google, Anthropic, and OpenAI are employing techniques such as prompt caching and batching to achieve further cost savings. Prompt caching allows developers to save specific “prompt context” that can be reused across API calls to a model, while batching processes asynchronous groups of lower priority (and therefore cheaper cost) model inference requests.
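The two techniques can be sketched conceptually. This is a toy illustration of the ideas, not any vendor's actual API; `run_model` is a hypothetical stand-in for a real inference call:

```python
# Conceptual sketch of prompt caching and batching as cost-saving techniques.
# `run_model` is a hypothetical stand-in for a real inference call.
import hashlib

prompt_cache = {}  # maps a hash of the shared prompt prefix to its processed state


def run_model(prefix: str, question: str) -> str:
    """Pretend inference call; returns a canned answer."""
    return f"answer to {question!r}"


def cached_call(prefix: str, question: str) -> str:
    # Prompt caching: a long, reused system/context prefix is processed once,
    # then looked up by hash on later calls, so only `question` is new work
    # (and vendors bill cached prefix tokens at a lower rate).
    key = hashlib.sha256(prefix.encode()).hexdigest()
    if key not in prompt_cache:
        prompt_cache[key] = prefix  # in a real system: the prefix's precomputed state
    return run_model(prompt_cache[key], question)


def batch_calls(prefix: str, questions: list[str]) -> list[str]:
    # Batching: lower-priority requests are grouped and processed together
    # asynchronously, which vendors price at a discount.
    return [cached_call(prefix, q) for q in questions]


print(batch_calls("You are a helpful assistant...", ["q1", "q2"]))
```

The real savings come from the economics the sketch gestures at: the expensive shared context is paid for once instead of on every call, and deferred batch jobs let vendors fill idle capacity.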
The release of major open models like Meta's Llama 3 may also be influencing vendor pricing. The biggest and most powerful of these aren't necessarily the cheapest to run, but they can be cost-competitive with vendors' products when running on a company's in-house infrastructure.
The question is whether the price declines are sustainable.
Generative AI vendors are burning through cash fast: OpenAI is said to be on track to lose $5 billion this year, while rival Anthropic predicts it will lose more than $2.7 billion by 2025.
Lovelock believes high capital and operational costs may force vendors to adopt entirely new pricing structures.
“With the next-generation model estimated to cost hundreds of millions of dollars to develop, how will cost-plus pricing translate to consumers?” he asked.
We'll find out soon enough.
News
Musk Supports SB 1047: Elon Musk, CEO of X, Tesla, and SpaceX, has voiced his support for SB 1047, a California bill that would require makers of very large AI models to create and document safeguards against those models causing significant harm.
AI Overview is bad at Hindi: Google's AI Overview, which responds to certain search queries with AI-generated answers, makes a lot of mistakes in Hindi, Ivan writes, such as suggesting “sticky things” as something to eat in the summer.
OpenAI Backs AI Watermarking: OpenAI, Adobe, and Microsoft have backed a California bill that would require tech companies to label AI-generated content, with the bill set for a final vote in August, Max reports.
Inflection Adds Cap to Pi: Inflection, the AI startup whose founders and much of whose staff were poached by Microsoft five months ago, plans to limit free access to its chatbot, Pi, as the company shifts its focus to enterprise products.
Stephen Wolfram on AI: Ron Miller interviews Wolfram Alpha founder Stephen Wolfram, who says that the growing influence of AI and the questions it raises will usher in a new “Golden Age” in philosophy.
Waymo drives kids: Alphabet subsidiary Waymo is reportedly considering a subscription program that would let teenagers hail the company's vehicles on their own, with pickup and drop-off notifications sent to their parents.
DeepMind Employees Protest: Some employees at DeepMind, Google's AI research and development division, are reportedly unhappy with reports about Google's defense contracts and have circulated a letter to that effect within the company.
AI Startups Drive SPV Buying: VCs are increasingly buying shares of later-stage startups on the secondary market, often through financial vehicles called special purpose vehicles (SPVs), in an attempt to acquire stakes in the hottest AI companies, Rebecca writes.
Research Paper of the Week
As I've written before, many AI benchmarks don't tell us much: they're too simple, too difficult, or just plain wrong.
Researchers at the Allen Institute for Artificial Intelligence (AI2) and elsewhere have recently released a benchmark called WildVision, specifically aiming to develop better evaluations of vision-language models (VLMs) – models that can understand both pictures and text.
WildVision consists of an evaluation platform that hosts about 20 models, including Google's Gemini Pro Vision and OpenAI's GPT-4o, and a leaderboard that reflects people's preferences in chatting with the models.
In developing WildVision, the AI2 researchers say they found that even the best VLMs hallucinate and struggle with contextual cues and spatial reasoning. “Our comprehensive analysis points to future directions for evolving VLMs,” the researchers wrote in a paper accompanying the release of the test suite.
Model of the Week
While not technically a model, this week Anthropic launched its Artifacts feature for all users, which turns conversations with the company's Claude models into apps, graphics, dashboards, websites and more.
Released in preview in June, Artifacts is now available for free on the web and in Anthropic's Claude apps for iOS and Android, providing a dedicated window into the work created in Claude. Users can publish and remix their artifacts with the wider community, while subscribers to Anthropic's Team plan can share their work in a more locked-down environment.
Michael Gerstenhaber, product lead at Anthropic, explained Artifacts in an interview with TechCrunch: “Artifacts are model outputs that allow us to set aside generated content and allow users to iterate on it. For example, let's say we want to generate code. In that case, the Artifacts are put into the UI, and then we can talk to Claude and iterate on the documentation, improving it so that the code can run.”
Notably, Poe, Quora’s subscription-based cross-platform aggregator of AI models including Claude, has a feature called Previews that is similar to Artifacts. However, unlike Artifacts, Previews is not free and requires you to pay $20 per month for Poe’s premium plan.
Grab Bag
OpenAI may have Strawberry up its sleeve.
This comes from The Information, which reports that the company is about to release a new AI product that can reason about problems better than existing models. Strawberry (previously known as Q*, which I wrote about last year) is said to be able to solve never-before-seen complex math and programming problems, as well as word puzzles like The New York Times' Connections.
The downside is that it takes a while to “think” – and it's unclear how long that will take compared to OpenAI's current best model, GPT-4o.
OpenAI hopes to release a model this fall that incorporates Strawberry in some form into its AI-powered chatbot platform, ChatGPT. The company is also reportedly using Strawberry to generate synthetic data for training models, including its next flagship model, codenamed Orion.
Expectations for Strawberry are very high among AI enthusiasts. Can OpenAI live up to them? It's a tough call, but we hope that they can at least improve ChatGPT's spelling ability.