How many times does the letter “R” appear in the word “strawberry”? According to leading AI products like GPT-4o and Claude, the answer is 2 times.
Massive language models can write an essay or solve an equation in seconds, synthesizing terabytes of data faster than a human can open a book. But these seemingly omnipotent AIs sometimes screw up spectacularly, turning their mistakes into viral memes and leaving us relieved that we may still have some time before we succumb to our new AI overlords.
The inability of large language models to understand the concepts of letters and syllables illustrates a larger truth we often forget: these things don't have brains. They don't think like us. They're not human, or even particularly human-like.
Most LLMs are built on Transformers, a type of deep learning architecture. Transformer models split text into tokens, which can be complete words, syllables, or characters, depending on the model.
“LLMs are based on this transformer architecture, but it's not actually reading text. You type in a prompt and it converts it into an encoding,” Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. “When an LLM looks at the word 'the,' it has one encoding that represents the meaning of 'the,' but it doesn't know about 'T,' 'H,' and 'E.'”
This is because Transformers cannot efficiently ingest or output actual text. Instead, text is converted into a numerical representation that is then contextualized to help the AI derive a logical response. In other words, an AI might know that the tokens “straw” and “berry” make up “strawberry,” but it may not understand that “strawberry” is made up of the letters “s,” “t,” “r,” “a,” “w,” “b,” “e,” “r,” “r,” and “y” in a particular order. Thus, an AI cannot tell how many letters there are in the word “strawberry,” much less how many “r”s there are.
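The gap between what a tokenizer emits and what a character-level view would show can be sketched in a few lines of Python. The vocabulary and token IDs below are entirely made up for illustration; real tokenizers (e.g., BPE) learn tens of thousands of subword pieces from data, but the principle is the same: the model receives opaque integer IDs, not letters.

```python
# Toy vocabulary with hypothetical token IDs (real tokenizers are
# learned from data; these pieces and numbers are invented).
vocab = {"straw": 1234, "berry": 5678}

def tokenize(word, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    while word:
        for piece in sorted(vocab, key=len, reverse=True):
            if word.startswith(piece):
                tokens.append(vocab[piece])
                word = word[len(piece):]
                break
        else:
            raise ValueError(f"no token matches the start of {word!r}")
    return tokens

# What the model "sees": integer IDs with no letters inside them.
print(tokenize("strawberry", vocab))  # [1234, 5678]

# What a direct character-level view gives you:
print("strawberry".count("r"))        # 3
```

Nothing in `[1234, 5678]` reveals that the underlying word contains three "r"s, which is exactly why letter-counting questions trip these models up.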
This is not an easy problem to fix, as it is built into the very architecture that makes LLMs work.
TechCrunch's Kyle Wiggers took a closer look at the issue last month, speaking with Sheridan Feucht, a PhD student at Northeastern University who is researching LLM interpretability.
“It's kind of hard to avoid the question of what exactly a 'word' should be for a language model, and even if human experts could agree on a perfect token vocabulary, the model would probably find it useful to 'chunk' things further,” Feucht told TechCrunch. “My guess is that because of all this ambiguity, there will never be a perfect tokenizer.”
This problem becomes even more complicated as LLMs learn more languages. For example, some tokenization methods assume that a space in a sentence always comes before a new word, but many languages, including Chinese, Japanese, Thai, Lao, Korean, and Khmer, don't use spaces to separate words. In a 2023 study, Yenny Jun, an AI researcher at Google DeepMind, found that some languages require up to 10 times as many tokens as English to convey the same meaning.
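The space assumption is easy to see failing in a couple of lines. The sketch below uses a naive whitespace splitter as a stand-in for space-based tokenization schemes (real subword tokenizers are more sophisticated, and the Japanese sentence is an illustrative example, not from the study):

```python
# Naive space-based splitting, as some tokenization schemes assume.
def split_on_spaces(text):
    return text.split(" ")

english = "the cat sat on the mat"
japanese = "猫がマットの上に座った"  # roughly the same sentence; no spaces between words

print(split_on_spaces(english))   # six separate word pieces
print(split_on_spaces(japanese))  # the entire sentence comes back as a single piece
```

For the English sentence, splitting on spaces yields one piece per word; for the Japanese one, the whole sentence survives as a single undivided chunk, so any word-boundary assumption baked into the tokenizer simply doesn't apply.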
“It would probably be best to let the model look at the characters directly without forcing tokenization, but right now that's computationally infeasible for the Transformer,” Feucht says.
Image generators such as Midjourney and DALL-E do not use the transformer architecture behind text generators such as ChatGPT. Instead, image generators typically use diffusion models to reconstruct images from noise. Diffusion models are trained on large image databases and learn to reproduce images similar to those in their training data.
Image credit: Adobe Firefly
“Image generators tend to perform much better on man-made objects like cars and human faces, but not as well on small objects like fingers and handwriting,” Asmelash Teka Hadgu, co-founder of Lesan and research scientist at the DAIR Institute, told TechCrunch.
This is likely because such fine details are less salient in training data than broad regularities, such as trees usually having green leaves. However, the problems with diffusion models may be easier to solve than those that plague Transformers: some image generators have improved their representation of hands, for example, by training on more images of real human hands.
“Just in the last year, these models have been really bad at recognizing fingers. It's the exact same problem as with text,” Guzdial explains. “Models are getting pretty good at recognizing fingers, so when they see a hand with six or seven fingers, they can say, 'Oh, this looks like a finger.' Similarly, with generated text, they can say, this looks like an 'H,' this looks like a 'P.' But they're really bad at structuring all of this together.”
Image credit: Microsoft Designer (DALL-E 3)
So if you ask an AI image generator to create a menu for a Mexican restaurant, you might see generic menu items like “tacos,” but you're more likely to find menu items like “tamiros,” “enchidars,” and “bruhiltos.”
As these memes about spelling “strawberry” spread across the internet, OpenAI is working on a new AI product, codenamed “Strawberry,” that should offer even better reasoning capabilities. LLM progress has been limited by the fact that there isn't enough training data in the world to make products like ChatGPT more accurate. But Strawberry can reportedly generate accurate synthetic data to further improve OpenAI's LLMs. According to The Information, Strawberry can solve The New York Times Connections word puzzles, which require creative thinking and pattern recognition, and it can also solve never-before-seen mathematical equations.
Meanwhile, Google DeepMind recently launched AlphaProof and AlphaGeometry 2, AI systems designed for formal mathematical reasoning. According to Google, these two systems solved four out of six problems at the International Mathematical Olympiad, good enough to win a silver medal at the prestigious competition.
It's a bit ironic that coverage of OpenAI's Strawberry has coincided with a meme circulating about AI being unable to spell “strawberry,” but OpenAI CEO Sam Altman jumped at the opportunity to show us that he has some pretty impressive berry crops in his own garden.