The Setup
In September 2024, OpenAI publicly confirmed that their new reasoning model had been code-named “Strawberry” during development. This landed with a particular thud because “how many r’s are in strawberry?” had, by that point, become one of the canonical demonstrations of language model failure. The model named after strawberry could not count the letters in strawberry. The internet had opinions.
Before the opinions: the answer is three. s-t-r-a-w-b-e-r-r-y. One in the str- cluster, two in the -rry ending. Most people get this right on the first try; most large language models get it wrong, returning “two” with apparent confidence.
The question worth asking is not “why is the model stupid.” It is not stupid, and “stupid” is not a useful category here. The question is: what does this specific error reveal about the structure of the system?
The answer involves tokenisation, and it is actually interesting.
How You Count Letters (and How the Model Doesn’t)
When you count the r’s in “strawberry,” you do something like this: scan the string left to right, maintain a running count, increment it each time you see the target character. This is a sequential operation over a character array. It requires no semantic knowledge about the word — it does not matter whether “strawberry” is a fruit, a colour, or a nonsense string. The characters are the input; the count is the output.
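The sequential procedure described above is trivial to write down. A minimal sketch in Python (the function name is mine, for illustration):

```python
def count_char(word: str, target: str) -> int:
    """Scan the string left to right, incrementing a counter
    on each match -- the procedure a person follows."""
    count = 0
    for ch in word:
        if ch == target:
            count += 1
    return count

print(count_char("strawberry", "r"))  # -> 3
```

Note that nothing here depends on what “strawberry” means; the function works identically on any string.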
A language model does not receive a character array. It receives a sequence of tokens — chunks produced by a compression algorithm called Byte Pair Encoding (BPE) that the model was trained with. In the tokeniser used by GPT-class models, “strawberry” is most likely split as:
$$\underbrace{\texttt{str}}_{\text{token 1}} \;\underbrace{\texttt{aw}}_{\text{token 2}} \;\underbrace{\texttt{berry}}_{\text{token 3}}$$

Three tokens. The model’s input is these three integer IDs, each looked up in an embedding table to produce a vector. There is no character array. There is no letter “r” sitting at a known position. There are three dense vectors representing “str,” “aw,” and “berry.”
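A toy illustration of what the model actually receives. The token IDs below are invented for the sketch, not the real cl100k_base IDs:

```python
# Hypothetical token IDs -- real vocabularies assign different integers.
token_ids = {"str": 101, "aw": 202, "berry": 303}

# The model's input for "strawberry" is just this list of integers.
model_input = [token_ids[t] for t in ["str", "aw", "berry"]]
print(model_input)  # [101, 202, 303]
```

There is no position in that list where the character “r” lives; the letters exist only implicitly, inside whatever the embeddings for those IDs have absorbed during training.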
What BPE Does (and Doesn’t) Preserve
BPE is a greedy compression algorithm. Starting from individual bytes, it iteratively merges the most frequent pair of adjacent symbols into a single new token:
$$\text{merge}(a, b) \;:\; \underbrace{a \;\; b}_{\text{separate}} \;\longrightarrow\; \underbrace{ab}_{\text{single token}}$$

Applied to a large text corpus until a fixed vocabulary size is reached, this produces a vocabulary of common subwords. Frequent words and common word-parts become single tokens; rare sequences stay as multi-token fragments.
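The merge loop can be sketched in a few dozen lines. This is a minimal training sketch, not a production tokeniser (real BPE implementations operate on bytes, track merge ranks, and handle word frequencies):

```python
from collections import Counter

def most_frequent_pair(symbols):
    """Count adjacent symbol pairs across all words; return the most frequent."""
    pairs = Counter()
    for word in symbols:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(word, pair):
    """Replace each adjacent occurrence of `pair` with one merged symbol."""
    out, i = [], 0
    while i < len(word):
        if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
            out.append(word[i] + word[i + 1])
            i += 2
        else:
            out.append(word[i])
            i += 1
    return out

def bpe_train(corpus, num_merges):
    """Greedily merge the most frequent adjacent pair, num_merges times."""
    symbols = [list(w) for w in corpus]  # start from individual characters
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(symbols)
        if pair is None:
            break
        merges.append(pair)
        symbols = [merge_pair(w, pair) for w in symbols]
    return merges, symbols

merges, symbols = bpe_train(["berry", "merry", "ferry"], num_merges=3)
print(merges)   # the pairs merged, in order
print(symbols)  # each word now split into fewer, larger chunks
```

On this toy corpus the shared -erry ending gets merged into a single chunk after a few rounds, while the distinguishing first letters stay separate: exactly the frequency-driven chunking that makes “berry” a single token in real vocabularies.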
What BPE optimises for is compression efficiency, not character-level transparency. The token “straw” encodes the sequence s-t-r-a-w as a unit, but that character sequence is not explicitly represented anywhere inside the model once the embedding lookup has occurred. The model receives a vector for “straw,” not a list of its constituent letters.
The character composition of a token is only accessible to the model insofar as it was implicitly learned during training — through seeing “straw” appear in contexts where its internal structure was relevant. For most tokens, most of the time, that character structure was not relevant. The model learned what “straw” means, not how to spell it character by character.
Why the Error Is Informative
When the model gets this wrong, it almost always returns “two,” not “one” or “four” or “none.” This is not random noise. It is a systematic error, and systematic errors are diagnostic.
“berry” contains two r’s: b-e-r-r-y. If you ask most models “how many r’s in berry?” they get it right. The model has seen that question, or ones close enough to it, often enough that the right count is encoded somewhere in the weight structure.
“str” contains one r: s-t-r. But as a token it is a short, common prefix that appears in hundreds of words — string, strong, stream — contexts in which its internal letter structure is rarely attended to. “aw” contains no r’s. When the model answers “two,” it is almost certainly counting the r’s in “berry” correctly and failing to notice the one in “str.” The token boundaries are where the error lives.
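The arithmetic of the failure is easy to check directly, taking the str|aw|berry split described above:

```python
# Token split of "strawberry" as described in the post (cl100k_base).
tokens = ["str", "aw", "berry"]

per_token = {t: t.count("r") for t in tokens}
print(per_token)                # {'str': 1, 'aw': 0, 'berry': 2}
print(sum(per_token.values()))  # 3 -- the correct answer
```

The common wrong answer, “two,” is exactly the count inside “berry” alone: the per-token tally the model appears to have, minus the r hidden in the “str” chunk.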
This is not stupidity. It is a precise failure mode that follows directly from the tokenisation structure. You can predict where the error will occur by looking at the token split.
Chain of Thought Partially Fixes This (and Why)
If you prompt the model to “spell out the letters first, then count,” the error rate drops substantially. The reason is not mysterious: forcing the model to generate a character-by-character expansion — s, t, r, a, w, b, e, r, r, y — puts the individual characters into the context window as separate tokens. Now the model is not working from “straw” and “berry”; it is working from ten single-character tokens, and counting sequential characters in a flat list is a task the model handles much better.
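The scaffold the prompt provides amounts to two explicit steps, sketched here in Python:

```python
word = "strawberry"

# Step 1: the character-by-character expansion the prompt forces
# into the context window.
spelled = list(word)  # ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']

# Step 2: counting over the flat character sequence.
count = sum(1 for ch in spelled if ch == "r")
print(count)  # -> 3
```

Step 1 is the part the model cannot do implicitly from its token representation; once it is done explicitly, step 2 is the easy kind of task.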
This is, in effect, making the model do manually what a human does automatically: convert the compressed token representation back to an enumerable character sequence before counting. The cognitive work is the same; the scaffolding just has to be explicit.
The Right Frame
The “how many r’s” test is sometimes cited as evidence that language models don’t “really” understand text, or that they are sophisticated autocomplete engines with no genuine knowledge. These framing choices produce more heat than light.
The more precise statement is this: language models were trained to predict likely next tokens in large text corpora. That training objective produces a system that is very good at certain tasks (semantic inference, translation, summarisation, code generation) and systematically bad at others (character counting, exact arithmetic, precise spatial reasoning). The system is not doing what you are doing when you read a sentence. It is doing something different, which happens to produce similar outputs for a very wide range of inputs — and different outputs for a class of inputs where the character-level structure matters.
“Strawberry” sits squarely in that class. The model is not failing to read the word. It is succeeding at predicting what a plausible-sounding answer looks like, based on a compressed representation that does not preserve the information needed to get the count right. Those are not the same thing, and the distinction is worth keeping clear.
The tokenisation argument here is a simplified version. Real BPE vocabularies, positional encodings, and the specific way character information is or isn’t preserved in embedding tables are more complicated than this post suggests. But the core point — that the model’s input representation is not a character array and never was — holds.
A follow-up post covers a structurally different failure mode: Should I Drive to the Car Wash? — where the model understood the question perfectly but lacked access to the world state the question was about.
References
Gage, P. (1994). A new algorithm for data compression. The C Users Journal, 12(2), 23–38.
Sennrich, R., Haddow, B., & Birch, A. (2016). Neural machine translation of rare words with subword units. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), 1715–1725. https://arxiv.org/abs/1508.07909
Changelog
- 2025-12-01: Corrected the tokenisation of “strawberry” from two tokens (straw|berry) to three tokens (str|aw|berry), matching the actual cl100k_base tokeniser used by GPT-4. The directional argument (token boundaries obscure character-level information) is unchanged; the specific analysis was updated accordingly.