The Model Has No Seahorse: Vocabulary Gaps and What They Reveal About LLMs

There is no seahorse emoji in Unicode. Ask a large language model to produce one and watch what happens. The failure is not a hallucination in the ordinary sense — the model knows what it wants to output but cannot output it. That distinction matters.

4 March 2026 · 17 min · Sebastian Spicker
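The opening claim of this post is easy to verify against the Unicode character database bundled with CPython; a quick check, not anything taken from the article itself:

```python
import unicodedata

# Verify the claim: Unicode defines no character named "SEAHORSE".
# unicodedata.lookup() raises KeyError for names not in the database.
try:
    unicodedata.lookup("SEAHORSE")
    found = True
except KeyError:
    found = False

print(found)  # False: there is no seahorse character (or emoji) in Unicode
```

The check reflects whatever Unicode version the running Python ships with; as of current Unicode releases, no seahorse character exists.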

The Oracle Problem: What The Matrix Got Right About AI Alignment

The Oracle is the most interesting character in The Matrix for anyone who thinks about AI alignment. She systematically lies to Neo for his own good. The films present this as wisdom. I think it is a cautionary tale the Wachowskis didn’t know they were writing.

20 March 2025 · 14 min · Sebastian Spicker

Three Rs in Strawberry: What the Viral Counting Test Actually Reveals

In September 2024, OpenAI revealed that its new o1 model had been code-named “Strawberry” internally — the same word that language models have famously been unable to count letters in. The irony was too perfect to pass up. But the counting failure is not a sign that LLMs are naive or broken. It is a precise, informative symptom of how they process text. Here is the actual explanation, with a minimum of hand-waving.

7 October 2024 · 6 min · Sebastian Spicker
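The teaser's point about text processing can be sketched in a few lines. The subword split below is purely illustrative, not the output of any real tokenizer:

```python
word = "strawberry"

# With character-level access, the count is trivial.
print(word.count("r"))  # 3

# A language model, however, consumes subword token IDs, not characters.
# Hypothetical split for illustration only:
tokens = ["str", "aw", "berry"]

# Each token is an opaque unit to the model, so per-letter facts like
# "how many r's" must be inferred from training data rather than read off.
assert "".join(tokens) == word
```

The counting question is easy over characters and hard over opaque tokens, which is the asymmetry the article explains.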