There Is an App for That — Until There Isn't

German health insurance will reimburse a mental health app within days but cannot provide a therapist within six months. Last week, psychotherapy fees were cut by 4.5%. Baumol’s cost disease — originally about why string quartets get relatively more expensive — explains why the app gold rush and the collapse of mental health provision are the same phenomenon.
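The mechanism fits in a few lines of code. A minimal sketch, with illustrative numbers of my own rather than the post's: let manufacturing productivity grow 2% a year while an hour of therapy stays an hour, and let wages track the productive sector.

```python
# Toy Baumol dynamics: wages follow the productive sector, so the unit
# cost of the stagnant service rises although the service itself is unchanged.
GROWTH = 1.02  # assumed annual productivity growth in manufacturing

for year in (0, 10, 20, 30):
    wage = GROWTH ** year                  # wages track manufacturing productivity
    widget_cost = wage / (GROWTH ** year)  # constant: productivity offsets pay
    therapy_cost = wage / 1.0              # no productivity growth offsets pay
    print(f"year {year:2d}: widget {widget_cost:.2f}, therapy hour {therapy_cost:.2f}")
```

After thirty years the therapy hour costs about 1.8 times as much as the widget, with no one at fault: that relative drift is the disease.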

7 April 2026 · 15 min · Sebastian Spicker

The Model Has No Seahorse: Vocabulary Gaps and What They Reveal About LLMs

There is no seahorse emoji in Unicode. Ask a large language model to produce one and watch what happens. The failure is not a hallucination in the ordinary sense — the model knows what it wants to output but cannot output it. That distinction matters.
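The absence is directly checkable. A quick sketch using Python's unicodedata module (which reflects whichever Unicode version ships with your interpreter):

```python
import sys
import unicodedata

# Scan every assigned code point for a character name containing the query.
def find(query: str) -> list[str]:
    return [
        f"U+{cp:04X} {unicodedata.name(chr(cp))}"
        for cp in range(sys.maxunicode + 1)
        if query in unicodedata.name(chr(cp), "")
    ]

print(find("SEAHORSE"))   # []  (no such character)
print(find("HORSE")[:3])  # horses exist; seahorses do not
```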

4 March 2026 · 17 min · Sebastian Spicker

Car Wash, Part Three: The AI Said Walk

A new video went viral last week: same question, “should I drive to the car wash?”, different wrong answer — the AI said to walk instead. This is neither the tokenisation failure from the strawberry post nor the grounding failure from the rainy-day post. It is a pragmatic inference failure: the model understood all the words and (probably) had the right world state, but answered the wrong interpretation of the question. A third and more subtle failure mode, with Grice as the theoretical handle.

12 February 2026 · 7 min · Sebastian Spicker

Should I Drive to the Car Wash? On Grounding and a Different Kind of LLM Failure

A viral video this month showed an AI assistant confidently answering “should I go to the car wash today?” without knowing it was raining outside. The internet found it funny. The failure mode is real but distinct from the strawberry counting problem — this is not a representation issue, it is a grounding issue. The model understood the question perfectly. What it lacked was access to the state of the world the question was about.

20 January 2026 · 9 min · Sebastian Spicker

The AI Friend That Makes You Lonelier

AI companions promise to address the loneliness epidemic. Daniel Wegner’s ironic process theory predicts they will fail under exactly the conditions where people need them most — and recent data from MIT and OpenAI suggest the prediction is correct.

12 August 2025 · 11 min · Sebastian Spicker

There Is No Blue Pill: The Epistemology of the Red Pill/Blue Pill Choice

The most famous choice in science fiction is epistemically impossible to make rationally. Morpheus offers Neo ‘the truth’ but gives him no way to evaluate the offer. Cypher’s decision to go back is more philosophically coherent than the films acknowledge.

15 May 2025 · 13 min · Sebastian Spicker

The Oracle Problem: What The Matrix Got Right About AI Alignment

The Oracle is the most interesting character in The Matrix for anyone who thinks about AI alignment. She systematically lies to Neo for his own good. The films present this as wisdom. I think it is a cautionary tale the Wachowskis didn’t know they were writing.

20 March 2025 · 14 min · Sebastian Spicker

Artificial Intelligence in Music Pedagogy: Curriculum Implications from a Thementag

On 2 December 2024 I gave three workshops at HfMT Köln’s Thementag on AI and music education. The handouts covered data protection, AI tools for students, and AI in teaching. This post is the argument behind them — focused on the curriculum question that none of the tools answer on their own: what should change, and what should not?

7 December 2024 · 14 min · Sebastian Spicker

The Hamiltonian of Intelligence: From Spin Glasses to Neural Networks

On 8 October 2024, Hopfield and Hinton were awarded the Nobel Prize in Physics. The physics community reacted with irritation: is machine learning really physics? The irritation is misplaced. The energy function of a Hopfield network is literally the Ising Hamiltonian. The lineage runs from Giorgio Parisi’s disordered iron alloys in 1979 to the model that predicted the structures of 200 million proteins.
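The identification, written out with standard definitions (mine, not quoted from the post): for binary states s_i in {-1, +1} and symmetric weights, the two energy functions differ only in notation.

```latex
% Hopfield network energy (states s_i in {-1,+1}, symmetric weights w_ij = w_ji)
E(\mathbf{s}) = -\tfrac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j - \sum_i \theta_i s_i

% Ising Hamiltonian (spins s_i, couplings J_ij, local fields h_i)
H(\mathbf{s}) = -\sum_{i < j} J_{ij}\, s_i s_j - \sum_i h_i s_i

% Term-by-term renaming: w_ij <-> J_ij and thresholds theta_i <-> fields h_i;
% the 1/2 only compensates for counting each pair twice in the first sum.
```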

21 October 2024 · 19 min · Sebastian Spicker

You Cannot Have All Three: The Fairness Impossibility Theorem

Three natural fairness criteria for an AI classifier — calibration, equal false positive rates, equal false negative rates — cannot all hold simultaneously when base rates differ across groups. This is not an engineering failure. It is a theorem. Choosing which criterion to satisfy is a political decision, not a technical one.
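The arithmetic behind the theorem is short. In Chouldechova's formulation, which uses predictive parity (equal positive predictive value) as the calibration criterion, one identity links the base rate, PPV, FNR, and FPR, so equalising any two criteria across groups with different base rates forces the third apart. A toy sketch with made-up numbers:

```python
# Chouldechova's identity for a binary classifier:
#   FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# Hold PPV and FNR equal across two groups; if the base rates p differ,
# the false positive rates are forced to differ.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by the base rate, PPV, and FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

PPV, FNR = 0.8, 0.3      # equal across groups by construction
for p in (0.1, 0.3):     # hypothetical per-group base rates
    print(f"base rate {p}: implied FPR = {implied_fpr(p, PPV, FNR):.3f}")
# base rate 0.1: implied FPR = 0.019
# base rate 0.3: implied FPR = 0.075
```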

8 March 2024 · 13 min · Sebastian Spicker