Car Wash, Part Three: The AI Said Walk

A new video went viral last week: same question, “should I drive to the car wash?”, different wrong answer — the AI said to walk instead. This is neither the tokenisation failure from the strawberry post nor the grounding failure from the rainy-day post. It is a pragmatic inference failure: the model understood all the words and (probably) had the right world state, but answered the wrong interpretation of the question. A third and more subtle failure mode, with Grice as the theoretical handle.

12 February 2026 · 7 min · Sebastian Spicker
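The failure the post describes can be sketched in a few lines. This is a hypothetical illustration, not anything from the post or a real model: the question "should I drive to the car wash?" has a literal transport reading (drive vs. walk) and a pragmatic reading (should I go at all), and the activity itself rules the first one out — you cannot walk a car to a car wash.

```python
# Hypothetical sketch of the pragmatic inference failure: two readings of
# the same surface form, and a literal-first listener picking the wrong one.
# All names here are illustrative, not from the post.

QUESTION = "should I drive to the car wash?"

READINGS = {
    "transport_choice": "drive vs. walk to the destination",  # literal reading
    "whether_to_go": "is today a good day to go at all",      # intended reading
}

def interpret(question: str, use_pragmatics: bool) -> str:
    # A car wash presupposes bringing the car, so a pragmatically competent
    # listener discards the transport reading for this question.
    if use_pragmatics and "car wash" in question:
        return "whether_to_go"
    # Literal-first interpretation: the viral "walk instead" failure.
    return "transport_choice"

print(interpret(QUESTION, use_pragmatics=False))  # -> transport_choice
print(interpret(QUESTION, use_pragmatics=True))   # -> whether_to_go
```

The point of the toy: both readings are available to the model; the failure is in which one it assigns the answer to, not in understanding the words.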

Should I Drive to the Car Wash? On Grounding and a Different Kind of LLM Failure

A viral video this month showed an AI assistant confidently answering “should I go to the car wash today?” without knowing it was raining outside. The internet found it funny. The failure mode is real but distinct from the strawberry counting problem — this is not a representation issue but a grounding issue. The model understood the question perfectly. What it lacked was access to the state of the world the question was about.

20 January 2026 · 9 min · Sebastian Spicker
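The grounding gap can be made concrete with a minimal sketch. Everything here is illustrative (hypothetical function names, a toy `world_state` dict — none of it is from the post): the same answering step, with and without access to the state the question is about.

```python
# Hypothetical sketch of the grounding failure: an ungrounded answerer
# replies from priors alone, while a grounded one consults world state.

def answer_ungrounded(question: str) -> str:
    # No access to the world: the model can only answer from what was
    # plausible at training time, which is how it ends up confidently wrong.
    return "Sure, today sounds like a great day for the car wash!"

def answer_grounded(question: str, world_state: dict) -> str:
    # Grounded variant: the answer is conditioned on the actual state
    # the question refers to (here, a toy weather flag).
    if world_state.get("raining"):
        return "No, it is raining - skip the car wash today."
    return "Yes, the weather is clear; go ahead."

print(answer_ungrounded("should I go to the car wash today?"))
print(answer_grounded("should I go to the car wash today?", {"raining": True}))
```

Note that the grounded variant is not smarter in any linguistic sense — the only difference is an extra input carrying the state of the world.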