Third in an accidental series. Part one: Three Rs in Strawberry — tokenisation and representation. Part two: Should I Drive to the Car Wash? — grounding and missing world state. This one is different again.


The Video

Same question as last month’s: “Should I drive to the car wash?” New video, new AI, new wrong answer. This time the assistant replied that walking was the better option — better for health, better for the environment, and the car wash was only fifteen minutes away on foot.

Accurate, probably. Correct, arguably. Useful? No.

The model did not fail because of tokenisation. It did not fail because it lacked access to the current weather. It failed because it read the wrong question. The user was asking “is now a good time to have my car washed?” The model answered “what is the most sustainable way for a human to travel to the location of a car wash?”

These are different questions. The model chose the second one. This is a pragmatic inference failure, and it is the most instructive of the three failure modes in this series — because the model was not, by any obvious measure, working incorrectly. It was working exactly as designed, on the wrong problem.


What the Question Actually Meant

“Should I drive to the car wash?” is not about how to travel. The word “drive” here is not a transportation verb; it is part of the idiomatic compound “drive to the car wash,” which means “take my car to get washed.” The presupposition of the question is that the speaker owns a car, the car needs or might benefit from washing, and the speaker is deciding whether the current moment is a good one to go. Nobody asking this question wants to know whether cycling is a viable alternative.

Linguists distinguish between what a sentence says — its literal semantic content — and what it implicates — the meaning a speaker intends and a listener is expected to infer. Paul Grice formalised this in 1975 with a set of conversational maxims describing how speakers cooperate to communicate:

  • Quantity: say as much as is needed, no more
  • Quality: say only what you believe to be true
  • Relation: be relevant
  • Manner: be clear and orderly

The maxims are not rules; they are defaults. When a speaker says “should I drive to the car wash?”, a cooperative listener applies the maxim of Relation to infer that the question is about car maintenance and current conditions, not about personal transport choices. The “drive” is incidental to the real question, the way “I ran to the store” does not invite commentary on jogging technique.

The model violated Relation — in the pragmatic sense. Its answer was technically relevant to one reading of the sentence, and irrelevant to the only reading a cooperative human would produce.


A Taxonomy of the Three Failures

It is worth being precise now that we have three examples:

Strawberry (tokenisation failure): The information needed to answer was present in the input string but lost in the model’s representation. “Strawberry” → ["straw", "berry"] — the character “r” in “straw” is not directly accessible. The model understood the task correctly; the representation could not support it.
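The point can be made concrete with a small sketch. The split ["straw", "berry"] and the integer IDs below are the hypothetical tokenisation from this example, not any real vocabulary — actual BPE tokenisers may segment the word differently — but the asymmetry is the same: counting letters is trivial at the character level and undefined at the token-ID level.

```python
# Toy illustration: character-level vs token-level views of "strawberry".
# The vocabulary entries and IDs are invented for this example.

def count_r_from_characters(word: str) -> int:
    """Character-level view: every 'r' is directly visible."""
    return word.count("r")

def count_r_from_token_ids(token_ids: list[int]) -> int:
    """Token-level view: the model sees opaque integer IDs, not letters.
    There is nothing to count; the IDs carry no character information."""
    raise LookupError("character structure is not present in token IDs")

vocab = {"straw": 4417, "berry": 9981}   # hypothetical vocabulary
tokens = ["straw", "berry"]
token_ids = [vocab[t] for t in tokens]

print(count_r_from_characters("strawberry"))  # 3
try:
    count_r_from_token_ids(token_ids)
except LookupError as err:
    print(err)
```

A real model can sometimes recover spellings because the training data contains spelled-out words, but nothing in the representation guarantees it — which is exactly why the failure is intermittent rather than universal.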

Car wash, rainy day (grounding failure): The model understood the question. The information needed to answer correctly — current weather — was never in the input. The model answered by averaging over all plausible contexts, producing a sensible-on-average response that was wrong for this specific context.
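“Averaging over all plausible contexts” has a precise meaning: marginalising an unobserved context variable out of the answer distribution. A minimal sketch, with all priors and conditionals invented for illustration:

```python
# Sketch of answering by marginalising over an unobserved context variable
# (here: weather). All probabilities are invented for illustration.

priors = {"clear": 0.7, "rain": 0.3}          # assumed P(weather)
p_yes_given = {"clear": 0.9, "rain": 0.05}    # assumed P("yes, go wash it" | weather)

# Without access to today's weather, the model answers on average:
p_yes_marginal = sum(priors[w] * p_yes_given[w] for w in priors)
print(round(p_yes_marginal, 3))  # 0.645 -> "yes" looks sensible on average

# But if it is actually raining, the context-specific answer is "no":
print(p_yes_given["rain"])  # 0.05
```

The marginal answer is defensible over the population of all days and wrong on the one day that matters, which is the signature of this failure mode.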

Car wash, walk (pragmatic inference failure): The model had all the relevant words. It may have had access to the weather, the location, the car state. It chose the wrong interpretation of what was being asked. The sentence was read at the level of semantic content rather than communicative intent.

Formally: let $\mathcal{I}$ be the set of plausible interpretations of an utterance $u$. The intended interpretation $i^*$ is the one a cooperative, contextually informed listener would assign. A well-functioning pragmatic reasoner computes:

$$i^* = \arg\max_{i \in \mathcal{I}} \; P(i \mid u, \text{context})$$

The model appears to have assigned high probability to the transportation-choice interpretation $i_{\text{walk}}$, apparently on the surface pattern: “should I [verb of locomotion] to [location]?” generates responses about modes of transport. It is a natural pattern-match. It is the wrong one.
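The argmax above can be sketched as a toy Bayesian scorer. Everything here — the candidate interpretations, the likelihoods, the context prior — is invented for illustration; the point is only that the same utterance yields different winners depending on which term dominates.

```python
# Toy version of i* = argmax_i P(i | u, context).
# All interpretations and probabilities are invented for illustration.

MAINTENANCE = "is now a good time to get my car washed?"
TRANSPORT = "what transport mode should I use to reach the car wash?"

# P(u | i): how likely the surface form "should I drive to the car wash?"
# is under each intended question.
likelihood = {MAINTENANCE: 0.8, TRANSPORT: 0.6}

# P(i | context): a listener who knows the speaker is standing next to
# a dirty car puts most prior mass on the maintenance reading.
prior_given_context = {MAINTENANCE: 0.9, TRANSPORT: 0.1}

# Bayes: P(i | u, context) is proportional to P(u | i) * P(i | context).
scores = {i: likelihood[i] * prior_given_context[i] for i in likelihood}
i_star = max(scores, key=scores.get)
print(i_star)  # the maintenance reading wins under this context prior
```

A surface pattern-matcher amounts to scoring with the likelihood term alone, ignoring the context prior — which is one way to describe what the model in the video did.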


Why This Failure Mode Is More Elusive

The tokenisation failure has a clean diagnosis: look at the BPE splits, find where the character information was lost. The grounding failure has a clean diagnosis: identify the context variable $C$ the answer depends on, check whether the model has access to it.

The pragmatic failure is harder to pin down because the model’s answer was not, in isolation, wrong. Walking is healthy. Walking to a car wash that is fifteen minutes away is plausible. If you strip the question of its conversational context — a person standing next to their dirty car, wondering whether to bother — the model’s response is coherent.

The error lives in the gap between what the sentence says and what the speaker meant, and that gap is only visible if you know what the speaker meant. In a training corpus, this kind of error is largely invisible: there is no ground truth annotation that marks a technically-responsive answer as pragmatically wrong.

This is a version of a known problem in computational linguistics: models trained on text predict text, and text does not contain speaker intent. A model can learn that “should I drive to X?” correlates with responses about travel options, because that correlation is present in the data. What it cannot easily learn from text alone is the meta-level principle: this question is about the destination’s purpose, not the journey.


The Gricean Model Did Not Solve This

It is tempting to think that if you could build in Grice’s maxims explicitly — as constraints on response generation — you would prevent this class of failure. Generate only responses that are relevant to the speaker’s probable intent, not just to the sentence’s semantic content.

This does not obviously work, for a simple reason: the maxims require a model of the speaker’s intent, which is exactly what is missing. You need to know what the speaker intends to know which response is relevant; you need to know which response is relevant to determine the speaker’s intent. The inference has to bootstrap from somewhere.

Human pragmatic inference works because we come to a conversation with an enormous amount of background knowledge about what people typically want when they ask particular kinds of questions, combined with contextual cues (tone, setting, previous exchanges) that narrow the interpretation space. A person asking “should I drive to the car wash?” while standing next to a mud-spattered car in a conversation about weekend plans is not asking for a health lecture. The context is sufficient to fix the interpretation.

Language models receive text. The contextual cues that would fix the interpretation for a human — the mud on the car, the tone of the question, the setting — are not available unless someone has typed them out. The model is not in the conversation; it is receiving a transcript of it, from which the speaker’s intent has to be inferred indirectly.


Where This Leaves the Series

Three videos, three failure modes, three diagnoses. None of them are about the model being unintelligent in any useful sense of the word. Each of them is a precise consequence of how these systems work:

  1. Models process tokens, not characters. Character-level structure can be lost at the representation layer.
  2. Models are trained on static corpora and have no real-time connection to the world. Context-dependent questions are answered by marginalising over all plausible contexts, which is wrong when the actual context matters.
  3. Models learn correlations between sentence surface forms and response types. The correlation between “should I [travel verb] to [place]?” and transport-related responses is real in the training data. It is the wrong correlation for this question.

The useful frame, in all three cases, is not “the model failed” but “what, precisely, does the model lack that would be required to succeed?” The answers point in different directions: better tokenisation; real-time world access and calibrated uncertainty; richer models of speaker intent and conversational context. The first is an engineering problem. The second is partially solvable with tools and still hard. The third is unsolved.


References

  • Grice, P. H. (1975). Logic and conversation. In P. Cole & J. Morgan (Eds.), Syntax and Semantics, Vol. 3: Speech Acts (pp. 41–58). Academic Press.

  • Levinson, S. C. (1983). Pragmatics. Cambridge University Press.