Vocabulary Debt
You've noticed that AI "hallucination" debates go in circles. The word itself is the reason.
When LLMs started producing false outputs, someone coined "hallucination" as a metaphor. It stuck. But now, years later, the metaphor has become a tax on the entire conversation.
Drew Breunig noticed it first: "People are still asking 'how do I get hallucination rates down?' and they're thinking about it in terms of the way it was introduced, not the way we deal with it today."
A single word choice, made early, now shapes how an entire industry frames the problem.
This is how language compounds.
When it was introduced, "hallucination" was clarifying. It distinguished false outputs from ordinary bugs. It suggested the model was "seeing things." A useful intuition for a novel failure mode.
But metaphors have implications. "Hallucination" implies something wrong with the model's perception. It frames the problem as pathology to be cured. It suggests we need to make the model see more clearly.
What if false outputs aren't hallucinations but confabulations? The model is filling gaps with plausible fiction, because that's what language models do.
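To make the confabulation frame concrete, here's a minimal sketch. It assumes the Hugging Face `transformers` library and the public `gpt2` checkpoint; the prompt and sampling settings are purely illustrative. The point is structural: the generation loop ranks and samples tokens by probability, and nothing in it checks the output against reality.

```python
# Minimal sketch, assuming the Hugging Face `transformers` library and the
# public "gpt2" checkpoint. Prompt and sampling settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A prompt the model cannot answer factually, because the country is invented.
prompt = "The capital of the fictional country of Zaldovia is"

# Sampling picks tokens by probability. No step here consults whether the
# continuation is true, so the model fills the gap with a plausible name.
result = generator(prompt, max_new_tokens=10, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

The model will confidently name a "capital." Nothing misfired in its perception; it did exactly what it was built to do.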
Different frame, different solutions.
This is The Label Problem in action. The word "hallucination" isn't describing the problem anymore. It's defining how we're allowed to think about it.
The danger isn't picking the wrong word. It's that words compound. A metaphor that works at introduction becomes a cognitive cage at scale.
The tax gets paid every time someone asks the wrong question because the vocabulary only permits wrong questions.
Go deeper: The Marketing Flip