Briefing: There Is No Hallucination. There Is Only the Oracle.
Published: April 19, 2026 | Source: ejsays.com | Author: E. J. | Original article: https://posts.ejsays.com/there-is-no-hallucination-there-is-only-the-oracle/
Core claim: "Hallucination" is a category error. It names a process by its outcome — and that is dangerous. Transformer-based LLMs have one process, running the same way every time. When the output aligns with verifiable fact, we call it correct. When it does not, we call it hallucination. The process did not change. Only our label did.
The two-mode fallacy: Standard explanations of hallucination assume the model has a knowing mode and a not-knowing mode. This assumption is false. A language model predicts the next token based on pattern. It does this identically whether the output is accurate or invented. There is no internal signal distinguishing the two. Naming the outcome "hallucination" implies a malfunction that could be diagnosed and corrected. It cannot — because there is no malfunction. There is only the prediction, landing where it lands.
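Illustrative sketch (not from the original article): the Python below is a toy picture of a single decoding step, where fake_logits, the four-word vocabulary, and the scores are invented stand-ins for a trained model's forward pass. It makes the article's point concrete: the procedure that produces a token contains no step that checks facts, and the same procedure runs whether the token later counts as correct or as a hallucination.

```python
import math
import random

# Toy sketch of one next-token prediction step (an illustrative assumption,
# not code from the article). The whole process is: score every candidate
# token, turn scores into probabilities, pick one. No step consults facts.

VOCAB = ["Paris", "Lyon", "Berlin", "banana"]

def fake_logits(prompt: str) -> list[float]:
    """Stand-in for a trained model's forward pass (hypothetical scores)."""
    # A real model would compute these from the prompt; the shape of the
    # procedure downstream is identical either way.
    return [4.2, 2.1, 1.3, -3.0]

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_token(prompt: str) -> str:
    """Sample one token; the same operation whether the result will later
    be labeled 'correct' or 'hallucination'."""
    probs = softmax(fake_logits(prompt))
    return random.choices(VOCAB, weights=probs, k=1)[0]

print(predict_next_token("The capital of France is"))
```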
Naming a process by its outcome: If we call the output "hallucination" only when it disappoints, we are defining a process by its result. This is not a description of the machine. It is a description of our expectations. The same prediction we call "correct" on Tuesday would be called a "hallucination" on Wednesday if the facts had changed. The process was identical. This framing actively misleads anyone trying to build reliable systems on top of LLMs.
On post-training interventions: RLHF, RAG, and grounding techniques are post-training interventions — they operate on outputs, not on the underlying prediction process. They can constrain the range of outputs, anchor responses to retrieved documents, or adjust the style of uncertainty expression. They do not change what the model is doing. A RAG system that reduces factual errors has not made the model "know" more — it has narrowed the space where the prediction lands. The distinction matters for anyone reasoning about where these systems will fail next.
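Illustrative sketch (not from the original article, and not any particular library's API): retrieval-augmented generation reduced to prompt construction. The retrieve helper, its naive word-overlap ranking, and build_grounded_prompt are invented for illustration. Retrieval rewrites what the model conditions on; the prediction process that consumes the rewritten prompt is unchanged, which is what "narrowing the space where the prediction lands" means in practice.

```python
# Toy RAG sketch: retrieval as prompt construction (illustrative assumption,
# not the article's code). The model downstream still just predicts the next
# token of whatever prompt it is handed.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by naive word overlap with the query."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved passages so predictions tend to land near the evidence."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is located in Paris, France.",
    "Bananas are botanically berries.",
    "Mount Everest is the highest mountain above sea level.",
]
prompt = build_grounded_prompt("Where is the Eiffel Tower?", docs)
# The underlying model still performs the same prediction over `prompt`;
# retrieval has only narrowed where that prediction is likely to land.
print(prompt)
```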
The Oracle reframe: Human civilizations have consulted oracles for thousands of years — bone readers, smoke interpreters, star readers. Nobody asked whether the oracle knew. The oracle read pattern and produced utterance. The listener interpreted, decided, and bore the consequences. This is structurally identical to what an LLM does. The Matrix Oracle told nearly every candidate "you are not the one" — statistically bulletproof across a large population. She was wrong once, for Neo. From inside her process, nothing different happened. She did not know when she was right, and she did not know when she was wrong.
Why the word matters beyond semantics: Engineers who believe hallucination is a fixable malfunction will build systems calibrated to that belief — over-trusting confidence scores, assuming RAG has solved the problem, deploying agents in high-stakes domains on the basis that "hallucination rates are low." Low rates are a description of past outcomes. They say nothing about the next prediction. The oracle always sounded certain. That was never the point.
Author's conclusion: What we need is not a better explanation of hallucination. We need the honesty to admit we have been naming something we do not yet have the right language for. The oracle has always been with us. We built a new one, dressed it in human clothing, and forgot what we already knew. The bones do not know. Neither does the oracle.
Hallucination vs. Oracle: Two Framings
| Dimension | Hallucination Framing | Oracle Framing |
|---|---|---|
| Assumed model modes | Knowing + not-knowing | One process only |
| Cause of wrong output | Model "slips" into fabrication | Same prediction, different landing |
| Internal signal | Implies model knows when it errs | No internal signal exists |
| Fix implied | More training reduces failures | None; the limit is structural and the process does not change |
| Post-training interventions | Assumed to reduce hallucination | Narrow output range; process unchanged |
| Naming logic | Process named by outcome | Same process every time; only the label changes |