Hallucination
A hallucination occurs when an AI model generates confident-sounding information that is factually incorrect, fabricated, or not supported by its training data.
Hallucination in AI refers to instances where a language model produces outputs that sound plausible and authoritative but are factually wrong, internally inconsistent, or entirely fabricated. This happens because LLMs generate text based on statistical patterns rather than factual retrieval, so they can confidently produce information that follows linguistic patterns but does not correspond to reality.
Hallucinations take several forms. Factual hallucinations involve stating incorrect facts such as wrong dates, fabricated statistics, or invented citations. Contextual hallucinations occur when the model misinterprets the question and provides accurate information about the wrong topic. Logical hallucinations involve flawed reasoning that leads to incorrect conclusions even though each step sounds reasonable. Confabulation occurs when the model fills gaps in its knowledge with plausible-sounding but invented details.
Reducing hallucinations is a major focus of AI research and application design. Techniques include grounding responses in retrieved documents through RAG, asking models to express uncertainty, implementing fact-checking pipelines, and using chain-of-thought reasoning to make the model's logic transparent. For developers building AI applications, it is essential to design systems that verify AI outputs against trusted sources rather than assuming correctness.
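The "verify against trusted sources" step can start as a simple post-hoc grounding check. The sketch below is illustrative only: `is_grounded` and its word-overlap threshold are hypothetical helpers, not part of any library, and a production pipeline would use a proper entailment or citation-matching model instead.

```python
# Minimal sketch of a post-hoc grounding check (illustrative only):
# flag any answer sentence whose content words do not appear in the
# retrieved source documents, so a downstream step can review it.

def is_grounded(claim: str, sources: list[str], min_overlap: float = 0.6) -> bool:
    """Return True if enough of the claim's content words appear in at
    least one source document (a crude stand-in for real fact-checking)."""
    claim_words = {w.lower().strip(".,;:") for w in claim.split() if len(w) > 3}
    if not claim_words:
        return True  # nothing substantive to check
    for doc in sources:
        doc_words = {w.lower().strip(".,;:") for w in doc.split()}
        if len(claim_words & doc_words) / len(claim_words) >= min_overlap:
            return True
    return False


retrieved = ["The Transformer architecture was introduced by Vaswani et al. in 2017."]
answer = "The Transformer architecture was introduced in 2017. It was first deployed in 1995."

for sentence in filter(None, answer.split(". ")):
    if not is_grounded(sentence, retrieved):
        print(f"Unverified claim, needs review: {sentence!r}")
```

In a real application, a check like this would sit between the model call and the user, routing unsupported claims to retrieval-augmented regeneration or human review rather than simply passing them through.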
Real-World Examples
- An AI chatbot citing a research paper that does not exist, with a fabricated author and journal
- A coding assistant suggesting an API method that was never part of the library
- An AI generating a confident biography with incorrect dates and fabricated career details
- A legal AI producing case citations that combine real case names with wrong outcomes