A hallucination occurs when an AI language model generates confident-sounding information that is factually incorrect, fabricated, or nonsensical. Hallucinations are a fundamental risk in any AI system that generates text.
Hallucinations occur because LLMs are trained to generate plausible-sounding text, not necessarily accurate text. The model does not "know" facts in the way humans do — it predicts likely token sequences based on patterns in training data.
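As a toy illustration (the continuations and probabilities below are invented for this example, not drawn from any real model), a widely repeated but false completion can outrank a true one simply because it is more statistically "expected":

```python
# Toy next-token distribution for the prompt
# "The Great Wall of China is visible from" (probabilities invented for illustration).
candidates = {
    "space": 0.62,       # plausible, widely repeated in training data, and false
    "orbit": 0.21,       # also plausible, also false
    "the ground": 0.17,  # true, but statistically less "expected" here
}

# Greedy decoding picks the highest-probability continuation: the myth wins
# because the model optimizes for plausibility, not truth.
prediction = max(candidates, key=candidates.get)
print(prediction)  # -> "space"
```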
Common hallucination mitigation strategies: (1) Retrieval-augmented generation (RAG): ground responses in retrieved source documents. (2) Constitutional AI and RLHF: train the model to say "I don't know" rather than fabricate. (3) Structured output with citations: require the model to attribute each claim to a source. (4) Temperature reduction: lower temperature means less sampling randomness, which tends to reduce (though not eliminate) fabricated details. The sketch below combines (1), (3), and (4).
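A minimal sketch written against the Anthropic Python SDK; `search_docs`, its stub passage, and the model ID are placeholder assumptions, not a production retriever:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def search_docs(question: str) -> list[dict]:
    """Hypothetical retriever standing in for a real vector-store lookup."""
    return [{"id": "doc-1", "text": "Example passage relevant to the question."}]

def grounded_answer(question: str) -> str:
    # (1) RAG: fetch passages and put them in front of the model.
    passages = search_docs(question)
    context = "\n\n".join(f"[{p['id']}] {p['text']}" for p in passages)

    response = client.messages.create(
        model="claude-sonnet-4-5",  # substitute whichever model ID you use
        max_tokens=512,
        temperature=0.0,  # (4) minimal sampling randomness
        # (3) structured output with citations, plus explicit permission to abstain
        system=(
            "Answer only from the provided passages, citing each claim with its "
            "[id]. If the passages do not contain the answer, reply 'I don't "
            "know' instead of guessing."
        ),
        messages=[
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"}
        ],
    )
    return response.content[0].text

print(grounded_answer("What does the example passage say?"))
```

Prompting alone cannot guarantee faithfulness, so it is worth verifying the cited [id]s against the retrieved passages before trusting an answer.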
Claude tends to hallucinate less than GPT models on factual recall tasks, and is more likely to express uncertainty when it doesn't know something.