GLOSSARY

Hallucination

DEFINITION

When an AI language model generates confident-sounding information that is factually incorrect, fabricated, or nonsensical. Hallucinations are a fundamental risk in any AI system that generates text.

Hallucinations occur because LLMs are trained to generate plausible-sounding text, not necessarily accurate text. The model does not "know" facts in the way humans do — it predicts likely token sequences based on patterns in training data.
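The prediction-over-truth point above can be sketched with a toy model (purely illustrative, nothing like a real LLM): a bigram model that only knows token co-occurrence counts will continue a prompt with whatever token is statistically likeliest, with no notion of whether the result is true.

```python
# Toy sketch: a "model" built from raw co-occurrence counts.
# It predicts plausible continuations, not verified facts.
from collections import Counter, defaultdict

corpus = "the capital of france is paris . the capital of spain is madrid ."
tokens = corpus.split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Greedy decode: return the most frequent continuation seen in training."""
    return bigrams[prev].most_common(1)[0][0]

# The model outputs whichever token is most common after the prompt,
# regardless of which question was actually asked:
print(next_token("capital"))
print(next_token("is"))
```

Real LLMs are vastly more sophisticated, but the underlying objective is the same: maximize the likelihood of the next token given context, which is why fluent, confident, and wrong outputs can all coexist.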

Hallucination mitigation strategies: (1) Retrieval-augmented generation (RAG) — ground responses in retrieved documents. (2) Constitutional AI and RLHF — train models to say "I don't know" rather than fabricate. (3) Structured output with citations, so claims can be checked against sources. (4) Temperature reduction — lower temperature makes sampling more deterministic, which can reduce, though not eliminate, confident fabrication.
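Strategy (1) can be sketched in a few lines. This is a minimal, illustrative mock (the knowledge base, `retrieve`, and `grounded_prompt` are all hypothetical names, and the "retrieval" is simple word overlap rather than a real embedding search): fetch the best-matching snippet, then build a prompt that instructs the model to answer only from that snippet or admit ignorance.

```python
# Toy RAG-style grounding sketch (illustrative names, not a real library).
KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres tall.",
]

def retrieve(question: str) -> str:
    """Pick the snippet sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(KNOWLEDGE_BASE,
               key=lambda doc: len(q_words & set(doc.lower().split())))

def grounded_prompt(question: str) -> str:
    """Build a prompt that grounds the model in retrieved context."""
    context = retrieve(question)
    return (f"Context: {context}\n"
            f"Question: {question}\n"
            "Answer using only the context above. "
            "If the context is insufficient, say \"I don't know.\"")

print(grounded_prompt("How tall is the Eiffel Tower?"))
```

The key design point is the final instruction: by giving the model an explicit "I don't know" escape hatch tied to the retrieved context, you convert an open-ended recall task into a reading-comprehension task, which is much less prone to fabrication.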

Claude tends to hallucinate less than GPT models on factual recall tasks, and is more likely to express uncertainty when it doesn't know something.

Tools That Address Hallucination

Claude
9.4/10

Anthropic's AI assistant with industry-leading reasoning and safety

Free / $20/mo Pro / API from $3/M tokens
Perplexity AI
8.5/10

AI-powered search engine with cited, real-time answers

Free / $20/mo Pro