AI just told you something completely wrong. With total confidence.
Maybe it cited a study that doesn’t exist. Made up a statistic. Invented a quote. Confidently explained a feature your product doesn’t have.
This is called hallucination, and it’s one of the most frustrating things about working with AI.
The bad news: you can’t eliminate it completely. AI models generate text by predicting what sounds right, not by checking facts.
The good news: you can dramatically reduce it. Individual prompting techniques have been reported to cut hallucination rates by 35% or more, and combining several of them can push the reduction even higher.
Here’s what actually works.
Why AI Hallucinates
Understanding the problem helps fix it.
AI doesn’t “know” things the way humans do. It predicts the most likely next word based on patterns in its training data. When you ask a question, it generates an answer that sounds plausible—whether or not it’s true.
This is why AI hallucinates more when:
- The topic is obscure or recent (less training data)
- You ask for specific details (dates, numbers, names)
- The question is ambiguous (multiple plausible answers)
- You push it to answer when it’s uncertain
The model would rather guess confidently than admit it doesn’t know. Your job is to create conditions where guessing is less likely.
Technique 1: Give Permission to Say “I Don’t Know”
AI defaults to providing an answer, even when it shouldn’t. Explicitly tell it uncertainty is okay.
Add to your prompt:
If you're not certain about something, say so. It's better to
say "I'm not sure" than to guess. I'd rather have no answer
than a wrong answer.
This simple addition changes the incentive. Instead of optimizing for “give an answer,” AI optimizes for “give an accurate answer or acknowledge uncertainty.”
Anthropic's own prompt-engineering guidance for Claude recommends exactly this, noting that permission to express uncertainty alone significantly reduces hallucination on factual questions.
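If you call a model through an API rather than a chat window, you can bake this permission into every request with a system prompt. Here's a minimal sketch using the Anthropic Python SDK; the model ID and example question are placeholders, and any chat-style API works the same way:

import anthropic

# Permission-to-be-uncertain system prompt (Technique 1), applied to every call.
SYSTEM_PROMPT = (
    "If you're not certain about something, say so. It's better to say "
    "\"I'm not sure\" than to guess. I'd rather have no answer than a wrong answer."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder; use whatever model you have
    max_tokens=500,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Who won the 1998 Fields Medal?"}],
)
print(response.content[0].text)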
Technique 2: Ask for Sources and Quotes
When AI has to cite its sources, it’s more careful about claims.
For documents:
Based only on the document I've provided, answer this question.
Quote the specific passages that support your answer. If the
document doesn't contain the information, say so.
For general knowledge:
Explain X. For any specific claims, note whether you're confident
in the information or if it should be verified.
The act of citing forces AI to ground responses in something concrete rather than generating plausible-sounding fiction.
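Quotes also give you something you can check mechanically. Here's a sketch under the same assumed Anthropic SDK setup; answer_with_quotes is a hypothetical helper, and the QUOTE: format is my own convention. It flags any supporting quote that doesn't literally appear in the source document:

import anthropic

client = anthropic.Anthropic()

def answer_with_quotes(document: str, question: str) -> str:
    prompt = (
        f"Here is a document:\n\n{document}\n\n"
        f"Based only on this document, answer: {question}\n"
        "Quote each passage that supports your answer on its own line, "
        "prefixed with 'QUOTE: '. If the document doesn't contain the "
        "information, say so."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=700,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.content[0].text
    # A fabricated citation won't survive a literal substring check.
    for line in answer.splitlines():
        if line.startswith("QUOTE: "):
            quote = line[len("QUOTE: "):].strip().strip('"')
            if quote and quote not in document:
                print(f"WARNING: quoted passage not found in source: {quote!r}")
    return answer

The substring check is deliberately strict: a paraphrased "quote" fails it, which is exactly the behavior you want when hunting for fabrications.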
Technique 3: Break Down Complex Questions
AI hallucinates more on complex, multi-part questions. The model tries to handle everything at once and fills gaps with guesses.
Instead of:
Tell me about the history, key features, pricing, and customer
reviews of ProductX, and compare it to ProductY and ProductZ.
Try:
Let's break this down step by step.
First: What are the key features of ProductX?
Then follow up with separate questions for each part. You’ll get more accurate answers and can catch errors earlier.
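In code, decomposition is just a loop: one focused sub-question per turn, with the conversation history carried forward so later answers can build on earlier ones. A sketch under the same assumed setup (ProductX and the sub-questions are placeholders):

import anthropic

client = anthropic.Anthropic()

# One focused question per turn; the shared history lets answers build on
# each other and lets you spot an error before it contaminates later parts.
sub_questions = [
    "What are the key features of ProductX?",
    "How is ProductX priced?",
    "How do ProductX's key features compare with ProductY's?",
]

messages = []
for question in sub_questions:
    messages.append({"role": "user", "content": question})
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=500,
        messages=messages,
    )
    answer = response.content[0].text
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")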
Technique 4: Use Chain-of-Thought Prompting
Asking AI to show its reasoning dramatically improves accuracy on tasks that require logic.
Add:
Think through this step by step before giving your final answer.
Show your reasoning.
When AI has to articulate each step, it's more likely to catch its own errors. One 2024 study reported that chain-of-thought prompting reduced mathematical errors by 28% in GPT-4.
This works because hallucinations often happen when AI skips steps and jumps to conclusions. Forcing explicit reasoning closes those gaps.
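If you consume the output in code, ask for a marker such as "Final answer:" so you can keep the reasoning for auditing but extract only the conclusion. A sketch under the same assumed Anthropic SDK setup; the marker convention and example question are mine, not a model feature:

import anthropic

client = anthropic.Anthropic()

COT_SUFFIX = (
    "\n\nThink through this step by step before giving your final answer. "
    "Show your reasoning, then put your conclusion on one last line that "
    "starts with 'Final answer:'."
)

question = "A train leaves at 9:40 and the trip takes 2h 35m. When does it arrive?"
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1000,
    messages=[{"role": "user", "content": question + COT_SUFFIX}],
)
text = response.content[0].text

# Keep the full reasoning for auditing; extract just the conclusion for use.
final = next(
    (line for line in reversed(text.splitlines()) if line.startswith("Final answer:")),
    text,  # fall back to the whole response if the marker is missing
)
print(final)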
Technique 5: Constrain the Scope
The broader the question, the more room for hallucination. Narrow it down.
Broad (risky):
Tell me about machine learning.
Narrow (safer):
Explain the difference between supervised and unsupervised learning.
Keep it to 3-4 sentences. Focus on the key distinction, not all the details.
Shorter, more focused responses give the model less room to wander into uncertain territory and start making things up.
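If you're calling an API, you can back up the wording with a hard output cap. A brief sketch; the max_tokens value is an arbitrary example, not a recommendation:

import anthropic

client = anthropic.Anthropic()

narrow_prompt = (
    "Explain the difference between supervised and unsupervised learning. "
    "Keep it to 3-4 sentences. Focus on the key distinction, not all the details."
)
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=200,  # hard cap that backs up the "3-4 sentences" instruction
    messages=[{"role": "user", "content": narrow_prompt}],
)
print(response.content[0].text)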
Technique 6: Provide Reference Material
If you need AI to work with specific information, give it that information. Don’t rely on what it “knows.”
For factual tasks:
Here is the product documentation:
[paste documentation]
Based only on this documentation, answer the customer's question:
[question]
Do not add information that isn't in the documentation.
For analysis:
Here is the data:
[paste data]
Analyze only what's in this data. Don't make assumptions about
information that isn't included.
When AI has source material to work from, it’s much less likely to invent things.
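This pattern is easy to wrap in a function so that every answer is forced through your source material. A sketch under the same assumed Anthropic SDK setup; grounded_answer is a hypothetical helper name:

import anthropic

client = anthropic.Anthropic()

def grounded_answer(documentation: str, question: str) -> str:
    # Everything the model says must come from the supplied documentation.
    prompt = (
        f"Here is the product documentation:\n\n{documentation}\n\n"
        f"Based only on this documentation, answer the customer's question:\n"
        f"{question}\n\n"
        "Do not add information that isn't in the documentation. If the "
        "documentation doesn't cover it, say so."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=600,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text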
Technique 7: Ask It to Verify Its Own Answer
This is surprisingly effective. After AI gives an answer, ask it to check for accuracy.
Follow-up prompt:
Now review your answer. Are there any claims you made that might
be inaccurate or that you're uncertain about? Point them out.
AI often catches its own hallucinations when explicitly asked to look for them. This self-verification step adds a layer of quality control.
For critical tasks, you can even structure this into your original prompt:
Answer the question, then review your response for accuracy.
Flag anything you're less than confident about.
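Over an API, self-verification is a second call that replays the first answer as an assistant turn, then asks for the review. A sketch under the same assumptions as the earlier examples:

import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20241022"  # placeholder model ID

question = "What were the main causes of the 1973 oil crisis?"

# Pass 1: get an answer.
first = client.messages.create(
    model=MODEL,
    max_tokens=700,
    messages=[{"role": "user", "content": question}],
)
answer = first.content[0].text

# Pass 2: replay the answer as an assistant turn and ask the model to audit it.
review = client.messages.create(
    model=MODEL,
    max_tokens=500,
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": (
            "Now review your answer. Are there any claims you made that might "
            "be inaccurate or that you're uncertain about? Point them out."
        )},
    ],
)
print(answer)
print("--- self-review ---")
print(review.content[0].text)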
Combining Techniques
No single technique eliminates hallucinations. But combining them multiplies the effect.
Here’s a prompt template that incorporates multiple techniques:
I'm going to ask you about [topic].
Guidelines:
1. Only use information you're confident about
2. If you're uncertain, say "I'm not certain" rather than guessing
3. For specific claims, note your confidence level
4. Think through your answer step by step
5. After answering, briefly note any parts that should be verified
My question: [your question]
This prompt:
- Gives permission to express uncertainty (Technique 1)
- Asks for confidence levels (Technique 2)
- Encourages step-by-step reasoning (Technique 4)
- Includes self-verification (Technique 7)
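Wired into code, the template becomes a reusable function you can drop in front of any question. A sketch; careful_ask and the sample question are illustrative only:

import anthropic

client = anthropic.Anthropic()

GUIDELINES = """Guidelines:
1. Only use information you're confident about
2. If you're uncertain, say "I'm not certain" rather than guessing
3. For specific claims, note your confidence level
4. Think through your answer step by step
5. After answering, briefly note any parts that should be verified"""

def careful_ask(topic: str, question: str) -> str:
    prompt = (
        f"I'm going to ask you about {topic}.\n\n"
        f"{GUIDELINES}\n\n"
        f"My question: {question}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(careful_ask("astronomy", "How was the distance to the Andromeda galaxy first measured?"))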
What You Can’t Prevent
Let’s be realistic about limitations.
AI will still hallucinate sometimes, especially about:
- Recent events (after training cutoff)
- Obscure details (specific dates, niche statistics, minor figures)
- Technical specifics (exact API parameters, code syntax details)
- Quotes and citations (it often fabricates these)
For anything where accuracy is critical:
- Verify independently
- Don’t rely on AI for citations without checking them
- Use AI for drafts and ideas, not final facts
The Verification Mindset
The ultimate solution isn’t a prompt technique—it’s a mindset shift.
Treat AI output as a first draft that needs verification, not a final answer. Use AI to:
- Generate ideas quickly
- Draft content you’ll review
- Explore possibilities
- Summarize material you’ve provided
Don’t use AI as an oracle that knows things you don’t. It doesn’t. It predicts what sounds right. Sometimes it’s wrong.
With the techniques above, you’ll get wrong answers much less often. But “less often” isn’t “never.”
Trust, but verify.
Quick Reference: The 7 Techniques
- Give permission to say “I don’t know” — Explicit instruction that uncertainty is okay
- Ask for sources and quotes — Ground responses in specific evidence
- Break down complex questions — Handle one part at a time
- Use chain-of-thought — “Think step by step” before answering
- Constrain the scope — Narrower questions, shorter answers
- Provide reference material — Give it the facts to work from
- Ask for self-verification — Have AI check its own answer
Use them individually for quick improvements. Combine them for maximum reliability.
Your AI just got a lot more trustworthy.