Few-Shot Mastery: Teaching by Example
Design few-shot prompts with strategically chosen examples that teach AI your exact output pattern, classification system, or writing style.
The most reliable way to get AI to produce exactly what you want: show it examples. Few-shot prompting is the technique of including 3-10 input/output examples in your prompt so the AI learns your desired pattern.
🔄 Quick Recall: In the previous lesson, you learned reasoning techniques (CoT, ToT, self-consistency) that improve how AI thinks. Few-shot prompting improves what the AI produces — by showing it the exact pattern you want.
Zero-Shot vs. Few-Shot
Zero-shot (no examples): “Classify this support ticket as Bug, Feature Request, or Question.”
Few-shot (with examples):
<examples>
Input: "The app crashes when I click export"
Category: Bug
Input: "Can you add dark mode?"
Category: Feature Request
Input: "How do I reset my password?"
Category: Question
Input: "Login page shows blank screen on Firefox"
Category: Bug
Input: "Is there a student discount?"
Category: Question
</examples>
<input>The search filter doesn't work when I select multiple tags</input>
The few-shot version is dramatically more reliable because the AI sees exactly what each category looks like — not just the category name.
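In practice, you often assemble few-shot prompts programmatically from a list of example pairs. A minimal sketch (the example pairs and tag names mirror the ticket-classification prompt above; the helper function itself is hypothetical):

```python
# The (input, label) pairs from the ticket-classification prompt above.
EXAMPLES = [
    ("The app crashes when I click export", "Bug"),
    ("Can you add dark mode?", "Feature Request"),
    ("How do I reset my password?", "Question"),
    ("Login page shows blank screen on Firefox", "Bug"),
    ("Is there a student discount?", "Question"),
]

def build_few_shot_prompt(examples, new_input: str) -> str:
    """Wrap example pairs in <examples> tags, then append the new input."""
    lines = ["<examples>"]
    for text, label in examples:
        lines.append(f'Input: "{text}"')
        lines.append(f"Category: {label}")
    lines.append("</examples>")
    lines.append(f"<input>{new_input}</input>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    EXAMPLES, "The search filter doesn't work when I select multiple tags"
)
print(prompt)
```

Keeping examples as data rather than hardcoded prompt text makes it easy to add, reorder, or swap examples as you tune the prompt.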
The Art of Choosing Examples
Not all examples are equal. Strategically chosen examples outperform random ones.
Rule 1: Cover Every Category
For classification, include at least one example per category. Better: 2-3 per category.
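Category coverage is easy to verify automatically before you ship a prompt. A quick sanity check (a hypothetical helper, not part of any library):

```python
def missing_categories(examples, categories) -> set:
    """Return the categories that no example pair covers."""
    covered = {label for _, label in examples}
    return set(categories) - covered

examples = [
    ("The app crashes when I click export", "Bug"),
    ("Can you add dark mode?", "Feature Request"),
]
# "Question" has no example yet, so it is flagged as missing.
print(missing_categories(examples, ["Bug", "Feature Request", "Question"]))
```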
Rule 2: Include Edge Cases
The most valuable examples are the ones that could go either way:
<examples>
<!-- Clear cases -->
Input: "I love this product! Best purchase ever!"
Sentiment: Positive
Input: "Terrible experience. I want a refund."
Sentiment: Negative
<!-- Edge cases — these teach the AI WHERE the boundary is -->
Input: "The product works fine, but the packaging was damaged."
Sentiment: Mixed
Input: "Not bad, I guess. Does what it says."
Sentiment: Neutral
Input: "Great concept, but the execution needs work."
Sentiment: Mixed
</examples>
Without edge cases, the AI handles clear-cut inputs well but guesses on ambiguous ones. Edge cases teach the decision boundaries.
Rule 3: Vary Input Characteristics
If all your examples have similar length, style, or complexity, the AI may overfit to those characteristics:
<!-- BAD: All examples are short, simple sentences -->
Input: "I love it" → Positive
Input: "I hate it" → Negative
Input: "It's okay" → Neutral
<!-- GOOD: Varied length, style, and complexity -->
Input: "I love it"
→ Positive
Input: "After using this for three months, I can confidently say
it has transformed how our team collaborates. The learning curve
was steep initially, but worth every minute."
→ Positive
Input: "meh. works but nothing special tbh"
→ Neutral
✅ Quick Check: You built a few-shot prompt for extracting meeting action items. Your 5 examples all show well-formatted meeting notes with clear “ACTION: Person does X by Date” entries. In practice, real meeting notes are messy — typos, unclear assignments, missing dates. Will your prompt work on real data?
Answer: Probably not well. Your examples only show the “clean” case. Add examples with messy input: “john said he’d look into the budget thing maybe next week” → Action: John to review budget by [next week]. Showing the AI how to handle messy, ambiguous input is what makes few-shot prompts work in production.
Few-Shot for Different Tasks
Style Transfer
Teach the AI your writing voice:
<task>Write a product update email in our company's voice.</task>
<examples>
Topic: New dashboard feature
Our style: "Your dashboard just got smarter. We added real-time
alerts that ping you when metrics move — no more checking
manually. It's live now. Go poke around."
Topic: Pricing change
Our style: "Heads up: we're adjusting pricing starting March 1.
Current plans are locked in until your renewal. Here's what's
changing and why."
Topic: Maintenance window
Our style: "Quick note: we're doing some housekeeping this
Saturday 2-4 AM EST. Things might be spotty for a few minutes.
We'll be done before your coffee kicks in."
</examples>
<topic>New API rate limit increase</topic>
Three examples teach the AI your exact voice: casual, direct, first-person plural, short sentences, no marketing fluff.
Data Extraction
Teach the AI your extraction schema:
<task>Extract structured data from this invoice.</task>
<examples>
Input: "Invoice #1234 from Acme Corp, dated Jan 15 2026.
Items: Web hosting ($99/mo), SSL cert ($29/yr). Total: $128.
Payment due: Feb 15 2026."
Output:
{
  "invoice_number": "1234",
  "vendor": "Acme Corp",
  "date": "2026-01-15",
  "items": [
    {"description": "Web hosting", "amount": 99, "recurring": "monthly"},
    {"description": "SSL cert", "amount": 29, "recurring": "yearly"}
  ],
  "total": 128,
  "due_date": "2026-02-15"
}
</examples>
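Because extraction output typically feeds downstream code, it pays to validate the model's JSON before using it. A minimal sketch (the field names match the schema above; the validation helper is illustrative):

```python
import json

# The keys the extraction schema above promises to produce.
REQUIRED_KEYS = {"invoice_number", "vendor", "date", "items", "total", "due_date"}

def parse_invoice(raw: str) -> dict:
    """Parse model output as JSON and verify the expected schema keys exist."""
    data = json.loads(raw)  # raises an error on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

raw = ('{"invoice_number": "1234", "vendor": "Acme Corp", "date": "2026-01-15",'
       ' "items": [], "total": 128, "due_date": "2026-02-15"}')
invoice = parse_invoice(raw)
print(invoice["vendor"])
```

If the model drifts from the schema, you catch it here instead of deep in your pipeline, and a failed parse is a signal to add another example to the prompt.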
Code Generation
Teach the AI your coding patterns:
<task>Write a Python function following our conventions.</task>
<examples>
Request: "Function to validate email format"
Code:
import re

def validate_email(email: str) -> bool:
    """Validate email format using regex pattern."""
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    return bool(re.match(pattern, email))
</examples>
<request>Function to calculate percentage change between two numbers</request>
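For reference, one plausible completion that follows the example's conventions (type hints, one-line docstring); this is a sketch of what the output might look like, not the model's actual answer:

```python
def percentage_change(old: float, new: float) -> float:
    """Calculate percentage change from old value to new value."""
    if old == 0:
        raise ValueError("old value must be non-zero")
    return (new - old) / old * 100
```

A single well-chosen example is often enough for code generation, because style conventions (naming, docstrings, type hints) transfer readily from one function to the next.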
How Many Examples?
| Task Complexity | Recommended Examples | Why |
|---|---|---|
| Simple binary (yes/no) | 2-4 | Just need to show the pattern |
| Multi-class (3-5 categories) | 5-10 | Need diversity across classes |
| Complex extraction | 3-5 detailed | Quality of each example matters more |
| Style transfer | 3-5 | Enough to establish the voice pattern |
| Ambiguous/borderline inputs | 2-3 extra edge cases | These are the most valuable examples |
Token budget matters: Each example consumes tokens from your context window. For a task with a 4K context, you have less room for examples than with a 100K context. Prioritize example quality over quantity.
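A common rule of thumb for English text is roughly four characters per token, which is enough for a back-of-the-envelope budget check (a heuristic only — use your provider's tokenizer for exact counts):

```python
def rough_token_count(text: str) -> int:
    """Estimate tokens using the ~4 characters/token heuristic for English."""
    return max(1, len(text) // 4)

# Estimate how much of a 4K-token window a set of examples would consume.
example = 'Input: "I love this product! Best purchase ever!"\nSentiment: Positive'
total = sum(rough_token_count(e) for e in [example] * 8)
print(f"{total} of 4096 tokens used by examples")
```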
Practice Exercise
- Create a few-shot classification prompt with at least 3 categories and 2 examples per category
- Include one edge case per category that tests the boundary
- Try the same task zero-shot vs. few-shot — compare consistency
- Build a style transfer prompt: give 3 examples of your writing voice, then generate a new piece
- Experiment with example order: move your best example to the end and see if output improves
Key Takeaways
- Few-shot prompting (3-10 examples) is the most reliable technique for consistent, patterned output
- Example quality matters more than quantity — diverse, edge-case-including examples outperform many similar ones
- Cover every category, include edge cases, and vary input characteristics to prevent overfitting
- Recency bias: place your most important examples last for maximum influence
- Few-shot works for classification, style transfer, data extraction, code generation, and more
- Token budget: balance example count with available context window size
Up Next
In the next lesson, you’ll learn to build system prompts and role definitions that shape AI behavior at a deeper level — creating reusable AI “agents” for specific tasks.