Output Formats
Learn how to get AI to deliver responses in exactly the format you need—tables, lists, code, JSON, and more.
Recall: Context and Few-Shot Prompting
In Lesson 4, you learned that showing AI examples (few-shot prompting) is often more effective than explaining what you want in the abstract. This principle is especially powerful when it comes to output formats.
By the end of this lesson, you’ll be able to:
- Request specific output formats confidently
- Choose the right format for different tasks
- Use templates to get consistent, usable outputs
Why Format Matters
Imagine asking AI to summarize a document. It gives you three paragraphs of prose.
But you needed bullet points for a presentation slide.
Now you’re reformatting manually—exactly the kind of work AI should save you from.
The fix: Specify your format upfront.
Common Output Formats
Here are the formats you’ll use most often:
1. Bullet Points / Lists
Best for: Summaries, quick reference, scanning
Summarize this article as 5-7 bullet points,
each one sentence max.
2. Numbered Lists
Best for: Steps, rankings, sequences
Give me a 10-step plan to launch this product,
numbered 1-10 in order of execution.
3. Tables
Best for: Comparisons, structured data, analysis
Compare these 3 options in a table with columns for:
Name, Cost, Pros, Cons, Best For
4. Headers and Sections
Best for: Long documents, reports, reference material
Write this report with clear headers:
## Executive Summary
## Key Findings
## Recommendations
## Next Steps
5. Code
Best for: Programming, automation, technical tasks
Write a Python function that [does X].
Include comments explaining each section.
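For a sense of what that prompt asks for, here is the kind of commented function it might return (the conversion task below is just a hypothetical stand-in for your own):

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    # Standard conversion formula: F = C * 9/5 + 32
    return celsius * 9 / 5 + 32

# Quick check of the function
print(celsius_to_fahrenheit(20))  # 68.0
```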
6. JSON / Structured Data
Best for: When output feeds into other systems
Return the results as JSON with this structure:
{
"name": "string",
"score": number,
"tags": ["array", "of", "strings"]
}
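Why insist on JSON? Because another program can consume it directly. Here is a minimal Python sketch, assuming the model returns exactly the structure above (the values are made up for illustration):

```python
import json

# Hypothetical model output following the requested structure
raw_output = '{"name": "Option A", "score": 8.5, "tags": ["fast", "affordable"]}'

data = json.loads(raw_output)          # Raises an error if the output isn't valid JSON
assert isinstance(data["tags"], list)  # Simple shape check before passing data along
print(data["name"], data["score"])     # Option A 8.5
```

If the model wraps the JSON in prose or code fences, you would need to strip that before parsing, which is why prompts for structured data often add "return only the JSON, no extra text."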
The Format Formula
Here’s a reliable pattern for specifying format:
“Present your response as [format] with [specific structure]. Each [item] should include [requirements].”
Examples:
For a comparison:
“Present your response as a table with columns for Feature, Tool A, Tool B, and Winner. Each row should cover one key feature.”
For action items:
“Present your response as a numbered list. Each item should start with an action verb and include an owner and deadline.”
For analysis:
“Present your response with these sections: Summary (2-3 sentences), Key Points (bullet list), Recommendations (numbered). Use headers for each section.”
Format Templates You Can Copy
Template: Meeting Notes → Action Items
Convert these meeting notes into action items.
Format each action item as:
- [ ] [Action verb + task] | Owner: [name] | Due: [date]
Group by project or topic with headers.
Only include items that require someone to DO something.
Meeting notes:
[paste notes here]
Template: Comparison Analysis
Compare [A] vs [B] vs [C].
Create a table with these columns:
| Criterion | [A] | [B] | [C] | Notes |
Criteria to compare:
1. [criterion 1]
2. [criterion 2]
3. [criterion 3]
After the table, add a "Bottom Line" section with
a 2-sentence recommendation.
Template: Document Summary
Summarize this document in three formats:
1. ONE SENTENCE: The core message in one sentence
2. TWEET: Under 280 characters, conversational
3. EXECUTIVE SUMMARY: 3-5 bullet points with key takeaways
Document:
[paste document here]
Matching Format to Task
| Task | Best Format | Why |
|---|---|---|
| Quick update to your boss | Bullet points | Easy to scan |
| Comparing vendors | Table | Side-by-side comparison |
| Process documentation | Numbered steps | Clear sequence |
| Code review feedback | Sectioned with headers | Organized by concern |
| Data for a spreadsheet | CSV or table | Direct paste |
| Data for an app | JSON | Machine-readable |
| Presentation content | Slides with titles + bullets | Presentation-ready |
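The last two rows assume software, not a person, will read the output. As a rough sketch of the "CSV or table" case, assuming the model returns plain comma-separated lines with a header row (the vendor data here is invented):

```python
import csv
import io

# Hypothetical model output in CSV form
raw_output = """name,cost,rating
Vendor A,1200,4.5
Vendor B,950,4.1"""

# Parse each line into a dict keyed by the header row
rows = list(csv.DictReader(io.StringIO(raw_output)))
print(rows[0]["name"], rows[0]["cost"])  # Vendor A 1200
```

From there, the rows can be written to a spreadsheet file or loaded into whatever tool you use.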
Quick Check
You need AI to analyze customer feedback and identify themes. Which format request would work best?
A) “Analyze this feedback”
B) “Analyze this feedback and present as: 1) Top 5 themes as a bullet list with frequency counts, 2) Notable quotes table with columns for Quote, Theme, and Sentiment, 3) Summary of recommended actions”
(Answer: B gives AI a clear, useful structure to fill)
Common Format Mistakes
Mistake 1: No format specified. You get whatever AI feels like generating. Fix: Always specify a format for anything beyond simple Q&A.
Mistake 2: Conflicting instructions. "Keep it brief but comprehensive with lots of detail." Fix: Be clear about your priority, length or depth.
Mistake 3: Vague format words. "Make it nice" or "format it well" means nothing. Fix: Specify exactly what "nice" means: bullets? Tables? Headers?
Mistake 4: Forgetting who's using it. Technical JSON when you needed plain text. Fix: Think about the downstream use before requesting a format.
Practical Exercise
Take this messy output request and add format specifications:
Before:
“Give me ideas for our team offsite”
Your improved version: (Consider: How many ideas? What format? What details per idea?)
Example answer:
“Give me 8 ideas for our team offsite. Present as a numbered list. For each idea include: Activity name, Time required, Team size it works for (use ranges like 5-10), and One-sentence description. Group ideas by type: Indoor vs Outdoor.”
Key Takeaways
- Specify format upfront to avoid reformatting work
- Match format to task: tables for comparison, bullets for scanning, sections for reports
- Use templates for consistent, reusable outputs
- Show examples when the format is complex
Up Next
Lesson 6 covers the common mistakes that derail AI interactions—and how to fix them. You’ll learn to diagnose why prompts fail and what to do about it.