Common Mistakes
Learn to identify and fix the most common prompting mistakes that lead to poor AI responses.
Recall: Format and Structure
You’ve learned to specify formats (Lesson 5), provide context (Lesson 4), and structure prompts with RTCF (Lesson 3). Now let’s look at what goes wrong—and how to fix it.
By the end of this lesson, you’ll be able to:
- Identify the 7 most common prompting mistakes
- Diagnose why a prompt isn’t working
- Apply specific fixes to improve responses
Mistake #1: Being Too Vague
The problem: “Help me write better” or “Improve this”
Why it fails: AI doesn’t know what “better” means to you. Clearer? Shorter? More formal? More persuasive?
The fix: Be specific about what “better” means.
| Vague | Specific |
|---|---|
| “Make it better” | “Make it more concise—cut 50% of words” |
| “Improve this email” | “Make this email more professional and remove casual language” |
| “Help with my resume” | “Rewrite my job descriptions using action verbs and quantified achievements” |
Mistake #2: No Context
The problem: “How should I respond to this?”
Why it fails: AI has no idea about your relationship, history, goals, or constraints.
The fix: Load context before asking.
“My manager sent this email (pasted below). Some context: I’m up for promotion next quarter, she tends to be direct but appreciates pushback when I disagree, and this project is behind schedule due to resource constraints she knows about. How should I respond?”
Mistake #3: Assuming AI Knows What You Know
The problem: Using jargon, referencing internal projects, or assuming shared knowledge.
Why it fails: AI has no knowledge of your company, your project names, or your internal terminology.
The fix: Explain terms and provide background.
“We’re working on Project Phoenix (our internal name for the customer onboarding redesign). The current onboarding takes 14 days average; we’re targeting 3 days. Help me draft a progress update for stakeholders.”
Mistake #4: Asking AI to Verify Its Own Facts
The problem: “Make sure everything in this response is accurate.”
Why it fails: AI has no built-in fact-checking mechanism. It generates plausible text, not verified truth. Asking it to confirm its own output is like asking a confident guesser to grade their own guesses.
The fix: Fact-check important claims yourself.
“Generate a draft comparing these three project management tools. I’ll verify the specific feature claims and pricing myself—focus on the comparison structure.”
Mistake #5: One Giant Prompt
The problem: 2,000-word prompt trying to accomplish everything at once.
Why it fails: Complex prompts lead to confused responses. AI may address some parts while ignoring others.
The fix: Break it down. Do one thing at a time, then build.
Instead of: “Write a complete marketing strategy with competitor analysis, positioning, messaging, channels, budget, and timeline”
Try:
- “First, analyze our main competitors: [list]. Focus on their positioning and messaging.”
- “Based on that analysis, suggest our positioning and key differentiators.”
- “Now develop messaging for each differentiator.”
- “Recommend marketing channels for our target audience and budget.”
- “Create a timeline for execution.”
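If you work with a model through code rather than a chat window, the same step-by-step approach can be scripted. The sketch below is only an illustration: the `ask` helper is a placeholder stub, not a real API, so swap in whichever chat client you actually use. The point is that each step is one focused request, and each answer is carried forward as context for the next.

```python
# Minimal sketch of step-by-step prompting. `ask` is a placeholder stub,
# not a real library call; replace it with your chat client of choice.
def ask(prompt: str, history: list[dict[str, str]]) -> str:
    """Send `prompt` plus prior turns to your model and return its reply."""
    return f"<model reply to: {prompt[:40]}...>"  # stub so the sketch runs

steps = [
    "First, analyze our main competitors: [list]. Focus on their positioning and messaging.",
    "Based on that analysis, suggest our positioning and key differentiators.",
    "Now develop messaging for each differentiator.",
    "Recommend marketing channels for our target audience and budget.",
    "Create a timeline for execution.",
]

history: list[dict[str, str]] = []
for step in steps:
    reply = ask(step, history)  # one focused task per call
    # Keep each exchange so later steps build on earlier answers.
    history.append({"role": "user", "content": step})
    history.append({"role": "assistant", "content": reply})
```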
Quick check: Before moving on, can you recall the key concept we just covered? Try to explain it in your own words before continuing.
Mistake #6: Accepting the First Response
The problem: Taking whatever AI generates without iteration.
Why it fails: First responses are rarely optimal. They’re starting points, not final products.
The fix: Iterate. Give feedback. Refine.
“This is a good start. Now make it 30% shorter, add a specific example in paragraph 2, and make the call to action more urgent.”
Think of AI as a collaborator in a revision process, not a vending machine that dispenses final products.
Mistake #7: Wrong Tool for the Job
The problem: Using AI for tasks it’s bad at.
Why it fails: AI has structural limitations. Forcing it into unsuitable tasks wastes time and creates errors.
The fix: Know what AI is (and isn’t) good for.
| Don’t Use AI For | Use AI For Instead |
|---|---|
| Real-time information | Summarizing information you provide |
| Complex math/counting | Explaining concepts or writing about math |
| Factual claims you can’t verify | Drafting content you’ll fact-check |
| Replacing expert judgment | Supporting your decision-making |
| One-shot perfect output | Iterative drafts and refinement |
The Debugging Framework
When a prompt isn’t working, ask these questions:
- Was the task clear? Could someone else understand exactly what I wanted?
- Did I provide enough context? Does AI have the background it needs?
- Did I specify format? Did I tell AI how to structure the output?
- Did I show examples? For complex patterns, did I demonstrate what I want?
- Am I asking for too much at once? Should I break this into steps?
- Is this the right tool? Is AI actually suited for this task?
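If you assemble prompts in code, the same questions can serve as a rough pre-flight check before you send anything. This is a minimal sketch under stated assumptions: the field names and the "too short" threshold are illustrative, not a standard.

```python
from dataclasses import dataclass

# Illustrative fields only -- name them however suits your workflow.
@dataclass
class PromptDraft:
    task: str
    context: str = ""
    output_format: str = ""
    examples: str = ""

def preflight(draft: PromptDraft) -> list[str]:
    """Return the debugging-framework questions this draft still fails."""
    warnings = []
    if len(draft.task.split()) < 8:  # arbitrary threshold; tune to taste
        warnings.append("Task may be too vague: could someone else tell exactly what you want?")
    if not draft.context:
        warnings.append("No context: does the model have the background it needs?")
    if not draft.output_format:
        warnings.append("No format: say how the output should be structured.")
    if not draft.examples:
        warnings.append("No examples: for complex patterns, show what you want.")
    return warnings

print(preflight(PromptDraft(task="Analyze this data")))
```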
Quick Diagnosis Exercise
Here’s a failed prompt. What’s wrong and how would you fix it?
Prompt: “Write a response”
AI Response: A generic paragraph about the importance of responding thoughtfully.
Diagnosis: Missing everything—no task specificity, no context, no format.
Fix:
“Write a response to this customer complaint (pasted below). Context: They’re a 3-year customer who had a shipping delay. We’ve already refunded shipping costs. Goal: Keep them as a customer while setting realistic expectations. Tone: Empathetic but not groveling. Format: Under 150 words.”
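If you write this kind of response prompt often, it can be worth templating the pieces. Here is a minimal sketch that assembles the fixed prompt above from its parts (task, context, goal, tone, format); the function and field names are illustrative, not a standard.

```python
# Assemble a response prompt from named pieces. Names are illustrative only.
def build_response_prompt(task: str, context: str, goal: str,
                          tone: str, fmt: str, source_text: str) -> str:
    return (
        f"{task}\n\n"
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Tone: {tone}\n"
        f"Format: {fmt}\n\n"
        f"---\n{source_text}"
    )

print(build_response_prompt(
    task="Write a response to the customer complaint below.",
    context="They're a 3-year customer who had a shipping delay. We've already refunded shipping costs.",
    goal="Keep them as a customer while setting realistic expectations.",
    tone="Empathetic but not groveling.",
    fmt="Under 150 words.",
    source_text="<paste the complaint here>",
))
```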
Practice: Fix These Prompts
Broken Prompt 1: “Analyze this data”
- What’s missing?
- Your improved version:
Broken Prompt 2: “Write something creative”
- What’s missing?
- Your improved version:
Broken Prompt 3: “Is this information correct?” [followed by AI-generated text]
- What’s wrong with this approach?
- What should you do instead?
(Try writing fixes before continuing)
Key Takeaways
- Vague prompts get generic responses—be specific about what “good” means
- Context is critical—AI only knows what you tell it
- Don’t trust AI to fact-check itself—verify important claims
- Break complex tasks into steps—one thing at a time
- Iterate on responses—first drafts are starting points
- Use the debugging framework when prompts fail
Up Next
Lesson 7 introduces advanced techniques: chain-of-thought prompting, role stacking, and other methods that unlock AI’s full potential.