Lesson 7 12 min

Troubleshooting & Iteration

Why instructions get ignored, how to diagnose problems, and systematic techniques for fixing custom instructions that aren't performing.

🔄 You’ve built templates (Lesson 6) and installed them (Lesson 5). But here’s the reality: custom instructions don’t work perfectly on the first try. They need debugging, just like code. Let’s learn how.

The Five Most Common Problems

Problem 1: Instructions Are Too Vague

Symptom: The AI’s behavior doesn’t noticeably change after installing instructions.

Diagnosis: Read your instructions and ask: “Would a human colleague know exactly what to do with these?” If the answer is no, the AI doesn’t either.

Examples of vague vs. specific:

Vague (ignored) → Specific (followed)

  • “Be concise” → “Limit responses to 3 sentences unless I ask for more”
  • “Be professional” → “Use formal register. No contractions. Address the reader as ‘you.’”
  • “Give good code” → “Include type hints, docstrings, and error handling for all functions”
  • “Be creative” → “Generate 10+ ideas including 3 unconventional ones”

The fix: Replace every adjective with a measurable behavior. If you can’t measure it, the AI can’t reliably follow it.

Quick Check: Your instruction says “write clearly.” The AI produces technically correct but dense academic prose. What went wrong? (“Write clearly” means different things to different audiences. Replace with specifics: “Use short sentences (under 20 words). One idea per paragraph. Grade 8 reading level.”)
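Measurable rules have a property vague ones lack: you can check compliance mechanically. A minimal sketch of such a check (the function name and thresholds are illustrative, not from any platform):

```python
import re

def check_measurable_rules(response: str) -> dict:
    """Check a response against measurable instruction rules.

    Rules checked (illustrative thresholds):
      - "Limit responses to 3 sentences"
      - "Use short sentences (under 20 words)"
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+\s*", response) if s.strip()]
    return {
        "max_3_sentences": len(sentences) <= 3,
        "sentences_under_20_words": all(len(s.split()) < 20 for s in sentences),
    }

report = check_measurable_rules(
    "Use short sentences. One idea per paragraph. Keep it simple."
)
```

Notice that no equivalent function can be written for “be concise” or “write clearly” — which is exactly why those rules get ignored.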

Problem 2: Conflicting Instructions

Symptom: The AI follows some instructions but ignores others, seemingly at random.

Diagnosis: Look for contradictions in your instruction set:

  • “Be concise” + “Be thorough” → which takes priority?
  • “Always use bullet points” + your request for a cover letter → format conflict
  • “Never assume” + “Anticipate what I need” → behavioral conflict

The fix: Use priority stacking from Lesson 4. Make it explicit which rules override which:

“Default to concise (under 200 words). When I ask for detailed analysis, switch to thorough — ignore the length limit for those requests.”
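Priority stacking is just a default plus an explicit override. The same logic, sketched as a hypothetical rule resolver (the function and rule names are invented for illustration):

```python
def active_rules(request_wants_detail: bool) -> list[str]:
    """Resolve which instruction rules apply, with explicit priority.

    Mirrors the example above: concise is the default; a request for
    detailed analysis overrides the length limit.
    """
    rules = ["plain language", "bullet points where natural"]
    if request_wants_detail:
        rules.append("thorough analysis")   # override: no length cap
    else:
        rules.append("under 200 words")     # default: concise
    return rules
```

If you can write your instruction set as an if/else like this, the priority is unambiguous; if you can’t, the rules probably conflict.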

Problem 3: Instruction Drift in Long Conversations

Symptom: The AI follows instructions well for the first 5-10 messages, then gradually stops.

Cause: As the conversation grows, your instructions (at the very top of the context) get farther from the AI’s immediate attention. AI models have a recency bias — they attend more to recent messages.

The fix:

  • Reinforce periodically: Every 10-15 messages, restate key rules: “Remember: keep it concise, bullet points.”
  • Start new conversations for new topics rather than using one endless thread
  • Use Claude Projects — their instructions stay prominently attached throughout
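The “reinforce periodically” fix can be automated if you script your conversations. A sketch of a history wrapper that re-states key rules every N user messages (purely illustrative; no real chat API is assumed):

```python
KEY_RULES = "Remember: keep it concise, bullet points."

def add_message(history: list[dict], text: str, every: int = 10) -> None:
    """Append a user message, re-stating key rules every `every` messages.

    Counteracts recency bias: the reminder keeps core rules near the end
    of the context, where the model attends most. (The reminder itself
    counts as a user message, so spacing is approximate after the first.)
    """
    user_count = sum(1 for m in history if m["role"] == "user")
    if user_count > 0 and user_count % every == 0:
        history.append({"role": "user", "content": KEY_RULES})
    history.append({"role": "user", "content": text})

history: list[dict] = []
for i in range(12):
    add_message(history, f"question {i}")
```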

Problem 4: Over-Constrained Instructions

Symptom: The AI produces robotic, formulaic responses. Every response looks identical regardless of the question.

Diagnosis: You’ve written too many rigid rules with no flexibility.

The fix: Add breathing room:

  • Change “always” to “by default”
  • Add explicit exceptions for creative or unusual tasks
  • Remove rules that don’t meaningfully change behavior (“be helpful” — the AI already tries)

Problem 5: Wrong Level of Specificity

Symptom: Instructions work great for one type of task but terrible for others.

Cause: Your instructions are optimized for a single use case but you’re asking the AI to do multiple types of work.

The fix: Either use conditional logic (Lesson 4) to handle multiple task types, or create separate instruction sets for different workflows.
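The “separate instruction sets per workflow” option amounts to a simple router. A sketch, with task names and rule text invented for illustration:

```python
# Hypothetical instruction sets keyed by workflow.
INSTRUCTION_SETS = {
    "code_review": "Lead with problems. List issues: critical -> important -> minor.",
    "writing": "Short sentences. One idea per paragraph. Grade 8 reading level.",
    "brainstorm": "Generate 10+ ideas including 3 unconventional ones.",
}

def instructions_for(task_type: str) -> str:
    """Pick the instruction set matching the task, with a safe default."""
    return INSTRUCTION_SETS.get(task_type, "Default: be concise, under 200 words.")
```

Keeping the sets separate means tuning one workflow never degrades another.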

The Troubleshooting Protocol

When instructions aren’t working, follow this systematic process:

Step 1: Isolate — Test each instruction individually. Which specific rules are being followed? Which aren’t?

Step 2: Check for conflicts — Read all instructions together. Do any rules contradict each other?

Step 3: Measure — Replace subjective instructions (“be concise”) with measurable ones (“under 100 words”).

Step 4: Simplify — Remove instructions that don’t change behavior. Less is often more.

Step 5: Test — Run three different prompts and verify the AI follows instructions across all of them. One test isn’t enough.

Quick Check: Your instruction says “Be critical when reviewing my work.” But the AI still starts every review with “Great work!” What’s the fix? (Be more specific: “When reviewing my work, skip all positive feedback. Lead with the problems. List issues in order: critical → important → minor. Do not use phrases like ‘great work’ or ‘nice job.’”)
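Step 5’s “three prompts, not one” advice combines naturally with Step 1: run each rule check across several responses and flag any rule that fails anywhere. A sketch using the same kind of mechanical checks as before (the checks and sample responses are invented):

```python
def failing_rules(responses: list[str], checks: dict) -> set[str]:
    """Return the names of rules that any response violates.

    `checks` maps rule name -> predicate over a response string.
    Reporting failures per rule is Step 1 (Isolate) applied to the
    multi-prompt test results from Step 5.
    """
    return {
        name
        for name, passes in checks.items()
        for resp in responses
        if not passes(resp)
    }

checks = {
    "under_100_words": lambda r: len(r.split()) < 100,
    "uses_bullets": lambda r: r.lstrip().startswith("-"),
}
failures = failing_rules(["- short bullet answer", "a long-ish plain answer"], checks)
```

The output tells you exactly which rule to rewrite, instead of a vague sense that “instructions aren’t working.”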

The Iteration Cycle

Custom instructions aren’t write-once. Here’s a healthy iteration schedule:

  • Day 1: Install your initial instructions using RISEN
  • Day 3: Note what’s working and what’s not
  • Week 1: Review and adjust — fix specific problems
  • Month 1: Major revision — you’ll have a clear picture of what you actually need
  • Quarterly: Check if your role, tools, or workflow has changed

Each iteration makes your instructions tighter and more effective. Think of it like code: the first version works, but the tenth version is clean.

Security: What Not to Put in Instructions

A quick note on security. Custom instructions can be extracted by other users in some contexts (shared Custom GPTs, API system prompts). Don’t include:

  • Passwords, API keys, or tokens
  • Confidential company information
  • Personal data (home address, SSN)
  • Trade secrets

OWASP lists prompt injection as the top risk in its Top 10 for LLM Applications, and extraction attacks are routine. Keep sensitive data out of instructions entirely.
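Before sharing an instruction set (a Custom GPT, a team template), a quick mechanical scan catches the obvious leaks. A minimal sketch; the patterns are illustrative and nowhere near exhaustive — real secret scanners use far larger rule sets:

```python
import re

# Illustrative patterns only: key-like token, SSN-shaped number, inline password.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def looks_sensitive(instructions: str) -> bool:
    """True if the instruction text matches any secret-like pattern."""
    return any(p.search(instructions) for p in SECRET_PATTERNS)

flagged = looks_sensitive("Always sign off as Dana. password: hunter2")
```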

Key Takeaways

  • Most instruction failures come from vagueness — replace subjective descriptions with measurable behaviors
  • Conflicting instructions cause inconsistent behavior — use priority stacking to resolve conflicts
  • Long conversations cause drift — reinforce key rules periodically or start fresh threads
  • Over-constrained instructions produce robotic output — add flexibility with “by default” and exceptions
  • Follow the 5-step troubleshooting protocol: Isolate → Check conflicts → Measure → Simplify → Test

Up Next

Final lesson. In the Capstone, you’ll build your personal instruction library — a collection of tested, refined instruction sets organized by workflow. Plus a maintenance plan to keep them current as AI platforms evolve.

Knowledge Check

1. Your custom instructions say 'be concise' but the AI keeps giving long responses. What's the most likely cause?

2. You write detailed custom instructions but the AI seems to follow them inconsistently. Which troubleshooting step comes first?

3. You notice the AI follows your instructions in short conversations but drifts in longer ones. What's happening?
