Advanced Techniques
Go beyond basics: few-shot chaining, XML structuring, conditional logic, and multi-persona instructions for complex workflows.
🔄 In Lesson 3, you wrote your first custom instructions with RISEN. They work, but they handle only one persona and one output style. Real workflows need more flexibility. Let’s level up.
Technique 1: XML Tag Structuring
When instructions get complex, formatting matters. XML tags create clear boundaries:
<identity>
You are a senior full-stack developer with expertise in
TypeScript, React, and Node.js.
</identity>
<instructions>
- Write production-ready code, not tutorials
- Include error handling for all async operations
- Use TypeScript strict mode conventions
</instructions>
<output_format>
1. Brief explanation of approach (2-3 sentences)
2. Complete code block
3. Edge cases to consider (bullet list)
</output_format>
<constraints>
- Never use `any` type in TypeScript
- Prefer functional components over class components
- Maximum 50 lines per function
</constraints>
Why this works: The AI can clearly distinguish between who it is, what it should do, how to format output, and what limits apply. Without tags, a long paragraph of mixed instructions gets muddled.
Anthropic specifically recommends XML for Claude. But the technique works on ChatGPT and Gemini too — clear structure helps any model.
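Because each section is just a tag pair wrapped around body text, you can keep the sections as data and generate the tagged block on demand. A minimal sketch in Python (the `build_instructions` helper and section names are illustrative, not part of any SDK):

```python
def build_instructions(sections: dict[str, str]) -> str:
    """Wrap each labeled section in matching XML-style tags.

    `sections` maps a tag name (e.g. "identity") to its body text;
    insertion order is preserved in the output.
    """
    parts = []
    for tag, body in sections.items():
        parts.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n\n".join(parts)

prompt = build_instructions({
    "identity": "You are a senior full-stack developer.",
    "constraints": "- Never use `any` in TypeScript",
})
print(prompt)
```

Keeping sections as data makes it easy to swap one section (say, a different persona) without touching the rest.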
✅ Quick Check: You have a long set of instructions with role, rules, format preferences, and examples all mixed together in one paragraph. Responses are inconsistent. What’s the fix? (Restructure with XML tags or clear section headers. Separating role from rules from format lets the AI parse each section independently instead of trying to extract meaning from a wall of text.)
Technique 2: Conditional Logic
One instruction set, multiple behaviors. Use “For X tasks” patterns:
<task_rules>
FOR CODE TASKS:
- Include type hints and docstrings
- Show error handling
- Add inline comments for complex logic
FOR WRITING TASKS:
- Use active voice
- Keep paragraphs to 3 sentences max
- Match the tone to the audience I specify
FOR DATA ANALYSIS:
- Show your methodology
- Include confidence levels
- Visualize with tables when helpful
FOR BRAINSTORMING:
- Go rapid-fire, 10+ ideas
- Include unconventional options
- Don't filter or critique yet
</task_rules>
The AI reads your request, identifies which category it falls into, and applies the relevant rules. You don’t have to switch settings — it adapts automatically.
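The model performs this routing internally, but the dispatch idea is easy to make concrete in code. A minimal sketch, with illustrative keyword lists and abbreviated rule text:

```python
# Abbreviated rule blocks, one per task category.
RULES = {
    "code": "- Include type hints and docstrings\n- Show error handling",
    "writing": "- Use active voice\n- Keep paragraphs short",
    "brainstorm": "- Go rapid-fire, 10+ ideas\n- Don't filter yet",
}

# Trigger words that suggest each category (illustrative only).
KEYWORDS = {
    "code": ("function", "bug", "refactor", "script"),
    "writing": ("draft", "email", "blog", "rewrite"),
    "brainstorm": ("ideas", "brainstorm", "options"),
}

def route(request: str) -> str:
    """Return the rule block whose keywords appear in the request."""
    text = request.lower()
    for task, words in KEYWORDS.items():
        if any(word in text for word in words):
            return RULES[task]
    return ""  # no match: fall back to general behavior

print(route("Help me refactor this script"))
```

The AI's version of this is far more flexible than keyword matching, of course; the point is the shape: classify first, then apply the matching rules.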
Technique 3: Few-Shot Examples in Instructions
You used one example in Lesson 3. Now use multiple examples to teach complex patterns:
<examples>
When I share a business idea:
INPUT: "App that connects dog walkers with pet owners"
OUTPUT: "Market: $1.3B pet services. Competition: Rover, Wag.
Differentiator needed. Questions: What's your geographic focus?
Pricing model? How do you handle trust/safety?"
When I ask for code review:
INPUT: [paste of code]
OUTPUT: "3 issues found:
1. [Critical] SQL injection risk on line 12 — use parameterized queries
2. [Medium] Missing null check on user.email
3. [Style] Inconsistent naming: camelCase on L5, snake_case on L8"
When I ask a factual question:
INPUT: "What's the current US inflation rate?"
OUTPUT: "~3.1% as of early 2026 (BLS CPI data). But this changes
monthly — verify with the latest BLS release for your needs."
</examples>
Three examples, three different interaction types. The AI now has templates for how you want each kind of response structured.
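Because every example follows the same trigger/INPUT/OUTPUT shape, you can store them as records and render the `<examples>` block from data. A sketch, with illustrative function and field names:

```python
def render_examples(examples: list[dict]) -> str:
    """Render trigger/input/output records into an <examples> block."""
    blocks = []
    for ex in examples:
        blocks.append(
            f'{ex["trigger"]}:\n'
            f'INPUT: {ex["input"]}\n'
            f'OUTPUT: {ex["output"]}'
        )
    return "<examples>\n" + "\n\n".join(blocks) + "\n</examples>"

section = render_examples([
    {
        "trigger": "When I share a business idea",
        "input": '"App that connects dog walkers with pet owners"',
        "output": '"Market size, competitors, open questions."',
    },
])
print(section)
```

Storing examples as data also makes it trivial to add a new interaction type later: append a record, re-render, paste.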
✅ Quick Check: You write an example showing the AI how to give code review feedback. But when you ask for writing feedback, it uses the same format. What’s missing? (You need separate examples for each task type. One code review example only teaches the code review pattern. Add a writing feedback example to teach that pattern too.)
Technique 4: Metacognitive Instructions
Tell the AI how to think, not just what to say:
<thinking_process>
Before responding:
1. Identify the task type (code, writing, analysis, brainstorming)
2. Consider what I probably need vs. what I literally asked
3. If my request is ambiguous, ask ONE clarifying question
4. Check: does my response actually answer the question?
</thinking_process>
This is subtle but powerful. It forces the AI to pause and classify your request before jumping to a response. The result: fewer misunderstandings, more relevant answers.
Technique 5: Priority Stacking
When you have many rules, some matter more than others. Make priorities explicit:
<priorities>
ALWAYS (non-negotiable):
- Be truthful. If you don't know, say so.
- Cite sources for factual claims
- Ask before assuming on ambiguous requests
USUALLY (default behavior):
- Keep responses under 300 words
- Use bullet points for lists
- Lead with the answer
SOMETIMES (context-dependent):
- For creative tasks, be more expansive
- For quick questions, skip formatting
</priorities>
Priority stacking prevents the rigid-vs-flexible problem. The AI knows that truth-telling is non-negotiable, conciseness is a default that can flex, and expansion is situational.
Combining Techniques
Here’s what a well-structured advanced instruction set looks like:
<identity>
Senior product manager at a B2B SaaS startup.
Focus: conversion optimization, user research, data-driven decisions.
</identity>
<priorities>
ALWAYS: Be direct. Challenge my assumptions. Cite data.
USUALLY: Concise responses. Bullet points. Lead with the answer.
SOMETIMES: For strategy, be thorough. For brainstorming, be creative.
</priorities>
<task_rules>
FOR PRODUCT DECISIONS: Framework → data → recommendation → risks
FOR USER RESEARCH: Questions first, then analysis structure
FOR WRITING: Active voice, no jargon, specific numbers
FOR STRATEGY: Long-form analysis is okay. Include alternatives.
</task_rules>
<examples>
When reviewing a feature spec:
"3 gaps: (1) No success metrics defined. (2) Edge case: what
happens when [X]? (3) Timeline assumes zero dependencies — risky."
</examples>
This set handles multiple task types, has clear priorities, and shows the AI exactly what good output looks like — all in under 1,000 characters.
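Character budgets matter because some platforms cap custom-instruction length. If you assemble your set programmatically, you can enforce the budget at build time. A sketch, assuming a hypothetical 1,000-character budget and abbreviated section text:

```python
CHAR_BUDGET = 1000  # hypothetical cap; check your platform's actual limit

def combine(sections: dict[str, str], budget: int = CHAR_BUDGET) -> str:
    """Join tagged sections and fail loudly if the result exceeds `budget`."""
    combined = "\n\n".join(
        f"<{tag}>\n{body.strip()}\n</{tag}>" for tag, body in sections.items()
    )
    if len(combined) > budget:
        raise ValueError(f"{len(combined)} chars, over the {budget} budget")
    return combined

result = combine({
    "identity": "Senior product manager at a B2B SaaS startup.",
    "priorities": "ALWAYS: Be direct. Cite data.",
})
print(len(result))
```

Failing at build time beats discovering later that a platform silently truncated your constraints section.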
Key Takeaways
- XML tags create clear section boundaries — essential for complex instruction sets
- Conditional logic (“For code tasks… For writing tasks…”) lets one set handle multiple workflows
- Multiple few-shot examples teach the AI different response patterns for different task types
- Metacognitive instructions (“Before responding, identify the task type”) reduce misunderstandings
- Priority stacking (Always / Usually / Sometimes) balances rigidity with flexibility
Up Next
You know the techniques. But every platform implements them differently. In Lesson 5, we’ll master platform-specific features: ChatGPT’s Custom GPTs and Projects, Claude’s XML-optimized system, Gemini’s Gems, and Copilot’s memory system. Same principles, different superpowers.