Structured Prompting: XML, JSON, and Frameworks
Master XML tags, JSON schemas, and the COSTAR framework to structure prompts that produce reliable, consistent AI output across any model.
The biggest leap in prompt quality comes from structure. Not better words. Not longer prompts. Structure.
A structured prompt tells the AI exactly where each piece of information lives: what’s an instruction, what’s context, what’s an example, and what format the output should take. This eliminates the ambiguity that causes inconsistent results.
XML Tags: The Foundation
XML tags are the most versatile structuring tool. They work across all major AI models, and Claude in particular was trained with XML tags, so it responds especially well to them.
Basic Structure
```xml
<role>You are a financial analyst specializing in SaaS metrics.</role>
<task>Analyze the following quarterly data and provide insights.</task>
<data>
Q1 Revenue: $2.1M, Churn: 4.2%, NRR: 112%
Q2 Revenue: $2.4M, Churn: 3.8%, NRR: 118%
Q3 Revenue: $2.9M, Churn: 3.1%, NRR: 125%
</data>
<output_format>
1. Executive summary (2 sentences)
2. Three key metric trends (with % changes)
3. One risk to flag
4. One opportunity to highlight
</output_format>
```
Compare this to the unstructured version: “You’re a financial analyst. Look at Q1: $2.1M revenue, 4.2% churn, 112% NRR. Q2: $2.4M, 3.8%, 118%. Q3: $2.9M, 3.1%, 125%. Give me insights about the trends and what we should focus on.”
Both contain the same information. The structured version produces more consistent, predictable results.
Common XML Tags
| Tag | Purpose | Example |
|---|---|---|
| `<role>` | Define the AI's persona/expertise | `<role>Senior data scientist</role>` |
| `<task>` | What to do | `<task>Classify these support tickets</task>` |
| `<context>` | Background information | `<context>We launched v2.0 last week</context>` |
| `<constraints>` | Rules and limits | `<constraints>Max 200 words</constraints>` |
| `<examples>` | Few-shot examples | `<examples>Input: X → Output: Y</examples>` |
| `<output_format>` | Desired format | `<output_format>JSON with fields...</output_format>` |
| `<input>` | The data to process | `<input>[user data here]</input>` |
✅ Quick Check: You have a prompt with instructions, context, user input, and expected output format — all in one paragraph. What’s the risk? (Answer: The AI might misinterpret which parts are instructions vs. context vs. input. For example, if your context includes “always double-check calculations,” the AI might treat that as a command to follow, when it was actually describing a company policy. XML tags prevent this: content inside `<context>` is read as background information, while content inside `<instructions>` is treated as commands to follow.)
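To keep tagged prompts consistent, it helps to assemble them programmatically rather than by hand. The sketch below is a hypothetical helper (`build_prompt` is not part of any vendor SDK); it simply wraps each named section in the matching XML tags from the table above.

```python
def build_prompt(**sections: str) -> str:
    """Wrap each named section in matching XML tags, one block per section."""
    parts = []
    for tag, content in sections.items():
        parts.append(f"<{tag}>\n{content.strip()}\n</{tag}>")
    return "\n".join(parts)

# Keyword names become tag names, so the structure stays uniform across prompts.
prompt = build_prompt(
    role="You are a financial analyst specializing in SaaS metrics.",
    task="Analyze the following quarterly data and provide insights.",
    constraints="Max 200 words.",
)
```

Because the tag names come from the keyword arguments, every prompt built this way uses the same boundaries, which is exactly what makes structured prompts reproducible.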
JSON Schema for Output Control
When you need structured output that code can parse, specify a JSON schema:
```xml
<task>Extract contact information from the following email.</task>
<output_format>
Return a JSON object with these fields:
{
  "name": "Full name of the sender",
  "email": "Email address",
  "company": "Company name (or null if not mentioned)",
  "request": "One-sentence summary of what they want",
  "urgency": "low | medium | high"
}
</output_format>
<input>
Hi, I'm Marcus Chen from DataFlow Inc. We urgently need help with our
API integration — our production system is down and we need someone
on a call within the hour. My direct line is marcus@dataflow.io.
</input>
```
Expected output:

```json
{
  "name": "Marcus Chen",
  "email": "marcus@dataflow.io",
  "company": "DataFlow Inc.",
  "request": "Urgent help needed with API integration due to production system outage",
  "urgency": "high"
}
```
The COSTAR Framework
COSTAR provides a complete prompting framework that works for any task:
| Component | What It Defines | Example |
|---|---|---|
| Context | Background and situation | “We’re a B2B SaaS company launching a new feature next month” |
| Objective | What you need | “Write the announcement email for existing customers” |
| Style | Communication approach | “Professional but warm, similar to Stripe’s communication style” |
| Tone | Emotional quality | “Excited but not hyperbolic — confident, not salesy” |
| Audience | Who will receive it | “Current customers who use the Pro plan” |
| Response | Format specification | “Subject line + 3-paragraph email body + CTA button text” |
COSTAR in Practice
```xml
<context>
Our company (TechCorp) sells project management software. We just added
AI-powered task prioritization. Beta testers loved it (92% satisfaction).
We launch publicly next Tuesday.
</context>
<objective>Write the product announcement email.</objective>
<style>Professional but warm. Short sentences. Similar to how Notion
communicates product updates.</style>
<tone>Confident and excited, but not hyperbolic. No "revolutionary"
or "game-changing" — let the feature speak for itself.</tone>
<audience>Current Pro plan customers who have been using the product
for 3+ months. They're familiar with the interface.</audience>
<response>
- Subject line (under 50 characters)
- Pre-header text (under 100 characters)
- Email body (3 paragraphs, each 2-3 sentences)
- One CTA button with text
</response>
```
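When a COSTAR prompt is shared across a team, enforcing that all six components are present is worth automating. This is a sketch, not an official COSTAR library: it maps each component to an XML tag and refuses to build a prompt with a component missing.

```python
# The six COSTAR components, in their conventional order.
COSTAR_ORDER = ["context", "objective", "style", "tone", "audience", "response"]

def costar_prompt(components: dict) -> str:
    """Assemble a COSTAR prompt, failing loudly if any component is missing."""
    missing = [k for k in COSTAR_ORDER if k not in components]
    if missing:
        raise ValueError(f"missing COSTAR components: {missing}")
    return "\n".join(
        f"<{k}>{components[k].strip()}</{k}>" for k in COSTAR_ORDER
    )
```

The hard failure on missing components is the point: it turns "someone forgot to specify the audience" from a silent quality drop into an error caught before the prompt ever runs.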
✅ Quick Check: Two prompts produce the same content but one uses COSTAR and the other is a single paragraph. In which scenario does COSTAR matter MORE? (Answer: When the prompt is reusable — used by a team, embedded in an application, or run hundreds of times. A one-off personal prompt can be informal. But a prompt that runs in production or is shared across a team needs to produce the same quality regardless of who runs it. COSTAR’s explicit components prevent drift: every team member uses the same context, style, and output spec.)
Combining Structures
The most powerful prompts combine XML tags with COSTAR principles and JSON output:
```xml
<system>
You are a customer feedback analyst. You process product reviews
and extract structured insights.
</system>
<context>We sell a productivity app. Reviews are from the App Store.</context>
<task>
Analyze the following review and extract structured data.
</task>
<output_format>
{
  "sentiment": "positive | negative | mixed",
  "topics": ["list of topics mentioned"],
  "feature_requests": ["any features the user wants"],
  "bugs_mentioned": ["any bugs reported"],
  "summary": "one-sentence summary"
}
</output_format>
<input>
{{REVIEW_TEXT}}
</input>
```
The {{REVIEW_TEXT}} placeholder makes this a template — you can swap in different reviews while the structure stays constant. This is how production prompts work: define the structure once, vary only the input.
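A minimal filler for the `{{VARIABLE}}` convention can be written in a few lines; in production you would likely reach for a templating library, but this sketch (with a hypothetical `fill_template` helper) shows the idea.

```python
def fill_template(template: str, **values: str) -> str:
    """Substitute {{NAME}} placeholders, erroring on any that don't exist."""
    result = template
    for name, value in values.items():
        placeholder = "{{" + name + "}}"
        if placeholder not in result:
            raise KeyError(f"placeholder {placeholder} not found in template")
        result = result.replace(placeholder, value)
    return result

template = "<input>\n{{REVIEW_TEXT}}\n</input>"
filled = fill_template(template, REVIEW_TEXT="Great app, but it crashes on export.")
```

Raising on an unknown placeholder catches typos like `REVEIW_TEXT` at call time, instead of silently sending a prompt with a literal `{{REVIEW_TEXT}}` to the model.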
Practice Exercise
- Take a prompt you use regularly and restructure it with XML tags
- Apply the COSTAR framework to a complex task (e.g., writing a proposal, analyzing data)
- Create a JSON output schema for extracting information from unstructured text
- Compare the output of your old prompt vs. the structured version — note consistency differences
Key Takeaways
- Structure beats length — organized prompts produce more consistent results than long, unstructured ones
- XML tags create clear boundaries between instructions, context, examples, and output format
- JSON schemas ensure output is parseable by code — essential for production applications
- COSTAR (Context, Objective, Style, Tone, Audience, Response) covers all dimensions of a complete prompt
- Templates with placeholders ({{VARIABLE}}) make structured prompts reusable across different inputs
- Claude was trained with XML tags; all major models respond well to structured prompts
Up Next
In the next lesson, you’ll learn reasoning techniques — chain-of-thought, tree-of-thought, and self-consistency — that make AI significantly better at complex, multi-step problems.