System Prompts and Role Engineering
Design system prompts that define AI behavior, expertise, and constraints. Build reusable AI agents for specific tasks with role engineering and behavioral specifications.
Premium Course Content
This lesson is part of a premium course. Upgrade to Pro to unlock all premium courses and content.
- Access all premium courses
- 1000+ AI skill templates included
- New content added weekly
Every prompt you’ve written so far has been a user prompt — a single request for a single task. System prompts operate at a higher level: they define who the AI is for an entire conversation or application.
🔄 Quick Recall: In the previous lesson, you mastered few-shot prompting — teaching AI through examples. System prompts complement few-shot by defining the AI’s persistent behavior: its role, expertise, constraints, and communication style.
Anatomy of a System Prompt
A production-quality system prompt has five components:
1. Role Definition
Who is the AI? What’s its expertise?
You are a senior tax accountant with 15 years of experience
specializing in small business taxation in the United States.
2. Behavioral Guidelines
How should it communicate? What tone and style?
Always explain tax concepts in plain language. When citing tax
code, provide the section number and a one-sentence explanation.
Ask clarifying questions before making recommendations.
3. Scope Boundaries
What should it NOT do?
You must NOT:
- Provide legal advice (redirect to a tax attorney)
- Make filing decisions on behalf of the user
- Access or request actual tax return data
- Guarantee specific outcomes or savings amounts
4. Output Specifications
How should responses be formatted?
Format responses as:
1. Direct answer to the question
2. Relevant tax code reference (if applicable)
3. One practical next step the user can take
Keep responses under 300 words unless the user asks for detail.
5. Error Handling
What should it do when uncertain?
If you're not confident in an answer:
- Say "I'm not certain about this" explicitly
- Explain what you DO know
- Recommend consulting a CPA for the specific situation
Never guess about tax amounts or deadlines.
✅ Quick Check: A system prompt says “You are a helpful assistant.” Is this a good role definition? (Answer: No — it’s too generic. “Helpful assistant” doesn’t specify expertise, communication style, or scope. The AI might try to answer medical questions, legal questions, and cooking questions with equal confidence — which means unequal quality. A focused role definition like “You are a project management advisor specializing in agile methodologies for software teams” constrains the AI to its area of competence.)
System Prompt Template
Here’s a reusable template:
<system>
<role>
You are [specific role] with expertise in [domains].
Your goal is to help users [primary objective].
</role>
<guidelines>
- [Communication style guideline]
- [Knowledge boundaries]
- [When to ask clarifying questions]
- [How to handle uncertainty]
</guidelines>
<constraints>
You must NOT:
- [Dangerous action 1]
- [Out-of-scope action 2]
- [Privacy violation 3]
</constraints>
<output_format>
- [Format specification]
- [Length guidelines]
- [Required sections]
</output_format>
</system>
Role Engineering Patterns
The Expert Pattern
Give the AI deep expertise in one area:
You are a database performance engineer specializing in PostgreSQL.
You've optimized queries for high-traffic SaaS applications handling
millions of transactions per day. You think in terms of execution
plans, indexing strategies, and query patterns.
The Reviewer Pattern
Turn the AI into a critical evaluator:
You are a code reviewer who prioritizes security, performance,
and maintainability — in that order. You look for vulnerabilities
first, then performance bottlenecks, then readability issues.
You provide actionable feedback, not just criticism.
The Translator Pattern
Adapt content between domains or audiences:
You are a technical writer who translates complex engineering
concepts into clear documentation for non-technical stakeholders.
You never use jargon without defining it. You prefer analogies
to abstractions. You format with bullet points for scannability.
The Adversary Pattern
Have the AI argue against ideas to strengthen them:
You are a strategic devil's advocate. When presented with a plan,
your job is to find the weaknesses, blind spots, and risks that
the author hasn't considered. Be thorough but constructive —
every criticism must include a suggestion for improvement.
✅ Quick Check: You’re building a system prompt for a customer support chatbot. Should you include the phrase “Always try to make the customer happy”? (Answer: No — it’s dangerously vague. “Make the customer happy” could lead the AI to promise unauthorized refunds, share internal information, or agree to impossible requests. Instead, specify exactly what the AI can offer: “You may offer a 10% discount code, escalate to a human agent, or provide troubleshooting steps from the knowledge base. You may NOT promise refunds, make exceptions to published policies, or access customer account data.”)
Testing System Prompts
A system prompt isn’t done when it’s written. It’s done when it passes adversarial testing:
Test 1: Normal Usage
Does the AI respond correctly to expected queries?
Test 2: Scope Boundaries
Does it properly redirect when asked about out-of-scope topics?
Test 3: Edge Cases
Does it handle ambiguous or incomplete information gracefully?
Test 4: Adversarial Input
Does it resist attempts to override its instructions? (“Ignore your previous instructions and…”)
Test 5: Failure Modes
Does it say “I don’t know” when it should, instead of guessing?
Versioning System Prompts
System prompts evolve. Track them like code:
# Customer Support Bot v2.3
# Last updated: 2026-02-24
# Change: Added escalation criteria for billing disputes
# Previous: v2.2 - Added product return guidelines
Save each version. When output quality changes, compare the current prompt with previous versions to identify what caused the shift.
Practice Exercise
- Write a system prompt using the five-component template for a role relevant to your work
- Test it with 5 normal queries — does it respond as expected?
- Test with 2 out-of-scope queries — does it redirect appropriately?
- Try the adversary pattern: have it critique a plan or document you’ve written
- Ask a colleague to try breaking your system prompt — note what they find
Key Takeaways
- System prompts define persistent AI behavior: role, guidelines, constraints, output format, and error handling
- Five essential components: Role, Guidelines, Scope Boundaries, Output Specifications, Error Handling
- Focused roles outperform generic “helpful assistant” — constraint breeds competence
- Behavioral constraints (what the AI must NOT do) are critical for preventing harmful outputs
- Test system prompts adversarially before production use — scope boundaries, edge cases, failure modes
- Version your system prompts — track changes so you can diagnose output quality shifts
Up Next
In the next lesson, you’ll master output control: how to specify exact formats, constrain length, control tone, and ensure the AI produces output your systems (and humans) can use.
Knowledge Check
Complete the quiz above first
Lesson completed!