Building Your First Agent
Build a working AI agent from scratch — define the goal, set up tools, implement the reasoning loop, and watch it execute a real task autonomously.
🔄 Quick Recall: In the previous lesson, you learned the core pieces of agent architecture: the reasoning loop, goal design, tools, memory, and common agent patterns. Now let’s turn that theory into a working agent.
Your First Agent: The Research Assistant
We’ll build a research agent step by step. This agent takes a research question, searches for information, evaluates sources, synthesizes findings, and produces a structured report.
Why this task? It demonstrates every core agent capability: goal decomposition, tool use (web search), reasoning about quality, and producing a deliverable.
Step 1: Craft the System Prompt
The system prompt is your agent’s operating manual. Here’s a template:
You are a research agent. Your job is to thoroughly research a topic and produce a well-sourced summary report.
CAPABILITIES:
- You can search the web for current information
- You can read and analyze documents
- You can compare information across multiple sources
PROCESS:
1. Break the research question into 3-5 specific sub-questions
2. Research each sub-question using web search
3. For each finding, note the source and assess reliability
4. Look for contradictions or disagreements between sources
5. Synthesize findings into a structured report
OUTPUT FORMAT:
- Executive summary (3-5 sentences)
- Key findings (organized by sub-question)
- Sources used (with reliability assessment)
- Areas of uncertainty or conflicting information
- Recommendations for further research
CONSTRAINTS:
- Use at least 3 different sources per sub-question
- Prefer recent sources (last 2 years) unless historical context matters
- Flag any information you're uncertain about
- If you can't find reliable information on a sub-question, say so honestly
- Maximum 10 search queries to stay efficient
This prompt gives the agent a clear role, defined capabilities, a step-by-step process, an output format, and constraints. Every component matters.
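To run this prompt as an agent, pass it as the system message in your model call. Here is a minimal sketch assuming the Anthropic Python SDK; the model name is a placeholder, and the prompt string is abbreviated to the template above.

```python
# Minimal sketch: wiring the research-agent prompt into a model call.
# Assumes the Anthropic Python SDK (pip install anthropic) and an API key in
# the ANTHROPIC_API_KEY environment variable; adapt for your platform of choice.
import anthropic

SYSTEM_PROMPT = """You are a research agent. Your job is to thoroughly research a topic
and produce a well-sourced summary report.
(paste the full CAPABILITIES / PROCESS / OUTPUT FORMAT / CONSTRAINTS sections here)
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute a current model name
    max_tokens=2000,
    system=SYSTEM_PROMPT,              # the agent's operating manual goes here
    messages=[{"role": "user", "content": "Research question: ..."}],
)
print(response.content[0].text)
```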
✅ Quick Check: Why does the system prompt include constraints like “maximum 10 search queries”?
Without constraints, agents can loop indefinitely, searching for slightly better information. Constraints force efficiency: the agent must plan its searches strategically rather than exhaustively. They also control cost, since each tool call may incur API costs.
Step 2: Test with a Simple Task
Don’t start with “research the future of artificial intelligence.” Start narrow:
Research question: What are the current pricing plans for the top 3 project management tools (Asana, Monday.com, and ClickUp) as of 2026?
Follow your research process. For each tool:
1. Find the current pricing tiers
2. Note what's included in each tier
3. Identify the best value for a team of 10 people
4. Note any recent pricing changes
Deliver the results in a comparison table plus a 3-sentence recommendation.
This task is scoped tightly. You can verify the results independently. If the agent gets this right, the architecture works.
Step 3: Observe the Reasoning Loop in Action
Watch how the agent works through the task:
Cycle 1 — Plan: “I need pricing for three tools. I’ll research each one separately, then compare.”
Cycle 2 — Research Tool A: Agent searches for “Asana pricing plans 2026.” Reads results. Extracts tier names, prices, and features.
Cycle 3 — Research Tool B: Agent searches for “Monday.com pricing 2026.” Same extraction process.
Cycle 4 — Research Tool C: Agent searches for “ClickUp pricing plans 2026.” Extracts data.
Cycle 5 — Compare and synthesize: Agent builds comparison table, calculates best value for 10-person team, writes recommendation.
Cycle 6 — Evaluate: “Do I have pricing for all three? Yes. Are the sources current? Let me check dates. Is the comparison complete? Yes. Delivering results.”
This is the loop in action. The agent managed six cycles autonomously. You defined the goal; the agent handled the rest.
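If you want to see the shape of this loop in code, here is a pseudocode-level sketch. The call_model and execute_tool functions are hypothetical stand-ins for your platform's chat call and tool runner, not real library functions; only the loop structure is the point.

```python
from dataclasses import dataclass

# Hypothetical stand-ins: in a real agent these would call your model API and
# your search tool. They exist here only to make the loop's shape concrete.
@dataclass
class Step:
    wants_tool: bool
    tool_input: str = ""
    text: str = ""

def call_model(system: str, history: list) -> Step:
    ...  # send `system` + `history` to the model, parse its reply into a Step

def execute_tool(tool_input: str) -> str:
    ...  # run the web search (or other tool) and return its result as text

def run_agent(goal: str, system_prompt: str, max_cycles: int = 10) -> str:
    """Plan, act, evaluate: the same six cycles walked through above."""
    history = [{"role": "user", "content": goal}]
    for _ in range(max_cycles):
        step = call_model(system_prompt, history)
        if step.wants_tool:                  # cycles 2-4: research via tool calls
            result = execute_tool(step.tool_input)
            history.append({"role": "user", "content": f"Tool result: {result}"})
        else:                                # cycles 5-6: synthesize, evaluate, deliver
            return step.text
    return "Stopped: cycle budget exhausted before the goal was met."
```

Note the two exits: either the agent decides the goal is met, or the cycle budget runs out. That budget is the code-level counterpart of the “maximum 10 search queries” constraint in the prompt.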
Step 4: Refine Based on Results
Your first agent won’t be perfect. Common issues and fixes:
Problem: Agent searches are too broad. Fix: Add to system prompt: “Use specific search queries including the product name and the specific data point you need. Example: ‘Asana pricing tiers 2026’ not ‘project management tools pricing.’”
Problem: Agent includes outdated information. Fix: Add: “Always check the date of your sources. If a source is more than 6 months old for pricing data, search for a more recent one.”
Problem: Agent stops too early. Fix: Add quality criteria: “Before delivering, verify you have: pricing for all requested tiers, feature lists for each, and sources for every data point. If anything is missing, search again.”
Problem: Agent loops without making progress. Fix: Add a loop-breaker: “If you’ve searched for the same information three times without finding it, mark it as ‘unavailable’ and move on.”
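The last fix can also be enforced outside the prompt. Here is a small sketch of a loop-breaker wrapped around the search tool; run_search is a hypothetical callable standing in for whatever search function your agent actually uses.

```python
from collections import Counter

def make_loop_breaking_search(run_search, max_repeats: int = 3):
    """Wrap a search function so repeated identical queries get cut off.

    `run_search` is a hypothetical callable that takes a query string and
    returns results as text, whatever your platform's search tool exposes.
    """
    attempts = Counter()

    def search(query: str) -> str:
        key = query.strip().lower()
        attempts[key] += 1
        if attempts[key] > max_repeats:
            # Returned to the model so it marks the data point and moves on.
            return (f"UNAVAILABLE: '{query}' has been searched {max_repeats} times "
                    "without success. Mark this data point as unavailable and move on.")
        return run_search(query)

    return search
```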
Building Agents on Current Platforms
You don’t need custom code to build agents. Current platforms support agentic behavior:
Claude (Anthropic): Use Claude’s tool use feature with system prompts that define the agent’s role and process. Claude supports computer use, web search, and multi-step reasoning.
ChatGPT (OpenAI): Create Custom GPTs with specific instructions, knowledge files, and connected actions (APIs). The GPT acts as an agent within its configured capabilities.
Open-source frameworks: Tools like LangChain, CrewAI, and AutoGen let you build agents with code, connecting any AI model to any tool set.
For non-developers, Custom GPTs and Claude Projects are the fastest path. For developers, frameworks offer more control and customization.
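For example, with Claude’s tool use you declare the agent’s tools as JSON schemas, and the model signals when it wants to call one. A brief sketch, where web_search is a hypothetical tool you would implement yourself and the model name is a placeholder:

```python
import anthropic

client = anthropic.Anthropic()
SYSTEM_PROMPT = "..."  # the research-agent prompt from Step 1

tools = [{
    "name": "web_search",                    # hypothetical tool you implement yourself
    "description": "Search the web and return the top results as text.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string", "description": "Search query"}},
        "required": ["query"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",        # placeholder model name
    max_tokens=2000,
    system=SYSTEM_PROMPT,
    tools=tools,
    messages=[{"role": "user", "content": "Research question: ..."}],
)

# When stop_reason is "tool_use", the model is asking you to run a tool and send
# back the result in a follow-up message; otherwise it has answered directly.
if response.stop_reason == "tool_use":
    tool_call = next(block for block in response.content if block.type == "tool_use")
    print("Agent wants to search for:", tool_call.input["query"])
```

Frameworks like LangChain, CrewAI, and AutoGen wrap this request-and-tool-result cycle for you, so the trade-off is convenience versus control.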
Agent Prompt Engineering
Agent system prompts need more structure than regular prompts. Include these sections:
- Identity — Who the agent is and its expertise
- Capabilities — What tools and abilities it has
- Process — Step-by-step workflow to follow
- Output format — Exactly what the deliverable looks like
- Constraints — Limits on behavior, resources, and scope
- Error handling — What to do when things go wrong
- Quality criteria — How to evaluate its own work before delivering
Help me write a system prompt for an agent that [describe what you want the agent to do].
Include all seven sections: identity, capabilities, process, output format, constraints, error handling, and quality criteria.
Make the process specific enough that the agent can follow it without ambiguity, but flexible enough to adapt to unexpected situations.
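If you prefer to assemble the prompt programmatically, a small sketch like the following keeps all seven sections explicit and fails loudly when one is missing (the section contents shown are illustrative placeholders):

```python
# Sketch: assemble the seven sections into one system prompt string so that
# none of them is silently dropped. Section names mirror the list above.
AGENT_SECTIONS = [
    "identity", "capabilities", "process", "output_format",
    "constraints", "error_handling", "quality_criteria",
]

def build_system_prompt(sections: dict) -> str:
    missing = [name for name in AGENT_SECTIONS if not sections.get(name)]
    if missing:
        raise ValueError(f"System prompt is missing sections: {missing}")
    return "\n\n".join(
        f"{name.upper().replace('_', ' ')}:\n{sections[name].strip()}"
        for name in AGENT_SECTIONS
    )

prompt = build_system_prompt({
    "identity": "You are a research agent specializing in software pricing.",
    "capabilities": "- Web search\n- Document analysis",
    "process": "1. Decompose the question\n2. Research each sub-question\n3. Synthesize",
    "output_format": "Executive summary, key findings, sources, uncertainties.",
    "constraints": "Maximum 10 search queries; at least 3 sources per sub-question.",
    "error_handling": "If a search fails twice, note the gap and move on.",
    "quality_criteria": "Every claim has a source; all sub-questions are addressed.",
})
```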
Exercise: Build and Test Your Research Agent
- Write a system prompt using the template and seven sections above
- Choose a narrow research question you can verify independently
- Run the agent (using Claude, ChatGPT, or your platform of choice)
- Evaluate the result: Did it follow the process? Are the sources reliable? Is the output complete?
- Identify one weakness and add a rule to the system prompt to fix it
- Run the improved agent on a slightly harder question
Iterate at least twice. Each cycle makes your agent more reliable.
Key Takeaways
- The system prompt is the single most important piece of agent design — it defines role, process, constraints, and quality criteria
- Start with simple, verifiable tasks to test your agent architecture before adding complexity
- Observe the reasoning loop in action: plan, research, evaluate, synthesize, deliver
- Common agent failures (broad searches, outdated info, early stopping, infinite loops) have specific system prompt fixes
- Current platforms (Claude, ChatGPT, open-source frameworks) all support agentic behavior — no custom code required
- Agent prompt engineering has seven essential sections; missing any one reduces reliability
Up Next: In the next lesson, we’ll go deep on tool use — giving agents the ability to search, calculate, code, and interact with external services.