Agent Anatomy: The Four Components
Explore the four components of every AI agent: the LLM brain, tools, memory, and planning. Understand how they interact through the agent loop.
In the previous lesson, you learned that agents have four components: an LLM brain, tools, memory, and planning. Now let’s examine how each component works and how they combine into the agent loop.
Component 1: The LLM Brain
The LLM is the reasoning engine. It doesn’t just generate text — it makes decisions:
- What tool to use — “I need to search the web, not the database”
- What arguments to pass — “Search for ‘Q3 revenue Acme Corp 2025’”
- When to stop — “I have enough information to answer the question”
- How to recover — “That tool failed, let me try a different approach”
The brain processes everything through a system prompt that defines its role, available tools, and behavioral constraints. Think of it as the agent’s job description plus instruction manual.
Agent System Prompt Structure:
├── Role: "You are a research assistant..."
├── Available Tools: [web_search, file_read, calculator, ...]
├── Behavioral Rules: "Always cite sources. Never fabricate data."
├── Output Format: "Return findings as JSON with confidence scores."
└── Error Handling: "If a tool fails, try alternatives before asking the user."
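The structure above can be sketched in code. This is a minimal illustration of assembling those five sections into one prompt string; the function name, field names, and example tools are hypothetical, not from any specific framework.

```python
# Sketch: building an agent system prompt from its five sections.
# All names here are illustrative assumptions, not a real framework's API.

def build_system_prompt(role, tools, rules, output_format, error_handling):
    """Combine role, tool list, rules, format, and error policy into one prompt."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        f"{role}\n\n"
        f"Available tools:\n{tool_lines}\n\n"
        f"Behavioral rules: {rules}\n"
        f"Output format: {output_format}\n"
        f"Error handling: {error_handling}"
    )

prompt = build_system_prompt(
    role="You are a research assistant.",
    tools={
        "web_search": "Search the web for current information.",
        "calculator": "Evaluate arithmetic expressions.",
    },
    rules="Always cite sources. Never fabricate data.",
    output_format="Return findings as JSON with confidence scores.",
    error_handling="If a tool fails, try alternatives before asking the user.",
)
print(prompt)
```

Keeping the prompt assembled from named parts (rather than one hand-edited string) makes it easy to swap tools in and out per deployment.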
✅ Quick Check: An agent’s system prompt says “You have access to: web_search, calculator, email_send.” The user asks: “What’s the capital of France?” Should the agent call web_search? (Answer: Not necessarily. The LLM already knows the capital of France. A well-designed agent only uses tools when the LLM’s own knowledge isn’t sufficient — like for real-time data, calculations beyond its ability, or actions like sending emails. Using a tool for known facts wastes time and money.)
Component 2: Tools
Tools are the mechanism that lets agents act in the world. Without tools, an agent is just a chatbot.
Tool Interface
Every tool has three parts:
{
  "name": "web_search",
  "description": "Search the web for current information. Use when you need data newer than your training cutoff or real-time information.",
  "parameters": {
    "query": {"type": "string", "description": "The search query"},
    "max_results": {"type": "integer", "default": 5}
  }
}
The LLM reads the description to decide when to use the tool. Clear descriptions = better tool selection. Vague descriptions = agents calling the wrong tool.
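One way to keep the description the LLM reads and the code that runs in sync is to store them together in a registry. This is a sketch under that assumption; `web_search` is a stub, not a real search client.

```python
# Sketch: a tool registry pairing the schema above with a Python handler.
# The handler below is a stub standing in for a real search API call.

def web_search(query: str, max_results: int = 5) -> list[str]:
    """Stub handler; a real implementation would call a search API."""
    return [f"result {i} for {query!r}" for i in range(1, max_results + 1)]

TOOLS = {
    "web_search": {
        "description": "Search the web for current information. Use when you "
                       "need data newer than your training cutoff.",
        "parameters": {
            "query": {"type": "string", "description": "The search query"},
            "max_results": {"type": "integer", "default": 5},
        },
        "handler": web_search,
    },
}

def call_tool(name: str, **kwargs):
    """Dispatch a tool call by name, as the agent loop would."""
    return TOOLS[name]["handler"](**kwargs)

print(call_tool("web_search", query="Q3 revenue Acme Corp", max_results=2))
```

The registry's `description` fields are what get serialized into the system prompt, so editing a description changes the agent's tool-selection behavior without touching the handler.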
Common Tool Categories
| Category | Examples | When Used |
|---|---|---|
| Information | Web search, database query, file read | Agent needs data it doesn’t have |
| Computation | Calculator, code execution, data analysis | Tasks requiring precise math or logic |
| Communication | Email send, Slack message, API call | Agent needs to interact with external systems |
| Creation | File write, image generate, code write | Agent produces artifacts |
Component 3: Memory
Memory gives agents context that persists beyond a single message.
Short-Term Memory (Conversation Context)
Everything in the current conversation: user messages, agent responses, tool results. This lives in the LLM’s context window and disappears when the conversation ends.
Limitation: Context windows are finite. A 128K token window sounds large, but an agent processing documents and tool results can fill it quickly.
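A common mitigation is to trim the oldest turns once a budget is exceeded. Here is a minimal sketch of that idea; the token count is a crude word count (a real agent would use the model's tokenizer), and the budget numbers are arbitrary.

```python
# Sketch: a short-term memory buffer that drops the oldest messages once a
# token budget is exceeded. Word count stands in for real tokenization.

def trim_context(messages, max_tokens=100):
    """Keep the most recent messages that fit within the token budget."""
    kept, total = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = len(msg["content"].split())  # crude token estimate
        if total + cost > max_tokens:
            break                           # older messages no longer fit
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order

history = [
    {"role": "user", "content": "word " * 60},
    {"role": "assistant", "content": "word " * 30},
    {"role": "user", "content": "latest question"},
]
print(len(trim_context(history, max_tokens=40)))  # → 2 (oldest turn dropped)
```

Dropping whole turns is the bluntest strategy; production agents often summarize evicted turns into long-term memory instead of discarding them.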
Long-Term Memory (Persistent Storage)
Information that survives across conversations:
- User preferences: “This user prefers concise answers with bullet points”
- Past interactions: “Last week, we discussed their Q3 budget — $50K for marketing”
- Learned knowledge: “The company’s API uses v3 endpoints, not v2”
Long-term memory is typically stored in a vector database or structured storage, retrieved when relevant to the current task.
✅ Quick Check: An agent helping with project management remembers that “the deadline was moved to March 15” from a conversation two weeks ago. But the user just said “the new deadline is April 1.” Which should the agent use? (Answer: April 1 — the most recent information should override older memory. This is a critical memory management challenge: agents need recency-weighted retrieval so that newer information takes priority over outdated memories. Without this, agents act on stale data.)
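The recency weighting described in the quick check can be sketched directly. This toy version scores memories by keyword overlap with an exponential decay on age; real systems use embedding similarity for relevance, and the half-life value here is an arbitrary assumption.

```python
# Sketch: recency-weighted memory retrieval. Keyword overlap stands in for
# embedding similarity; the 14-day half-life is an illustrative choice.
import math
import time

def score(memory, query_words, now, half_life_days=14):
    """Combine keyword relevance with an exponential recency decay."""
    relevance = len(query_words & set(memory["text"].lower().split()))
    age_days = (now - memory["timestamp"]) / 86400
    recency = math.exp(-age_days / half_life_days)
    return relevance * recency

now = time.time()
memories = [
    {"text": "the deadline was moved to March 15", "timestamp": now - 14 * 86400},
    {"text": "the new deadline is April 1", "timestamp": now - 60},
]
query = set("what is the project deadline".split())
best = max(memories, key=lambda m: score(m, query, now))
print(best["text"])  # → the new deadline is April 1
```

The two-week-old memory still scores on relevance, but its decayed recency weight lets the fresher deadline win, which is exactly the behavior the quick check calls for.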
Component 4: Planning
Planning is how agents handle tasks too complex for a single action.
Simple Planning (Sequential)
Task: "Summarize the top 3 news stories about AI today"
Plan:
1. Search web for "AI news today"
2. Read the top 3 results
3. Summarize each story in 2 sentences
4. Combine into a single summary
Dynamic Planning (Adaptive)
Task: "Find and fix the bug in our login system"
Initial Plan:
1. Read the login code
2. Identify the error pattern
→ Observation: Error is a database timeout
Revised Plan:
3. Check database connection settings
4. Check query performance
→ Observation: Query takes 8 seconds on large user tables
Revised Plan:
5. Add an index on the user lookup column
6. Test with the same input
7. Verify fix resolves the original error
The plan changes based on what the agent discovers. This adaptive planning is what makes agents powerful for open-ended tasks.
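The debugging walkthrough above can be caricatured in code. In a real agent the LLM itself proposes revised steps; here a hypothetical lookup table of observation patterns stands in for that reasoning, purely to show the plan growing in response to what is observed.

```python
# Sketch: adaptive planning as "append steps when an observation matches a
# pattern". The table below hard-codes the debugging example from the text;
# a real agent would ask the LLM to revise the plan.

REVISIONS = {
    "database timeout": [
        "check database connection settings",
        "check query performance",
    ],
    "slow query": [
        "add an index on the user lookup column",
        "test with the same input",
        "verify fix resolves the original error",
    ],
}

def revise_plan(plan, observation):
    """Extend the plan when an observation matches a known pattern."""
    return plan + REVISIONS.get(observation, [])

plan = ["read the login code", "identify the error pattern"]
plan = revise_plan(plan, "database timeout")  # first observation
plan = revise_plan(plan, "slow query")        # second observation
print(len(plan))  # → 7 steps, matching the walkthrough above
```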
The Agent Loop
All four components work together in a cycle:
┌──────────────────────────────────┐
│ 1. PERCEIVE │
│ Read user request + context │
│ Retrieve relevant memories │
└──────────────┬───────────────────┘
▼
┌──────────────────────────────────┐
│ 2. PLAN │
│ Break task into steps │
│ Select next action │
└──────────────┬───────────────────┘
▼
┌──────────────────────────────────┐
│ 3. ACT │
│ Call tool or generate response │
└──────────────┬───────────────────┘
▼
┌──────────────────────────────────┐
│ 4. OBSERVE │
│ Process tool result │
│ Update memory │
│ Check if task is complete │
└──────────────┬───────────────────┘
│
Task done? ──No──→ Back to PLAN
│
Yes
▼
Return result
This loop runs until the agent determines the task is complete — or hits a maximum iteration limit (a safety guardrail).
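The loop diagram above reduces to a few lines of code. This is a minimal sketch with the iteration guardrail included; `llm_decide` is a stub standing in for a real model call, and the action names are illustrative.

```python
# Sketch: the perceive → plan → act → observe loop with a max-iteration
# guardrail. `llm_decide` is a stub; a real agent would call an LLM here.

def llm_decide(task, observations):
    """Stub decision: search twice, then declare the task complete."""
    if len(observations) < 2:
        return {"action": "search", "arg": task}
    return {"action": "finish", "arg": f"answer based on {len(observations)} results"}

def run_agent(task, max_iterations=10):
    observations = []                               # short-term memory
    for _ in range(max_iterations):                 # safety guardrail
        decision = llm_decide(task, observations)   # PLAN: select next action
        if decision["action"] == "finish":
            return decision["arg"]                  # task complete
        result = f"searched: {decision['arg']}"     # ACT: call the (stub) tool
        observations.append(result)                 # OBSERVE: update memory
    return "stopped: hit iteration limit"

print(run_agent("AI news today"))  # → answer based on 2 results
```

Note that the guardrail changes the loop's contract: the agent can return either a real answer or an explicit "gave up" signal, and callers must handle both.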
Practice Exercise
- Pick a task you do weekly (scheduling, reporting, email triage)
- Map it to the four components: What knowledge does the brain need? What tools would act on the task? What memory persists across sessions? What’s the planning sequence?
- Identify where a single LLM call would fail and why the loop is necessary
Key Takeaways
- The LLM brain reasons and decides — it’s the agent’s decision engine, not just a text generator
- Tools transform agents from conversational to actionable — they’re what separate agents from chatbots
- Memory has two layers: short-term (conversation context) and long-term (persistent across sessions)
- Planning handles complexity through decomposition — breaking big tasks into manageable steps
- The agent loop (perceive → plan → act → observe → repeat) ties all four components together
Up Next
In the next lesson, you’ll learn the three most important agent design patterns — ReAct, Reflection, and Planning — and when to use each one.