Lesson 4 · 15 min

AI Agents: Tools, Prompts & Decision-Making

Build a Multi-Tool Research Agent using n8n's AI Agent node — with web search, Wikipedia, and code execution tools.

🔄 In Lesson 3, you built an email classifier with the Basic LLM Chain — a single prompt goes in, a single response comes out. But what about tasks where the AI needs to decide what to do, not just answer a question? That's where the AI Agent node comes in, and it changes what your workflows can accomplish.

What Makes an Agent an Agent

The Basic LLM Chain is like asking someone a question and getting an answer. An AI Agent is like hiring a research assistant — you describe the task, and they figure out the steps.

Here’s what the AI Agent node does that the Basic LLM Chain doesn’t:

  1. Receives a task (from the system prompt + user input)
  2. Reasons about which tools to use (the “Re” in ReAct)
  3. Acts by calling a tool (the “Act” in ReAct)
  4. Observes the tool’s output
  5. Loops — decides whether to call another tool or return a final answer

This loop is called the ReAct pattern (Reason + Act). The agent might search the web, read the results, decide it needs more context, search Wikipedia, combine the findings, and then write a summary — all from a single input.
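The loop above can be sketched in a few lines of JavaScript. This is an illustrative stub, not n8n's actual implementation: `reason` stands in for the attached chat model, and `tools.search` stands in for a tool like SerpAPI.

```javascript
// Illustrative ReAct loop. In n8n, the attached chat model does the
// reasoning; here a stub decides which tool to call.
const tools = {
  search: (q) => `Top result for "${q}": n8n is a workflow automation tool.`,
};

// Stub "model": call the search tool once, then write a final answer.
function reason(task, observations) {
  if (observations.length === 0) return { tool: "search", input: task }; // Reason
  return { finish: `Summary based on: ${observations.join(" | ")}` };    // final answer
}

function runAgent(task) {
  const observations = [];
  for (let step = 0; step < 5; step++) {                     // cap the loop
    const decision = reason(task, observations);              // Reason
    if (decision.finish) return decision.finish;              // done
    observations.push(tools[decision.tool](decision.input));  // Act + Observe
  }
  return "Step limit reached without a final answer.";
}

console.log(runAgent("What is n8n?"));
```

The step cap matters in practice: real agents can loop indefinitely if a tool keeps returning ambiguous results.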

Since n8n v1.82.0, all agent types (OpenAI Functions, Plan-and-Execute, Conversational, etc.) are unified under the Tools Agent. You don’t need to choose an agent type — just connect your tools, configure the system prompt, and the Tools Agent handles the routing.

The Three Pillars of an n8n Agent

Every AI Agent needs three things configured:

1. An LLM Provider (the brain)
Attach an OpenAI, Anthropic, Google, or Groq chat model sub-node. This is the model that does the reasoning and decision-making. For agents, use a capable model — gpt-4o or claude-3.5-sonnet are good defaults. Smaller models often fail at multi-step tool use.

2. Tools (the hands)
Tools are sub-nodes that give the agent capabilities. Without tools, an agent is just an expensive chat node. n8n includes:

| Tool | What It Does |
| --- | --- |
| SerpAPI | Searches the web (Google results) |
| Wikipedia | Queries Wikipedia for factual info |
| Code Tool | Writes and runs JavaScript or Python |
| HTTP Request Tool | Calls any API |
| Calculator | Performs math |
| Workflow Tool | Calls another n8n workflow as a tool |
| MCP Client Tool | Calls any MCP server (Lesson 6) |

3. A System Prompt (the instructions)
The system prompt tells the agent who it is, what tools it has, and how to use them. This is your most powerful control lever — a good system prompt makes the difference between a confused agent and a reliable one.

Quick Check: You connect 3 tools to an AI Agent but don’t write a system prompt. What happens? (Answer: The agent will still work, but it’ll use tools inconsistently. Without instructions, the agent makes its own decisions about when and how to use each tool — which often means it picks the first tool that seems relevant and ignores the others. A system prompt gives you control over tool selection.)

Build: Multi-Tool Research Agent

You’ll build an agent that takes a research question, searches the web for recent data, checks Wikipedia for context, and writes a synthesized summary.

Step 1: Create the Base

  1. New workflow → add a Chat Trigger (this creates a chat interface for testing)
  2. Add an AI Agent node
  3. Connect the Chat Trigger to the AI Agent

Step 2: Attach the LLM

Click the AI Agent node → under Model, add an OpenAI Chat Model sub-node:

  • Credential: your OpenAI key
  • Model: gpt-4o (agents need strong reasoning — gpt-4o-mini may struggle with multi-tool tasks)

Step 3: Connect Tools

Still inside the AI Agent node, add three tools:

Tool 1: SerpAPI (web search)

  • Add credential: sign up at serpapi.com for a free tier (100 searches/month)
  • The agent can now search Google for real-time information

Tool 2: Wikipedia

  • No credential needed — it queries Wikipedia’s public API
  • Good for factual definitions, historical context, and background info

Tool 3: Code Tool

  • Language: JavaScript
  • No credential needed — it runs code in n8n’s sandbox
  • The agent can write and execute code for calculations, data processing, or formatting
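For a sense of what this looks like in practice, here is the kind of snippet the agent might generate and run in the Code Tool to compare two population figures. The numbers are illustrative placeholders, not live data.

```javascript
// Example of agent-generated Code Tool output: compare two metro
// populations and format the ratio. Placeholder figures only.
const tokyoMetro = 37.0e6;
const nycMetro = 19.5e6;

const ratio = (tokyoMetro / nycMetro).toFixed(2);
const summary = `Tokyo metro ~${tokyoMetro / 1e6}M vs NYC metro ~${nycMetro / 1e6}M (ratio ${ratio}x)`;
console.log(summary);
```

The agent then feeds this formatted string back into its reasoning loop as an observation.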

Step 4: Write the System Prompt

This is the critical part. In the AI Agent’s configuration, find the System Prompt field and write:

You are a research assistant. When given a question:

1. ALWAYS search the web first using SerpAPI to find recent, current data
2. Use Wikipedia for factual background, definitions, and historical context
3. Use the Code tool when you need to calculate numbers, process data, or format results
4. Synthesize findings from multiple sources into a clear, cited summary

Rules:
- Cite your sources (web URL or "Wikipedia: Article Name")
- If web results and Wikipedia conflict, prefer the more recent source
- If you can't find reliable information, say so — never make up facts
- Keep your final summary under 300 words

Notice how specific this is. You’re telling the agent when to use each tool, how to handle conflicts, and what format to use for output. Vague prompts produce vague agents.

Step 5: Test It

Click “Test workflow” and use the chat interface. Try these questions:

  • “What is the current market cap of NVIDIA and how has it changed since 2023?”
  • “Compare the population of Tokyo and New York, including metro areas”
  • “What is retrieval-augmented generation (RAG) and when was the concept first published?”

Watch the agent’s reasoning in the output panel. You’ll see it decide which tool to call, process the result, and decide whether to make another call.

Quick Check: Your agent searches the web for “NVIDIA market cap” but the results are outdated. How can you improve this? (Answer: Add a date constraint to your system prompt: “When searching for financial data, include the current year in your search query.” You can also add “Always include 2026 in search queries for time-sensitive data” to the system prompt. The agent will then search “NVIDIA market cap 2026” instead of the bare query.)
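That rule can also be enforced outside the prompt. Here is a sketch of a small pre-processing step (a hypothetical helper, not an n8n built-in) that appends the current year to queries that don't already contain one:

```javascript
// Hypothetical helper: append the current year to a time-sensitive
// query unless it already contains a 4-digit year.
function timestampQuery(query) {
  const hasYear = /\b(19|20)\d{2}\b/.test(query);
  return hasYear ? query : `${query} ${new Date().getFullYear()}`;
}

console.log(timestampQuery("NVIDIA market cap"));      // year appended
console.log(timestampQuery("NVIDIA market cap 2023")); // left unchanged
```

Doing this in code rather than in the prompt is more deterministic: the model can ignore a prompt instruction, but it can't ignore a rewritten input.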

System Prompt Engineering for Agents

Writing system prompts for agents is a different skill than writing prompts for basic LLM chains. With chains, you control the exact input. With agents, you control the strategy — the agent decides the specifics.

Three patterns that work:

Pattern 1: Tool Selection Rules
Tell the agent explicitly when to use each tool:

Use SerpAPI for: current events, prices, statistics, recent news
Use Wikipedia for: definitions, history, scientific concepts, biographical data
Use Code for: calculations, data formatting, converting units

Pattern 2: Step-by-Step Strategy
Give the agent an explicit workflow:

For every question:
1. Search the web for current data
2. Check Wikipedia for context
3. Cross-reference both sources
4. Write a summary with citations

Pattern 3: Output Format Specification
Define exactly what the response should look like:

Format your response as:
## Summary
[2-3 paragraph overview]

## Key Data Points
- [bullet points with specific numbers]

## Sources
- [list of URLs and Wikipedia articles used]

Agents that have clear instructions about what to do, when to use each tool, and how to format output are dramatically more reliable than agents with generic “be helpful” prompts.
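Format rules like these are also easy to verify downstream. Here is a sketch of a check you could run in a Code node after the agent; the rules mirror the system prompt from Step 4, and the function name is an assumption:

```javascript
// Sketch: verify the agent's reply follows the format rules
// (under 300 words, at least one cited source).
function checkReply(reply) {
  const wordCount = reply.trim().split(/\s+/).length;
  const hasSource = /https?:\/\/|Wikipedia:/.test(reply);
  return { wordCount, underLimit: wordCount <= 300, hasSource };
}

const sample = "## Summary\nNVIDIA's market cap grew.\n\n## Sources\n- Wikipedia: Nvidia";
console.log(checkReply(sample));
```

If a check fails, you can route the reply back to the agent with an IF node and ask it to fix the formatting.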

The Four Agent Architecture Patterns

As your workflows get more complex, you’ll encounter four patterns:

| Pattern | How It Works | When to Use |
| --- | --- | --- |
| Chained Requests | Sequential LLM calls — output of one becomes input of the next | Multi-step processing (classify → extract → summarize) |
| Single Agent | One agent + tools + reasoning loop | Most tasks (research, Q&A, data processing) |
| Gatekeeper + Specialists | A coordinator agent delegates to specialist agents | Complex tasks with distinct subtasks |
| Multi-Agent Teams | Multiple agents collaborate in a mesh | Advanced orchestration (Lesson 8) |

For this course, you’ll work with single agents (Lessons 4-6) and touch on the gatekeeper pattern in the capstone (Lesson 8). Start simple — most real workflows only need a single well-configured agent.

Key Takeaways

  • The AI Agent uses a ReAct loop — reason about the task, call a tool, observe the result, decide what’s next
  • Every agent needs three things: an LLM provider (the brain), tools (the hands), and a system prompt (the instructions)
  • The system prompt is your primary control lever — be specific about when to use each tool and how to format output
  • Since v1.82.0, all agent types are unified as the Tools Agent — no need to choose between agent frameworks
  • Use capable models for agents (gpt-4o, claude-3.5-sonnet) — smaller models often fail at multi-step reasoning

Up Next

Your research agent is smart, but it forgets everything between conversations. Ask it a follow-up question and it has no idea what you were talking about. In Lesson 5, you’ll add memory — the ability for agents to remember previous conversations across sessions. You’ll build a chatbot that actually knows who you are.

Knowledge Check

1. What's the key difference between a Basic LLM Chain and an AI Agent in n8n?

2. Your research agent consistently ignores the Wikipedia tool and only uses web search. How would you fix this?

3. An AI Agent with 6 tools connected makes 12 API calls to complete a single task. What's the cost implication in n8n?
