Capstone: Build a Complete AI Assistant
Put it all together — build a multi-tool AI assistant with RAG, memory, error handling, and MCP connectivity in a single production-ready workflow.
🔄 Seven lessons of nodes, agents, memory, RAG, and production patterns. Now it all comes together. In this capstone, you’ll build a single AI assistant that combines every technique from the course — and you’ll connect it to Claude Desktop via MCP so other AI tools can use your n8n workflows as tools.
The Capstone Project
You’ll build an AI Operations Assistant — a chatbot that can:
- Answer questions from your company knowledge base (RAG from Lesson 6)
- Search the web for current information (Agent tools from Lesson 4)
- Remember conversation context across sessions (Memory from Lesson 5)
- Run calculations and format data (Code tool from Lesson 4)
- Handle errors gracefully without crashing (Production patterns from Lesson 7)
- Be called from Claude Desktop or other AI tools via MCP
✅ Quick Check: Your capstone assistant needs to answer both internal policy questions and real-time market data questions. Which two tools make this possible? (Answer: The Vector Store Tool for internal policies — it searches your embedded company documents. And SerpAPI for real-time market data — it searches the web. The system prompt tells the agent which tool to use for which type of question.)
This is the kind of workflow that replaces a junior operations role — not because it replaces the person, but because it handles the repetitive lookup/answer/route tasks that eat up their day.
Architecture Overview
Here’s the complete workflow structure:
Chat Trigger
↓
AI Agent (GPT-4o)
├── Tool: Supabase Vector Store (company docs)
├── Tool: SerpAPI (web search)
├── Tool: Code Tool (calculations)
├── Tool: HTTP Request (internal APIs)
├── Memory: PostgreSQL Chat Memory
└── Error Output → Slack Alert + Fallback Response
One Chat Trigger. One AI Agent at the center. Four tools for different capabilities. Persistent memory. Error handling on every external node.
Building the Capstone
Step 1: Set Up the Foundation
- Create a new workflow named “AI Operations Assistant”
- Add a Chat Trigger node — this creates the chat interface
- Add an AI Agent node and connect it to the Chat Trigger
- Attach OpenAI Chat Model → `gpt-4o` (you need strong reasoning for multi-tool decisions)
Step 2: Connect Your Tools
Attach four tools to the AI Agent:
**Tool 1: Vector Store (RAG).** Add a Vector Store Tool sub-node pointing to your Supabase vector store from Lesson 6. This gives the agent access to your company knowledge base.
**Tool 2: Web Search.** Add a SerpAPI tool for real-time web information. The agent uses this when the knowledge base doesn't have the answer.
**Tool 3: Code Execution.** Add a Code Tool with JavaScript support. The agent can write and run code for calculations, data formatting, or processing.
**Tool 4: HTTP Request.** Add an HTTP Request Tool. Configure it with your internal API endpoints (or a public API for practice). The agent can call APIs to check statuses, retrieve live data, or trigger actions.
Step 3: Add Persistent Memory
Attach PostgreSQL Chat Memory to the AI Agent:
- Connection: your PostgreSQL instance
- Session ID: `{{ $json.sessionId }}` from the Chat Trigger
- This loads conversation history so the agent remembers previous exchanges
Step 4: Write the System Prompt
This is the most important part — it orchestrates everything:
You are an AI Operations Assistant for the team. You have access to:
1. **Company Knowledge Base** (Vector Store): Internal docs, policies, procedures
→ Use for: product questions, policy lookups, internal processes
→ Always check here FIRST for internal questions
2. **Web Search** (SerpAPI): Real-time internet information
→ Use for: industry news, competitor data, external information
→ Only use when the knowledge base doesn't have the answer
3. **Code Tool**: JavaScript execution environment
→ Use for: calculations, data formatting, conversions
→ Show your work when doing math
4. **HTTP Request**: Internal API access
→ Use for: checking live system status, retrieving real-time data
Rules:
- For simple greetings and small talk, respond directly without using tools
- Always cite your source: "[Knowledge Base]" or "[Web: URL]"
- If you can't find reliable information, say so — never guess
- Keep responses concise (under 200 words) unless asked for detail
- Remember context from earlier in the conversation
Step 5: Add Error Handling
For each external tool node:
- Enable Retry on Fail (3 retries, exponential backoff)
- Configure the Error Output to route to a Set node with a fallback message: “I’m having trouble accessing that resource right now. Let me try to help with what I have, or please try again in a moment.”
Set up the global Error Workflow (from Lesson 7) to send Slack alerts when any execution fails.
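To build intuition for what "3 retries, exponential backoff" does, here's a minimal sketch of the pattern in JavaScript. n8n handles this internally when you enable Retry on Fail; the function below is illustrative only, and the names (`withRetry`, `baseDelayMs`) are my own, not n8n APIs.

```javascript
// Illustrative sketch of retry-with-exponential-backoff.
// n8n's Retry on Fail setting does this for you; this just shows the idea.
async function withRetry(fn, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(); // success: return the result immediately
    } catch (err) {
      if (attempt === maxRetries) throw err; // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The doubling delay gives a flaky external API room to recover instead of hammering it with instant retries.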
Step 6: Test the Full System
Run these test scenarios through the chat:
| Test | What You’re Verifying |
|---|---|
| “What’s our refund policy?” | RAG retrieval from vector store |
| “What’s the current price of Bitcoin?” | Web search tool |
| “Calculate the compound interest on $10,000 at 5% over 3 years” | Code tool |
| “What did we talk about earlier?” | Memory persistence |
| Ask a question, close chat, reopen, ask a follow-up | Memory across sessions |
| Disconnect your internet and ask a web question | Error handling + fallback |
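For the compound interest test above, here's the kind of JavaScript the agent might generate inside the Code Tool (the function name and output shape are illustrative, not a fixed n8n contract):

```javascript
// Hypothetical calculation the agent's Code Tool could run.
// Compound interest: A = P * (1 + r)^n
function compoundInterest(principal, rate, years) {
  const amount = principal * Math.pow(1 + rate, years);
  return {
    amount: Number(amount.toFixed(2)),            // final balance
    interest: Number((amount - principal).toFixed(2)), // interest earned
  };
}

// $10,000 at 5% over 3 years
const result = compoundInterest(10000, 0.05, 3);
// result.amount === 11576.25, result.interest === 1576.25
```

If the agent "shows its work" as the system prompt instructs, it should report both the formula and these intermediate values in its reply.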
Bonus: MCP Connectivity
The Model Context Protocol (MCP) lets you expose your n8n workflows as tools that other AI assistants can call. This means Claude Desktop, for example, can use your Operations Assistant’s knowledge base without you copy-pasting anything.
n8n as an MCP Server (exposing your workflows):
- Add an MCP Server Trigger to a new workflow
- Define the tool name and description (e.g., “company_knowledge_base”)
- Connect it to a Q&A Chain with your vector store
- Configure Claude Desktop’s MCP settings to point to your n8n instance
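The Claude Desktop side of step 4 is a JSON config entry. The sketch below assumes you bridge to n8n's HTTP endpoint via the `mcp-remote` package; the server name and URL path are placeholders, and the exact setup varies by n8n version, so verify against the current n8n and Claude Desktop docs.

```json
{
  "mcpServers": {
    "n8n-operations-assistant": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance.example.com/mcp/company_knowledge_base"]
    }
  }
}
```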
n8n as an MCP Client (consuming external tools):
- Add an MCP Client Tool sub-node to your AI Agent
- Point it to any MCP server (like a filesystem server, database tool, or another n8n instance)
- The agent can now call external MCP tools as part of its reasoning loop
MCP is evolving rapidly. As of March 2026, n8n supports the HTTP Streamable transport protocol. Check the n8n docs for the latest MCP setup instructions, as the specifics change frequently.
Course Review
Here’s what you built across all 8 lessons:
| Lesson | What You Built | Key Concept |
|---|---|---|
| 1. Why n8n for AI | n8n account + orientation | 70+ AI nodes, LangChain foundation |
| 2. Fundamentals | First data pipeline | Triggers, nodes, expressions, credentials |
| 3. First AI Node | AI Email Classifier | Basic LLM Chain, prompt constraints, output parsing |
| 4. AI Agents | Multi-Tool Research Agent | ReAct loop, tools, system prompts |
| 5. Memory | Chatbot with Memory | PostgreSQL Memory, session IDs, token costs |
| 6. RAG | Knowledge Base Bot | Embeddings, vector stores, chunking strategy |
| 7. Production | Hardened workflows | Error handling, queue mode, credential security |
| 8. Capstone | Complete AI Assistant | All concepts combined + MCP |
What’s Next
You’ve completed the course. Here’s how to keep building:
- Automate your actual workflows — Start with the task you do most often. Email triage, report generation, data lookup — pick one and build it.
- Explore the template library — n8n has 8,000+ community templates, including 5,800+ AI-specific workflows. Filter by “AI” at n8n.io/workflows.
- Join the community — 200,000+ members on the n8n community forum. Post your workflows, get feedback, learn from others.
- Scale up — When a single instance isn’t enough, explore multi-worker queue mode, external secrets management, and CI/CD for workflow deployments.
The gap between “I use AI chatbots” and “I build AI automation” is smaller than you think. You just crossed it.
Key Takeaways
- The AI Agent node is the center of complex workflows — it orchestrates tools, memory, and reasoning in one node
- A good system prompt tells the agent when to use each tool, how to handle conflicts, and what format to use for output
- Error handling isn’t optional — retry logic, error outputs, and alerting are the difference between a prototype and a product
- MCP lets you expose n8n workflows as tools for other AI assistants — and consume external MCP tools inside your agents
- Start simple, test thoroughly, and add complexity only when the simpler approach isn’t enough