Lesson 3 · 15 min

Your First AI Node: The Basic LLM Chain

Build an AI Email Classifier using n8n's Basic LLM Chain node — your first real AI automation workflow.

🔄 In Lesson 2, you learned how data flows through nodes and expressions. Now we’re going to make a node that actually thinks. You’ll build an AI Email Classifier — a workflow that reads incoming emails and auto-labels them by intent. No code. Just a prompt, an LLM, and the data flow patterns you already know.

n8n’s AI Node Taxonomy

n8n has two categories of AI nodes, and understanding the difference saves you from using a cannon when a slingshot will do.

Root nodes — These are standalone nodes you drag onto the canvas. They do the AI work:

  • Basic LLM Chain — Send a prompt, get a response. No tools, no memory. (This lesson)
  • AI Agent — An autonomous agent with tools, memory, and multi-step reasoning. (Lesson 4)
  • Q&A Chain — Answer questions from documents. (Lesson 6)
  • Summarization Chain — Summarize long text.
  • Text Classifier — Classify text into categories.
  • Sentiment Analysis — Detect positive/negative/neutral sentiment.

Sub-nodes — These attach to root nodes to extend them:

  • LLM providers — OpenAI, Claude, Gemini, Groq, Ollama (you pick which model powers the root node)
  • Memory — Simple, PostgreSQL, Redis (Lesson 5)
  • Tools — SerpAPI, Wikipedia, Code, HTTP Request (Lesson 4)
  • Vector stores — Supabase, Pinecone, Qdrant (Lesson 6)
  • Embeddings — OpenAI, Cohere, local models

The root node defines the behavior. The sub-nodes define the capabilities. An AI Agent root node with an OpenAI sub-node and a SerpAPI tool sub-node creates an agent that uses GPT-4o and can search the web.

Quick Check: You need to summarize a long document. Which root node would you choose — Basic LLM Chain, AI Agent, or Summarization Chain? (Answer: The Summarization Chain. It’s purpose-built for summarization with optimized chunking. The Basic LLM Chain would work too, but you’d need to handle long documents manually. The AI Agent is overkill — you don’t need tools or memory for straightforward summarization.)

Build: The AI Email Classifier

Let’s build your first AI workflow. This classifier reads an email and categorizes it as one of: inquiry, support, urgent, or spam.

Step 1: Set Up the Trigger (Test Mode)

Start a new workflow. Instead of a Gmail Trigger (which needs real emails), use this test setup:

  1. Add a Manual Trigger node
  2. Add a Set node and create these fields:
    • subject (string): "Server down — production is broken"
    • from (string): "ops-team@company.com"
    • body (string): "Our production server went down at 3am. All customer-facing services are offline. Need immediate help."

This simulates an incoming email. When the workflow is ready, you’ll swap this for a real Gmail Trigger.
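To picture what downstream nodes receive, here is a sketch of the JSON item the Set node emits (field names are the ones configured above; the object shape is the standard single-item view that expressions like {{ $json.subject }} read from):

```javascript
// The simulated email item produced by the Set node, as seen by the next node.
const testEmail = {
  subject: "Server down — production is broken",
  from: "ops-team@company.com",
  body: "Our production server went down at 3am. All customer-facing services are offline. Need immediate help."
};
```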

Step 2: Add the Basic LLM Chain

  1. Add a Basic LLM Chain node after the Set node
  2. Click the node to open its configuration
  3. Under Model, click to add an OpenAI Chat Model sub-node:
    • Select your OpenAI credential
    • Model: gpt-4o-mini (fast, cheap, plenty smart for classification)
  4. In the Prompt field, write:
Classify this email into exactly one category.

Categories:
- inquiry: questions about products, services, or pricing
- support: technical problems, bugs, or help requests
- urgent: time-sensitive issues requiring immediate action
- spam: unsolicited marketing or irrelevant messages

Email subject: {{ $json.subject }}
Email from: {{ $json.from }}
Email body: {{ $json.body }}

Reply with exactly one word from: inquiry, support, urgent, spam
No explanation. No punctuation. Just the category word.

Notice the expressions — {{ $json.subject }}, {{ $json.from }}, {{ $json.body }} — pulling data from the Set node’s output. This is the connection between Lesson 2’s data flow and Lesson 3’s AI.
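Conceptually, expression resolution is simple template substitution. This simplified sketch (n8n's real expression engine supports far more, including full JavaScript) shows the idea:

```javascript
// Simplified illustration of how {{ $json.field }} placeholders get filled:
// each placeholder is replaced with the matching field from the incoming item.
function fillPrompt(template, json) {
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, key) => json[key] ?? "");
}

const prompt = fillPrompt(
  "Email subject: {{ $json.subject }}\nEmail body: {{ $json.body }}",
  { subject: "Server down", body: "Need immediate help." }
);
// prompt now contains the actual subject and body text
```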

Step 3: Route Based on Classification

Add an IF node after the Basic LLM Chain. Configure it:

  • Condition: {{ $json.text }} contains urgent
  • True branch: Add a Slack node (or another notification node) to alert your team
  • False branch: Connect to a Google Sheets node to log the classification

The $json.text field contains the LLM’s response — in this case, a single word like “urgent” or “support.”

Step 4: Test It

Click “Test workflow.” Watch the data flow:

  1. Manual Trigger fires
  2. Set node creates fake email data
  3. Basic LLM Chain classifies it (you’ll see “urgent” in the output panel)
  4. IF node routes it to the right branch

Change the Set node’s test data to different email scenarios and run again. Try a sales pitch (should classify as spam), a pricing question (inquiry), and a bug report (support).

Quick Check: What if the LLM returns “URGENT” (uppercase) but your IF node checks for “urgent” (lowercase)? (Answer: The condition would fail. Fix this by either lowercasing the LLM output with an expression such as {{ $json.text.toLowerCase() }}, or by enabling the IF node’s case-insensitive comparison option. Always normalize LLM output before routing.)
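A normalization helper like this (a sketch you might run in a Code node, or inline as an expression) makes routing robust to casing and stray punctuation:

```javascript
// Normalize the LLM's reply before the IF node: trim whitespace,
// lowercase, and strip anything that isn't a letter, so " URGENT. "
// becomes "urgent".
function normalizeCategory(raw) {
  return raw.trim().toLowerCase().replace(/[^a-z]/g, "");
}
```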

Prompt Engineering for n8n

Writing prompts for automation is different from chatting with an LLM. In a chat, verbose responses are fine. In a workflow, you need predictable, parseable output that downstream nodes can act on.

Three rules for n8n prompts:

1. Constrain the output format explicitly

Bad:  "What category is this email?"
Good: "Reply with exactly one word: inquiry, support, urgent, spam"
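Even with a tightly constrained prompt, models occasionally drift from the format. A defensive pattern (sketch, e.g. for a Code node; the fallback choice is an assumption you'd tune for your workflow) is to validate the reply against an allow-list and fall back to a safe default:

```javascript
// Validate the LLM reply against the known categories; anything
// unexpected falls back to "support" so a human still sees it.
const CATEGORIES = ["inquiry", "support", "urgent", "spam"];

function safeCategory(reply, fallback = "support") {
  const cleaned = reply.trim().toLowerCase();
  return CATEGORIES.includes(cleaned) ? cleaned : fallback;
}
```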

2. Provide the complete context in the prompt

The LLM has no memory between executions. Every prompt must include all the data it needs — don’t assume it “knows” what you’re working on.

3. Use examples (few-shot) for complex tasks

Examples:
- "Can you tell me about pricing?" → inquiry
- "The export button doesn't work" → support
- "Production database is corrupted" → urgent
- "Buy discount watches now!" → spam

Now classify: {{ $json.body }}

Few-shot examples dramatically improve classification accuracy. For tasks where the boundary between categories is fuzzy (is “our report is late” urgent or support?), adding 3-5 examples per category is the difference between 70% and 95% accuracy.
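If you keep your labeled examples in a list, assembling the few-shot prompt is mechanical. A sketch (the examples here are the illustrative ones from above, not a real dataset):

```javascript
// Build a few-shot classification prompt from labeled examples.
const examples = [
  { text: "Can you tell me about pricing?", label: "inquiry" },
  { text: "The export button doesn't work", label: "support" },
  { text: "Production database is corrupted", label: "urgent" },
  { text: "Buy discount watches now!", label: "spam" },
];

function buildFewShotPrompt(emailBody) {
  const shots = examples.map(e => `- "${e.text}" → ${e.label}`).join("\n");
  return `Examples:\n${shots}\n\nNow classify: ${emailBody}`;
}
```

Growing the `examples` array is then the only change needed to improve accuracy on fuzzy cases.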

From Test to Production

Once your classifier works with test data, swap the trigger:

  1. Delete or disconnect the Manual Trigger and Set node
  2. Add a Gmail Trigger node at the start
  3. Configure it with your Gmail credential
  4. Set the trigger to “New Email”
  5. Update the expressions in the Basic LLM Chain to match Gmail’s output structure:
    • Subject: {{ $json.subject }} (same)
    • From: {{ $json.from.value[0].address }}
    • Body: {{ $json.text }} (or {{ $json.snippet }} for shorter text)

Now activate the workflow. Every new email that arrives will be automatically classified and routed.
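To see why the expressions change, here is a minimal mock of a Gmail Trigger item based on the field paths above (actual payloads vary by node version and settings, so treat the shape as an assumption to verify against your own output panel):

```javascript
// Mock of a Gmail Trigger item, matching the expression paths used above.
const gmailItem = {
  subject: "Question about pricing",
  from: { value: [{ address: "prospect@example.com", name: "Ada" }] },
  text: "Hi, how much does the Pro plan cost?",
};

// Each expression maps to a plain property lookup:
const subject = gmailItem.subject;                   // {{ $json.subject }}
const fromAddress = gmailItem.from.value[0].address; // {{ $json.from.value[0].address }}
const body = gmailItem.text;                         // {{ $json.text }}
```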

Key Takeaways

  • Root nodes (Basic LLM Chain, AI Agent, Q&A Chain) define the AI behavior; sub-nodes (LLM providers, memory, tools) define the capabilities
  • The Basic LLM Chain is your go-to for simple prompt→response tasks: classification, summarization, extraction
  • Constrain your prompt output — predictable formats make downstream routing reliable
  • Use a Manual Trigger + Set node for development, then swap to the real trigger for production
  • Few-shot examples in prompts significantly improve classification accuracy

Up Next

In Lesson 4, you’ll upgrade from the Basic LLM Chain to the AI Agent — n8n’s most powerful AI node. You’ll build a Multi-Tool Research Agent that can search the web, query Wikipedia, run code, and synthesize its findings. The difference: agents don’t just respond to prompts — they decide what to do.

Knowledge Check

1. What's the difference between the Basic LLM Chain and the AI Agent node?

2. Your email classifier returns 'The category is: support request' instead of just 'support'. How do you fix this?

3. You're building the email classifier and want to test it without waiting for real emails. What's the best approach?
