AI Agents

AI Agents are autonomous systems that can reason about problems, use tools, and execute multi-step tasks to achieve goals. In n8n, agents are built on LangChain and integrate directly into your workflows.

What is an AI Agent?

An AI agent is a language model that can:
  • Reason: Think through problems step-by-step
  • Use Tools: Call external functions, APIs, or workflows
  • Make Decisions: Choose which tools to use and when
  • Maintain Context: Remember previous interactions with memory
  • Iterate: Try different approaches until a goal is achieved
Unlike simple LLM chains that execute linearly, agents can loop, make decisions, and adapt their approach based on intermediate results.
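The loop described above can be sketched in plain JavaScript. This is an illustrative model of the reason/act/observe cycle, not n8n's actual implementation: the `chooseAction` function stands in for the LLM call, and the hard-coded tool and policy are hypothetical.

```javascript
// Minimal sketch of an agent loop: the model picks a tool, observes the
// result, and repeats until it decides to answer or hits an iteration limit.
function runAgent(chooseAction, tools, maxIterations = 5) {
  const steps = [];
  for (let i = 0; i < maxIterations; i++) {
    const action = chooseAction(steps); // stand-in for the LLM call
    if (action.type === "final") {
      return { output: action.output, intermediateSteps: steps };
    }
    const observation = tools[action.tool](action.input);
    steps.push({ action: action.tool, input: action.input, output: observation });
  }
  return { output: "Stopped: max iterations reached", intermediateSteps: steps };
}

// Example: a hard-coded "model" that uses a calculator once, then answers.
const tools = { calculator: (expr) => String(eval(expr)) }; // eval: sketch only
const chooseAction = (steps) =>
  steps.length === 0
    ? { type: "tool", tool: "calculator", input: "12 * 45" }
    : { type: "final", output: `The result is ${steps[0].output}` };

const result = runAgent(chooseAction, tools);
```

Note how the loop terminates either on a `final` action or on the iteration cap, which is the same safeguard discussed under "Agent Loops Forever" later in this page.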

Agent Node

The AI Agent node (@n8n/n8n-nodes-langchain.agent) is the core component for building agent workflows in n8n.

Key Features

  • Multiple Input Types: Accept prompts from previous nodes, define custom prompts, or use guardrails
  • Tool Support: Connect multiple tools for the agent to use
  • Memory Integration: Add memory nodes for conversation context
  • Output Parsing: Structure agent responses with custom schemas
  • Streaming Support: Stream responses in real-time
  • Fallback Models: Configure backup language models for reliability

Configuration Options

{
  displayName: 'AI Agent',
  name: 'agent',
  inputs: [
    'main',                    // Input data
    'ai_languageModel',        // Required: LLM connection
    'ai_memory',               // Optional: Memory
    'ai_tool',                 // Optional: Tools (multiple)
    'ai_outputParser'          // Optional: Output parser
  ],
  outputs: ['main']
}

Building Your First Agent

1. Add a Trigger

Start with a Chat Trigger or Manual Chat Trigger node to receive input.
{
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "name": "When chat message received"
}
2. Add a Language Model

Connect a language model like OpenAI Chat Model.
{
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "parameters": {
    "model": "gpt-4",
    "temperature": 0.7
  }
}
3. Add Tools (Optional)

Connect tool nodes to give your agent capabilities. Available tools:
  • Calculator Tool
  • HTTP Request Tool
  • Wikipedia Tool
  • Workflow Tool (call other n8n workflows)
  • Code Tool
  • Vector Store Tool
4. Add Memory (Optional)

Connect a memory node to maintain conversation context.
{
  "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
  "parameters": {
    "contextWindowLength": 10
  }
}
5. Configure the Agent

Add the AI Agent node and configure the prompt.
{
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "promptType": "auto"
  }
}
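Put together, the five steps above form a workflow like the following. This is a simplified sketch: real n8n exports also include node ids, positions, credentials, and typed connection ports such as `ai_languageModel`, which are omitted here.

```javascript
// The five nodes from the steps above, assembled into one workflow object
// (simplified; connection wiring and node ids are omitted).
const workflow = {
  nodes: [
    { name: "When chat message received",
      type: "@n8n/n8n-nodes-langchain.chatTrigger" },
    { name: "OpenAI Chat Model",
      type: "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      parameters: { model: "gpt-4", temperature: 0.7 } },
    { name: "Calculator",
      type: "@n8n/n8n-nodes-langchain.toolCalculator" },
    { name: "Window Buffer Memory",
      type: "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      parameters: { contextWindowLength: 10 } },
    { name: "AI Agent",
      type: "@n8n/n8n-nodes-langchain.agent",
      parameters: { promptType: "auto" } },
  ],
};
```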

Agent Architecture

From the source code (packages/@n8n/nodes-langchain/nodes/agents/Agent/V3/AgentV3.node.ts:25), the agent uses a “Tools Agent” architecture:
export class AgentV3 implements INodeType {
  description: INodeTypeDescription;
  
  constructor(baseDescription: INodeTypeBaseDescription) {
    this.description = {
      ...baseDescription,
      version: [3, 3.1],
      defaults: {
        name: 'AI Agent',
        color: '#404040',
      },
      builderHint: {
        inputs: {
          ai_languageModel: { required: true },
          ai_memory: { required: false },
          ai_tool: { required: false },
          ai_outputParser: { required: false }
        }
      }
    };
  }
}

Agent Tools

Tools extend what your agent can do. Here are the built-in tools:

Calculator Tool

Perform mathematical calculations:
{
  "type": "@n8n/n8n-nodes-langchain.toolCalculator",
  "name": "Calculator"
}

HTTP Request Tool

Make API calls:
{
  "type": "@n8n/n8n-nodes-langchain.toolHttpRequest",
  "parameters": {
    "method": "GET",
    "url": "https://api.example.com"
  }
}

Workflow Tool

The most powerful tool: it calls other n8n workflows. From the source (packages/@n8n/nodes-langchain/nodes/tools/ToolWorkflow/ToolWorkflow.node.ts:10):
{
  displayName: 'Call n8n Sub-Workflow Tool',
  name: 'toolWorkflow',
  description: 'Uses another n8n workflow as a tool. Allows packaging any n8n node(s) as a tool.'
}
The Workflow Tool lets you turn any n8n workflow into a tool the agent can call, which means you can give your agent access to databases, APIs, file systems, and anything else n8n can reach.
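Conceptually, the Workflow Tool wraps a sub-workflow execution as a named, described callable. The sketch below models that idea with a plain function standing in for the sub-workflow; the tool name, description, and fake lookup data are hypothetical.

```javascript
// Sketch of what the Workflow Tool does conceptually: wrap a callable
// (a stand-in for a sub-workflow execution) as a named tool whose
// description the agent reads when deciding what to call.
function makeWorkflowTool(name, description, executeWorkflow) {
  return {
    name,
    description,
    invoke: (query) => executeWorkflow({ query }),
  };
}

// Hypothetical sub-workflow: look up a user by name (a real n8n workflow
// here might query a database and return the matching record).
const userLookup = makeWorkflowTool(
  "user_lookup",
  "Look up a user record by name",
  ({ query }) => ({ name: query, id: 42 }) // fake data for illustration
);
```

The description string matters most: as the Best Practices section below notes, it is what the agent uses to decide whether this tool fits the task.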

Vector Store Tool

Query vector databases for semantic search:
{
  "type": "@n8n/n8n-nodes-langchain.toolVectorStore",
  "parameters": {
    "name": "knowledge_base",
    "description": "Search the company knowledge base"
  }
}

Code Tool

Execute JavaScript or Python code:
{
  "type": "@n8n/n8n-nodes-langchain.toolCode",
  "parameters": {
    "language": "javascript",
    "code": "// Your code here"
  }
}
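As a concrete example of the configuration above, the snippet below fills in the `code` parameter with a small JavaScript body and then simulates what happens when the agent invokes it. The `query` input name and the direct `new Function` evaluation are assumptions for illustration, not the Code Tool's exact runtime.

```javascript
// A filled-in Code Tool configuration: a snippet the agent could call to
// slugify text. The `query` parameter name is an assumption.
const toolCodeNode = {
  type: "@n8n/n8n-nodes-langchain.toolCode",
  parameters: {
    language: "javascript",
    code: "return query.toLowerCase().trim().replace(/\\s+/g, '-');",
  },
};

// Simulate the agent invoking the snippet with a query string.
const runSnippet = (query) =>
  new Function("query", toolCodeNode.parameters.code)(query);
```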

Wikipedia Tool

Search Wikipedia:
{
  "type": "@n8n/n8n-nodes-langchain.toolWikipedia"
}

SerpAPI Tool

Search Google:
{
  "type": "@n8n/n8n-nodes-langchain.toolSerpApi"
}

Wolfram Alpha Tool

Computational knowledge:
{
  "type": "@n8n/n8n-nodes-langchain.toolWolframAlpha"
}

Memory for Agents

Memory allows agents to maintain context across multiple interactions. Choose the right memory type for your use case:

Simple Memory (Development)

Node: memoryBufferWindow
Best for: Development and testing
{
  "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
  "parameters": {
    "sessionIdType": "customKey",
    "sessionKey": "{{ $json.sessionId }}",
    "contextWindowLength": 10
  }
}
Simple Memory stores conversation data in the n8n process’s own memory. It is not suitable for production environments running Queue Mode or multi-main setups, where separate workers do not share that memory.
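The window-buffer behavior is easy to model: only the last `contextWindowLength` messages are kept per session, and sessions are isolated by their session key. This sketch is illustrative, not the node's actual implementation.

```javascript
// Illustrative model of a window buffer memory: each session keeps only
// the most recent `contextWindowLength` messages.
function makeWindowMemory(contextWindowLength) {
  const sessions = new Map();
  return {
    add(sessionId, message) {
      const history = sessions.get(sessionId) ?? [];
      history.push(message);
      sessions.set(sessionId, history.slice(-contextWindowLength));
    },
    get(sessionId) {
      return sessions.get(sessionId) ?? [];
    },
  };
}

const memory = makeWindowMemory(2);
memory.add("s1", "hello");
memory.add("s1", "how are you?");
memory.add("s1", "tell me a joke"); // "hello" falls out of the window
```

This is why the session key matters: without distinct session IDs, unrelated conversations would share one window.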

Redis Memory (Production)

Node: memoryRedisChat
Best for: Production with distributed workers
{
  "type": "@n8n/n8n-nodes-langchain.memoryRedisChat",
  "credentials": "redisApi"
}

Postgres Memory

Node: memoryPostgresChat
Best for: Persistent, queryable conversation history

MongoDB Memory

Node: memoryMongoDbChat
Best for: Document-based storage with flexibility

Zep Memory

Node: memoryZep
Best for: Advanced features like automatic summarization and fact extraction

Output Parsing

Structure agent responses into reliable formats:

Structured Output Parser

Define custom JSON schemas:
{
  "type": "object",
  "properties": {
    "answer": { "type": "string" },
    "confidence": { "type": "number" },
    "sources": { "type": "array" }
  }
}
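Before downstream nodes consume an agent response, it can help to check that the output actually matches the schema above. The minimal validator below covers just the three declared properties; a real workflow would use a full JSON Schema validator.

```javascript
// Minimal check that a response matches the schema above
// (answer: string, confidence: number, sources: array).
function matchesSchema(response) {
  return (
    typeof response.answer === "string" &&
    typeof response.confidence === "number" &&
    Array.isArray(response.sources)
  );
}

const good = { answer: "Paris", confidence: 0.92, sources: ["wikipedia"] };
const bad = { answer: "Paris", confidence: "high", sources: [] };
```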

Auto-fixing Output Parser

Automatically fix malformed JSON responses:
{
  "type": "@n8n/n8n-nodes-langchain.outputParserAutofixing"
}

Item List Output Parser

Extract lists from responses:
{
  "type": "@n8n/n8n-nodes-langchain.outputParserItemList"
}

Advanced Agent Patterns

Multi-Step Reasoning Agent

{
  "workflow": {
    "nodes": [
      { "type": "chatTrigger" },
      { "type": "lmChatOpenAi", "model": "gpt-4" },
      { "type": "toolCalculator" },
      { "type": "toolWikipedia" },
      { "type": "toolWorkflow", "description": "Database query tool" },
      { "type": "memoryRedisChat" },
      { "type": "agent" }
    ]
  }
}
Knowledge Base Agent

{
  "workflow": {
    "nodes": [
      { "type": "chatTrigger" },
      { "type": "lmChatOpenAi" },
      { "type": "toolVectorStore", "vectorStore": "pinecone" },
      { "type": "memoryBufferWindow" },
      { "type": "agent" }
    ]
  }
}

Research Agent

{
  "workflow": {
    "nodes": [
      { "type": "manualChatTrigger" },
      { "type": "lmChatOpenAi", "model": "gpt-4" },
      { "type": "toolSerpApi" },
      { "type": "toolWikipedia" },
      { "type": "toolHttpRequest" },
      { "type": "agent" }
    ]
  }
}

Best Practices

Clear Tool Descriptions: The agent decides which tools to use based on their descriptions. Make them clear and specific!

Prompt Engineering

  • Be specific about the agent’s role and capabilities
  • Provide examples of desired behavior
  • Set clear boundaries and constraints
  • Define success criteria

Tool Design

  • Keep tools focused on a single responsibility
  • Provide comprehensive tool descriptions
  • Handle errors gracefully
  • Return structured data when possible

Memory Management

  • Choose appropriate context window length
  • Use session IDs to separate conversations
  • Clean up old sessions periodically
  • Consider costs of large context windows

Performance

  • Use faster models (GPT-3.5) for tool selection
  • Limit the number of available tools
  • Implement tool result caching when possible
  • Monitor token usage and costs
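The tool-result caching mentioned above can be as simple as memoizing a tool function by its input, so repeated identical calls skip the expensive work. A sketch, with a counter to show the cache working:

```javascript
// Memoize a tool function by its input: repeated identical calls
// return the cached result instead of re-running the tool.
function cached(toolFn) {
  const cache = new Map();
  return (input) => {
    if (!cache.has(input)) cache.set(input, toolFn(input));
    return cache.get(input);
  };
}

let calls = 0;
const slowTool = (x) => { calls++; return x.length; }; // stand-in for a real tool
const fastTool = cached(slowTool);
fastTool("hello"); // computes
fastTool("hello"); // served from cache
```

For real tools, the cache key should capture everything that affects the result, and cached entries should expire if the underlying data can change.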

Error Handling

  • Enable “Continue on Fail” in production
  • Implement fallback language models
  • Add error handling in custom tools
  • Log agent reasoning for debugging
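The fallback-model bullet above boils down to a simple pattern: try the primary model, and switch to a backup when it throws. The model functions here are hypothetical stand-ins for LLM calls.

```javascript
// Fallback pattern: call the primary model, fall back to a backup on error.
function withFallback(primary, fallback) {
  return (prompt) => {
    try {
      return primary(prompt);
    } catch (_err) {
      return fallback(prompt); // e.g. a cheaper or different-provider model
    }
  };
}

const flaky = () => { throw new Error("rate limited"); }; // simulated failure
const backup = (prompt) => `backup answer to: ${prompt}`;
const callModel = withFallback(flaky, backup);
```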

OpenAI Assistant Node

For OpenAI-specific features, use the OpenAI Assistant node:
{
  "type": "@n8n/n8n-nodes-langchain.openAiAssistant",
  "parameters": {
    "assistantId": "asst_xxx"
  }
}
Features:
  • Code Interpreter
  • File Search
  • Function Calling
  • Persistent threads

Debugging Agents

Enable Streaming

Watch the agent’s thought process in real-time:
{
  "parameters": {
    "enableStreaming": true
  }
}

Check Intermediate Steps

The agent response includes metadata about tool usage:
{
  "output": "Final answer",
  "intermediateSteps": [
    {
      "action": "calculator",
      "input": "12 * 45",
      "output": "540"
    }
  ]
}
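Given the response shape above, a small helper can summarize which tools the agent actually called, which is useful when logging agent reasoning for debugging.

```javascript
// Count tool invocations from the intermediateSteps array shown above.
function toolUsage(response) {
  const counts = {};
  for (const step of response.intermediateSteps ?? []) {
    counts[step.action] = (counts[step.action] ?? 0) + 1;
  }
  return counts;
}

const response = {
  output: "Final answer",
  intermediateSteps: [
    { action: "calculator", input: "12 * 45", output: "540" },
    { action: "calculator", input: "540 / 2", output: "270" },
    { action: "wikipedia", input: "Paris", output: "..." },
  ],
};
```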

Common Issues

Agent Loops Forever

  • Check tool descriptions are clear
  • Ensure tools return useful data
  • Set max iterations limit
  • Verify prompt instructs agent when to stop

Agent Doesn’t Use Tools

  • Make tool descriptions more specific
  • Adjust temperature (try 0.7-0.9)
  • Check tool is actually connected
  • Try a more capable model (GPT-4)

High Token Usage

  • Reduce context window length
  • Limit number of tools
  • Use smaller embedding models
  • Implement result summarization

Next Steps

LangChain Nodes

Explore all available LangChain nodes

Vector Stores

Add knowledge retrieval to your agents

Example Workflows

Browse agent workflow templates

Advanced Tutorial

Complete agent building tutorial