
LangChain Integration

LangChain is a framework for developing applications powered by language models. n8n provides comprehensive integration with LangChain, allowing you to build sophisticated AI workflows with agents, chains, tools, memory, and more.

Overview

The n8n LangChain integration includes:

AI Agents

Autonomous agents that can reason, plan, and use tools

Chains

Sequential processing pipelines for AI tasks

Chat Models

Connect to various LLM providers

Tools

Extend agent capabilities with custom tools

Memory

Maintain conversation context and history

Vector Stores

Store and retrieve embeddings for RAG

Core Concepts

LangChain in n8n

n8n implements LangChain concepts as interconnected nodes.

Node Types:
  • Root Nodes: Execute and produce outputs (AI Agent, Chain LLM)
  • Sub-Nodes: Provide capabilities to root nodes (Chat Model, Memory, Tools)
  • Connection Types: Different input types (Model, Memory, Tool, Vector Store)

AI Agents

AI Agents are autonomous systems that can reason, plan, and execute tasks using connected tools.

AI Agent Node

The AI Agent is the most powerful LangChain node in n8n.
1. Add AI Agent Node

Add the AI Agent node to your workflow as a root node.
2. Connect a Chat Model

Connect a chat model node to the Model input:
  • OpenAI Chat Model
  • Anthropic Chat Model
  • Google Gemini Chat Model
  • Other LLM providers
3. Add Tools (Optional)

Connect tool nodes to the Tools input:
  • HTTP Request Tool
  • Code Tool
  • Calculator Tool
  • Custom n8n Tools
  • Vector Store Retrieval Tool
4. Add Memory (Optional)

Connect a memory node to the Memory input:
  • Buffer Memory
  • Window Buffer Memory
  • Conversation Summary Memory
5. Configure Options

Set agent options:
  • System message
  • Prompt type
  • Max iterations
  • Return intermediate steps

Agent Types

Conversational Agent

Best for chat-based interactions with a natural conversation flow.

Features:
  • Multi-turn conversations
  • Memory integration
  • Natural language understanding
  • Tool selection based on context
Use Cases:
  • Customer support chatbots
  • Virtual assistants
  • Interactive Q&A systems
Example Configuration:
{
  "promptType": "conversational",
  "systemMessage": "You are a helpful assistant that can search the web and access our database."
}

Agent Configuration

{
  "systemMessage": "You are a helpful AI assistant.",
  "promptType": "conversational",
  "maxIterations": 10
}

Chains

Chains are sequential processing pipelines that connect multiple LangChain components.

Chain LLM Node

Simple chain for basic LLM tasks, without the overhead of an agent's reasoning loop.

Use Cases:
  • Simple text generation
  • Template-based responses
  • Single-step processing
  • Tasks where speed matters more than tool use
Example:
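A minimal sketch of a Chain LLM setup. The JSON shape here is illustrative rather than an exact n8n export; `{{ $json.text }}` is an n8n expression referencing a field on the incoming item:

```json
{
  "node": "Chain LLM",
  "parameters": {
    "prompt": "Summarize the following text in three bullet points:\n\n{{ $json.text }}"
  },
  "inputs": {
    "model": "OpenAI Chat Model"
  }
}
```

Because there is no tool-selection loop, the chain makes exactly one model call per item.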

Chain Retrieval QA Node

Question-answering chain with vector store retrieval (RAG).

Components:
  • Vector Store Retrieval Tool
  • Chat Model
  • (Optional) Memory

Configuration:
{
  "returnSourceDocuments": true,
  "topK": 4,
  "options": {
    "chainType": "stuff"
  }
}

Chat Models

Chat models are the LLM providers that power your agents and chains.

Available Chat Models

Node: OpenAI Chat Model

Models:
  • GPT-4 Turbo (most capable)
  • GPT-4 (balanced)
  • GPT-3.5 Turbo (fast and economical)
  • o1/o3 (advanced reasoning)
Features:
  • Function calling
  • JSON mode
  • Vision (GPT-4 Turbo)
  • 128K context (GPT-4 Turbo)

Tools

Tools extend agent capabilities by giving them access to external systems and functions.

Built-in Tools

HTTP Request Tool

Make API calls to external services
  • RESTful APIs
  • Authentication support
  • Custom headers

Code Tool

Execute JavaScript or Python code
  • Data transformation
  • Custom logic
  • Library access

Calculator Tool

Perform mathematical calculations
  • Basic arithmetic
  • Complex expressions
  • Numeric operations

Vector Store Tool

Search vector databases
  • Semantic search
  • RAG workflows
  • Document retrieval

Custom n8n Tools

Create custom tools using any n8n workflow:
1. Create Tool Workflow

Build a workflow that performs your tool’s function.
2. Add Workflow Tool Node

Add the Workflow Tool node to your agent workflow.
3. Configure Tool

  • Select the tool workflow
  • Set tool name and description
  • Define input/output schema
4. Connect to Agent

Connect the tool to your AI Agent’s Tools input.
Example Tool Structure:
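As a rough sketch (the field layout and the `lookup_order` tool are hypothetical, not an exact n8n export), a Workflow Tool configuration might look like:

```json
{
  "node": "Workflow Tool",
  "parameters": {
    "name": "lookup_order",
    "description": "Look up a customer order by ID. Use when the user asks about order status. Input: { \"orderId\": string }. Returns order status and line items.",
    "workflowId": "<id of the tool workflow>"
  }
}
```

The description matters most: the agent decides whether and how to call the tool based on it.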

Agent Tool (Sub-Agents)

Create multi-agent systems with specialized sub-agents.

Benefits:
  • Specialized expertise
  • Different models for different tasks
  • Parallel processing
  • Modular architecture

Memory

Memory allows agents to maintain context across multiple interactions.

Memory Types

Buffer Memory

Stores the complete conversation history.

Features:
  • Complete history
  • Simple implementation
  • No data loss
Use Cases:
  • Short conversations
  • When full context is needed
  • Debugging
Configuration:
{
  "memoryType": "buffer",
  "contextWindowLength": 5
}
Limitations:
  • Can exceed token limits
  • More expensive with long conversations

Vector Stores

Vector stores enable semantic search and retrieval-augmented generation (RAG).

Available Vector Stores

Pinecone

Managed vector database
  • Fully managed
  • Fast and scalable
  • Easy setup

Chroma

Open source vector store
  • Self-hosted option
  • Great for development
  • Cost-effective

Qdrant

High-performance vector search
  • Fast queries
  • Advanced filtering
  • Self-hosted or cloud

Supabase

PostgreSQL with vectors
  • Familiar SQL interface
  • Integrated with Supabase
  • Good for small datasets

RAG Workflow

Build a Retrieval-Augmented Generation system:
1. Prepare Documents

Split your documents into chunks, generate embeddings, and store them in a vector store.

2. Create Retrieval Tool

Add a Vector Store Retrieval Tool that searches your vector store.
3. Connect to Agent

Wire the retrieval tool to your AI Agent’s tools input.
4. Query with Context

The agent automatically retrieves relevant documents and uses them to answer questions.
Complete RAG Example:
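Putting the steps together, a RAG setup might be wired like this. The JSON shape is an illustrative sketch, not an n8n export; the retrieval parameters follow the configuration shown earlier:

```json
{
  "agent": {
    "promptType": "tools",
    "systemMessage": "Answer using only the retrieved documents. If the answer is not in them, say so."
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "tools": ["Vector Store Retrieval Tool (connected to Pinecone)"]
  },
  "retrieval": {
    "topK": 4,
    "returnSourceDocuments": true
  }
}
```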

Common Patterns

1. Simple Q&A Agent

Basic conversational agent:
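A sketch of the wiring (illustrative JSON, not an n8n export):

```json
{
  "agent": {
    "promptType": "conversational",
    "systemMessage": "You are a helpful assistant. Answer clearly and concisely."
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "memory": "Window Buffer Memory"
  }
}
```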

2. Research Agent

Agent with web search:
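One possible wiring, assuming search goes through the HTTP Request Tool against a search API of your choice (illustrative sketch):

```json
{
  "agent": {
    "systemMessage": "You are a research assistant. Search before answering and cite your sources.",
    "maxIterations": 10
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "tools": ["HTTP Request Tool (search API)"]
  }
}
```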

3. Data Analysis Agent

Agent with code execution:
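A sketch combining the Code Tool and Calculator Tool (illustrative):

```json
{
  "agent": {
    "systemMessage": "You are a data analyst. Use the code tool for data transformation and the calculator for arithmetic."
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "tools": ["Code Tool", "Calculator Tool"]
  }
}
```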

4. Customer Support Agent

Full-featured support agent:
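A sketch of a support agent combining memory, RAG over a knowledge base, and a custom lookup tool (all names illustrative):

```json
{
  "agent": {
    "promptType": "conversational",
    "systemMessage": "You are a support agent. Check the knowledge base before answering; escalate if unsure."
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "memory": "Window Buffer Memory",
    "tools": [
      "Vector Store Retrieval Tool (knowledge base)",
      "Workflow Tool (order lookup)"
    ]
  }
}
```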

5. Multi-Agent System

Orchestrator with specialized agents:
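A sketch of an orchestrator delegating to sub-agents via Agent Tools (illustrative):

```json
{
  "agent": {
    "systemMessage": "You are an orchestrator. Delegate research to the research agent and writing to the writing agent."
  },
  "inputs": {
    "model": "OpenAI Chat Model",
    "tools": [
      "Agent Tool → Research Agent (with HTTP Request Tool)",
      "Agent Tool → Writing Agent (with its own chat model)"
    ]
  }
}
```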

Best Practices

1. Start Simple

Begin with a basic agent and add complexity gradually:
  1. Single chat model
  2. Add memory
  3. Add one tool
  4. Add more tools as needed
2. Write Clear Tool Descriptions

Tools are selected based on descriptions:
  • Be specific about what the tool does
  • Include when to use it
  • Mention input/output format
  • Give examples if helpful
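For example, a description following these guidelines might read (tool name and schema are hypothetical):

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a city. Use when the user asks about weather or outdoor conditions. Input: { \"city\": string }. Returns temperature in °C and a short conditions summary."
}
```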
3. Manage Context Windows

  • Use appropriate memory types
  • Consider token limits
  • Summarize when needed
  • Choose models with larger context for complex tasks
4. Handle Errors Gracefully

  • Implement retry logic
  • Provide fallback options
  • Log agent reasoning
  • Monitor tool failures
5. Optimize for Cost

  • Use cheaper models when possible
  • Limit max iterations
  • Cache results
  • Use window buffer memory
  • Monitor token usage
6. Test Tools Independently

  • Verify each tool works correctly
  • Test error cases
  • Check output formats
  • Ensure consistent behavior

Troubleshooting

Agent Not Using Tools

Possible causes:
  • Tool descriptions unclear
  • Wrong prompt type
  • Model doesn’t support function calling
  • System message overriding tool use
Solutions:
  1. Improve tool descriptions
  2. Use “tools” or “openai-functions” prompt type
  3. Use compatible model (GPT-3.5-turbo+, Claude 3+, etc.)
  4. Adjust system message

Agent Loops or Doesn’t Finish

Possible causes:
  • Tool returns unclear results
  • Max iterations too high
  • Circular dependencies
  • Poor prompt engineering
Solutions:
  1. Lower max iterations (try 10-15)
  2. Improve tool output clarity
  3. Review agent reasoning (enable intermediate steps)
  4. Refine system prompt

Memory Not Working

Possible causes:
  • Memory not connected
  • Session ID not set
  • Memory type mismatch
  • Context window exceeded
Solutions:
  1. Verify memory connection
  2. Set consistent session IDs
  3. Use window buffer for long conversations
  4. Check token usage

RAG Returns Irrelevant Results

Possible causes:
  • Poor document chunking
  • Embedding model mismatch
  • Top K too low
  • Query not specific enough
Solutions:
  1. Adjust chunk size and overlap
  2. Use same embedding model consistently
  3. Increase top K (try 5-10)
  4. Improve query formulation

Resources