AI Agents
AI Agents are autonomous systems that can reason about problems, use tools, and execute multi-step tasks to achieve goals. In n8n, agents are built on LangChain and integrate directly into your workflows.

What is an AI Agent?
An AI agent is a language model that can:

- Reason: Think through problems step-by-step
- Use Tools: Call external functions, APIs, or workflows
- Make Decisions: Choose which tools to use and when
- Maintain Context: Remember previous interactions with memory
- Iterate: Try different approaches until a goal is achieved
Unlike simple LLM chains that execute linearly, agents can loop, make decisions, and adapt their approach based on intermediate results.
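The loop described above can be sketched in a few lines. This is illustrative only: the model is a stub that decides whether to call a tool or finish, standing in for the LangChain machinery the n8n agent node uses internally.

```javascript
// Minimal sketch of an agent loop. The "model" is a stub: it requests the
// calculator once, then finishes using the tool's observation.
const tools = {
  calculator: (expr) => String(eval(expr)), // toy tool; never eval untrusted input
};

function model(messages) {
  const lastObservation = messages.filter((m) => m.role === "tool").pop();
  if (!lastObservation) {
    return { action: "call_tool", tool: "calculator", input: "2 + 3 * 4" };
  }
  return { action: "finish", output: `The answer is ${lastObservation.content}` };
}

function runAgent(prompt, maxIterations = 5) {
  const messages = [{ role: "user", content: prompt }];
  for (let i = 0; i < maxIterations; i++) {
    const step = model(messages);
    if (step.action === "finish") return step.output; // agent decided to stop
    const observation = tools[step.tool](step.input); // execute the chosen tool
    messages.push({ role: "tool", content: observation }); // feed result back
  }
  throw new Error("Max iterations reached"); // guard against infinite loops
}

const answer = runAgent("What is 2 + 3 * 4?");
```

Note the `maxIterations` guard: it is the same idea as the max-iterations limit discussed under troubleshooting later in this page.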
Agent Node
The AI Agent node (@n8n/n8n-nodes-langchain.agent) is the core component for building agent workflows in n8n.
Key Features
- Multiple Input Types: Accept prompts from previous nodes, define custom prompts, or use guardrails
- Tool Support: Connect multiple tools for the agent to use
- Memory Integration: Add memory nodes for conversation context
- Output Parsing: Structure agent responses with custom schemas
- Streaming Support: Stream responses in real-time
- Fallback Models: Configure backup language models for reliability
Configuration Options
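As a rough illustration of what agent node configuration looks like, here is a sketch of the parameters JSON. Field names are illustrative and may differ between node versions; check the node's own parameter panel for the authoritative list.

```json
{
  "promptType": "define",
  "text": "={{ $json.chatInput }}",
  "options": {
    "systemMessage": "You are a helpful research assistant.",
    "maxIterations": 10,
    "returnIntermediateSteps": true
  }
}
```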
Building Your First Agent
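A minimal agent workflow wires a trigger into the agent's main input, and a chat model (plus optional tools and memory) into its special AI inputs. The sketch below shows the shape of such a workflow's JSON; node type names other than `@n8n/n8n-nodes-langchain.agent` (which appears earlier on this page) are assumptions, so verify them against your n8n instance.

```json
{
  "nodes": [
    { "name": "When chat message received", "type": "@n8n/n8n-nodes-langchain.chatTrigger" },
    { "name": "AI Agent", "type": "@n8n/n8n-nodes-langchain.agent" },
    { "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi" },
    { "name": "Calculator", "type": "@n8n/n8n-nodes-langchain.toolCalculator" }
  ],
  "connections": {
    "When chat message received": { "main": [[{ "node": "AI Agent" }]] },
    "OpenAI Chat Model": { "ai_languageModel": [[{ "node": "AI Agent" }]] },
    "Calculator": { "ai_tool": [[{ "node": "AI Agent" }]] }
  }
}
```

The key point is that the model and tools are not upstream nodes in the main data flow; they attach to the agent through dedicated AI connection types.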
Add Tools (Optional)
Connect tool nodes to give your agent capabilities. Available tools:
- Calculator Tool
- HTTP Request Tool
- Wikipedia Tool
- Workflow Tool (call other n8n workflows)
- Code Tool
- Vector Store Tool
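From the agent's perspective, every tool in the list above reduces to the same shape: a name, a description the model reads when deciding what to call, and a callable. The sketch below is illustrative only; in n8n you connect tool nodes rather than writing this yourself.

```javascript
// Illustrative tool shape: name + description + callable.
const httpRequestTool = {
  name: "http_request",
  description: "Fetch a URL and return the response body as text.",
  call: (url) => `GET ${url}`, // a real implementation would use fetch()
};

const calculatorTool = {
  name: "calculator",
  description: "Evaluate a basic arithmetic expression, e.g. '2 + 2'.",
  call: (expr) => String(Function(`"use strict"; return (${expr})`)()),
};

// The agent sees only names and descriptions when choosing a tool,
// which is why clear, specific descriptions matter so much.
const toolIndex = Object.fromEntries(
  [httpRequestTool, calculatorTool].map((t) => [t.name, t])
);
```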
Agent Architecture
From the source code (packages/@n8n/nodes-langchain/nodes/agents/Agent/V3/AgentV3.node.ts:25), the agent uses a “Tools Agent” architecture.
Agent Tools
Tools extend what your agent can do. Here are the built-in tools:

Calculator Tool

Perform mathematical calculations.

HTTP Request Tool

Make API calls to external services.

Workflow Tool
The most powerful tool: it calls other n8n workflows (see packages/@n8n/nodes-langchain/nodes/tools/ToolWorkflow/ToolWorkflow.node.ts:10).
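A Workflow Tool configuration might look roughly like the following. Field names here are illustrative (I am assuming `name`, `description`, `source`, and `workflowId` style parameters) and may differ by node version:

```json
{
  "name": "lookup_customer",
  "description": "Look up a customer record by email address. Input: the email address.",
  "source": "database",
  "workflowId": "42"
}
```

As with any tool, the `description` is what the model reads when deciding whether to call it, so state the expected input explicitly.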
Vector Store Tool
Query vector databases for semantic search.

Code Tool

Execute JavaScript or Python code.

Wikipedia Tool

Search Wikipedia.

SerpAPI Tool

Search Google via SerpAPI.

Wolfram Alpha Tool

Query Wolfram Alpha for computational knowledge.

Memory for Agents
Memory allows agents to maintain context across multiple interactions. Choose the right memory type for your use case:

Simple Memory (Development)
Node: memoryBufferWindow
Best for: Development and testing
Simple Memory stores data in n8n’s memory. Not suitable for production environments with Queue Mode or multi-main setups.
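The buffer-window idea behind Simple Memory can be sketched in a few lines: keep only the most recent k exchanges in the context sent to the model. This is illustrative only; the node manages this for you.

```javascript
// Sliding-window chat memory: only the last `windowSize` exchanges
// (user/assistant pairs) are kept in the model's context.
class WindowBufferMemory {
  constructor(windowSize = 3) {
    this.windowSize = windowSize;
    this.messages = [];
  }
  add(role, content) {
    this.messages.push({ role, content });
  }
  // Context sent to the model: the most recent windowSize * 2 messages.
  context() {
    return this.messages.slice(-this.windowSize * 2);
  }
}

const memory = new WindowBufferMemory(2);
["hi", "hello!", "weather?", "sunny", "thanks", "welcome"].forEach((text, i) =>
  memory.add(i % 2 === 0 ? "user" : "assistant", text)
);
// Only the last two exchanges remain in context; "hi"/"hello!" have aged out.
```

A larger window gives the agent more context but costs more tokens per call, which is the trade-off noted under Memory Management below.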
Redis Memory (Production)
Node: memoryRedisChat
Best for: Production with distributed workers
Postgres Memory
Node: memoryPostgresChat
Best for: Persistent, queryable conversation history
MongoDB Memory
Node: memoryMongoDbChat
Best for: Document-based storage with flexibility
Zep Memory
Node: memoryZep
Best for: Advanced features like automatic summarization and fact extraction
Output Parsing
Structure agent responses into reliable formats:

Structured Output Parser
Define custom JSON schemas for the agent's final answer.

Auto-fixing Output Parser
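A minimal sketch of the auto-fixing idea: try to parse the model's output, and on failure apply cheap repairs (strip code fences, drop trailing commas) before retrying. Real auto-fixing parsers can also re-ask the model to correct itself; this shows only the simple repair path, not n8n's actual implementation.

```javascript
// Try progressively more aggressive repairs until JSON.parse succeeds.
function parseWithFixes(raw) {
  const attempts = [
    (s) => s,                                           // as-is
    (s) => s.replace(/^```(?:json)?\s*|\s*```$/g, ""),  // strip markdown fences
    (s) => s.replace(/,\s*([}\]])/g, "$1"),             // drop trailing commas
  ];
  let text = raw.trim();
  for (const fix of attempts) {
    text = fix(text);
    try {
      return JSON.parse(text);
    } catch {
      // fall through to the next repair
    }
  }
  throw new Error("Could not repair model output into valid JSON");
}

const parsed = parseWithFixes('```json\n{"city": "Berlin", "temp": 21,}\n```');
```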
Automatically fix malformed JSON responses.

Item List Output Parser
Extract lists from responses.

Advanced Agent Patterns
Multi-Step Reasoning Agent
RAG Agent with Vector Search
Research Agent
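These patterns differ mainly in which tools they wire in, but share a retrieve-then-reason shape. As a toy sketch of the RAG pattern, with retrieval and the model stubbed out (in n8n, the Vector Store Tool and a chat model node fill these roles):

```javascript
// Toy RAG step: retrieve relevant documents, then answer grounded in them.
const documents = [
  { id: 1, text: "n8n agents are built on LangChain." },
  { id: 2, text: "Vector stores enable semantic search." },
];

// Toy retriever: keyword overlap instead of real embeddings.
function retrieve(query, k = 1) {
  const words = query.toLowerCase().split(/\W+/);
  return documents
    .map((d) => ({
      d,
      score: words.filter((w) => w && d.text.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.d);
}

function ragAnswer(query) {
  const context = retrieve(query).map((d) => d.text).join("\n");
  // A real model call would go here; we just echo the grounded context.
  return `Based on: ${context}`;
}

const out = ragAnswer("What are n8n agents built on?");
```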
Best Practices
Prompt Engineering
- Be specific about the agent’s role and capabilities
- Provide examples of desired behavior
- Set clear boundaries and constraints
- Define success criteria
Tool Design
- Keep tools focused on a single responsibility
- Provide comprehensive tool descriptions
- Handle errors gracefully
- Return structured data when possible
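The tool-design advice above, sketched as one function: a single responsibility, errors returned as data the agent can reason about, and structured output. The function and its fields are hypothetical examples, not an n8n API.

```javascript
// One focused tool: look up an order's status. Errors come back as data,
// so the agent can recover or re-ask instead of crashing the run.
function getOrderStatus(orderId) {
  const orders = { "A-100": "shipped" }; // stand-in for a real lookup
  try {
    if (!/^A-\d+$/.test(orderId)) {
      return { ok: false, error: `Invalid order id: ${orderId}` };
    }
    const status = orders[orderId];
    return status
      ? { ok: true, orderId, status }
      : { ok: false, error: `Order ${orderId} not found` };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}
```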
Memory Management
- Choose appropriate context window length
- Use session IDs to separate conversations
- Clean up old sessions periodically
- Consider costs of large context windows
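Session separation and cleanup from the list above can be sketched as a session-keyed store with a time-to-live. Illustrative only; the Redis and Postgres memory nodes handle persistence for you.

```javascript
// One conversation history per session id; stale sessions are dropped.
class SessionStore {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.sessions = new Map(); // sessionId -> { messages, lastSeen }
  }
  append(sessionId, message, now = Date.now()) {
    const s = this.sessions.get(sessionId) ?? { messages: [], lastSeen: now };
    s.messages.push(message);
    s.lastSeen = now;
    this.sessions.set(sessionId, s);
  }
  cleanup(now = Date.now()) {
    for (const [id, s] of this.sessions) {
      if (now - s.lastSeen > this.ttlMs) this.sessions.delete(id);
    }
  }
}

const store = new SessionStore(60_000); // 60-second TTL
store.append("alice", "hi", 0);
store.append("bob", "hello", 50_000);
store.cleanup(100_000); // alice is stale (last seen 100s ago), bob is not
```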
Performance
- Use faster models (GPT-3.5) for tool selection
- Limit the number of available tools
- Implement tool result caching when possible
- Monitor token usage and costs
Error Handling
- Enable “Continue on Fail” in production
- Implement fallback language models
- Add error handling in custom tools
- Log agent reasoning for debugging
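The fallback-model advice above boils down to: call the primary model, and if it throws, retry with a backup. Both "models" below are stubs; in n8n you configure the fallback on the agent node rather than writing this yourself.

```javascript
// Try the primary model; on failure, fall back to the backup.
function withFallback(primary, fallback, prompt) {
  try {
    return primary(prompt);
  } catch {
    // In production, log here so you can track how often the fallback fires.
    return fallback(prompt);
  }
}

const flakyPrimary = () => { throw new Error("rate limited"); };
const backup = (prompt) => `fallback answer to: ${prompt}`;

const reply = withFallback(flakyPrimary, backup, "hi");
```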
OpenAI Assistant Node
For OpenAI-specific features, use the OpenAI Assistant node, which supports:

- Code Interpreter
- File Search
- Function Calling
- Persistent threads
Debugging Agents
Enable Streaming
Watch the agent’s thought process in real-time.

Check Intermediate Steps

The agent response includes metadata about tool usage.

Common Issues
Agent Loops Forever
- Check tool descriptions are clear
- Ensure tools return useful data
- Set max iterations limit
- Verify prompt instructs agent when to stop
Agent Doesn’t Use Tools
- Make tool descriptions more specific
- Adjust temperature (try 0.7-0.9)
- Check tool is actually connected
- Try a more capable model (GPT-4)
High Token Usage
- Reduce context window length
- Limit number of tools
- Use smaller embedding models
- Implement result summarization
Next Steps
LangChain Nodes
Explore all available LangChain nodes
Vector Stores
Add knowledge retrieval to your agents
Example Workflows
Browse agent workflow templates
Advanced Tutorial
Complete agent building tutorial