AI Agent Workflow Example
This example demonstrates how to build a production-ready AI Agent workflow with LangChain integration, including language models, tools, and conversation memory.

Workflow Overview
The AI agent workflow includes:
- Chat Trigger - Receives user messages via webhook
- AI Agent - Orchestrates conversation and tool usage
- Language Model - Powers the AI’s reasoning (OpenAI GPT-4)
- HTTP Request Tool - Allows AI to fetch external data
- Code Tool - Enables custom logic execution
- Memory - Maintains conversation history
Use Cases
- Customer support chatbots
- Data analysis assistants
- Research and information retrieval
- Multi-step task automation
- Intelligent API orchestration
Complete Workflow JSON
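The full JSON export is not reproduced here; the sketch below shows the overall shape such a workflow takes. The node type strings follow n8n's LangChain package naming, but the node names, versions, positions, and parameters are illustrative, not a verbatim export:

```json
{
  "name": "AI Agent Example",
  "nodes": [
    { "name": "Chat Trigger", "type": "@n8n/n8n-nodes-langchain.chatTrigger", "position": [0, 0], "parameters": {} },
    { "name": "AI Agent", "type": "@n8n/n8n-nodes-langchain.agent", "position": [220, 0], "parameters": {} },
    { "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "position": [220, 200], "parameters": {} }
  ],
  "connections": {
    "Chat Trigger": { "main": [[{ "node": "AI Agent", "type": "main", "index": 0 }]] },
    "OpenAI Chat Model": { "ai_languageModel": [[{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]] }
  }
}
```

The tool and memory nodes (covered below) attach to the Agent the same way, each through its own connection type.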
Step-by-Step Breakdown
Chat Trigger Setup
The Chat Trigger node receives user messages via webhook and initializes the conversation.

Key Parameters:
- responseMode: lastNode - Returns the agent's response
- publicChatKey - Unique identifier for the chat interface
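With those two parameters, the trigger node's parameters block would look roughly like this (the publicChatKey value is a placeholder, and the exact field layout may vary by node version):

```json
{
  "parameters": {
    "responseMode": "lastNode",
    "publicChatKey": "example-chat-key"
  }
}
```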
Language Model Configuration
The OpenAI Chat Model provides the AI reasoning capabilities.

Settings:
- Model: GPT-4 Turbo for advanced reasoning
- Temperature: 0.7 for balanced creativity
- Max tokens: 2000 for detailed responses
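A sketch of the model node's parameters with those settings (the exact nesting of the options object is an assumption and may differ across node versions):

```json
{
  "parameters": {
    "model": "gpt-4-turbo",
    "options": {
      "temperature": 0.7,
      "maxTokens": 2000
    }
  }
}
```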
Tool Setup - Weather API
The HTTP Request Tool enables the AI to fetch real-time weather data.

Critical Elements:
- toolDescription - Tells the AI when to use this tool (15+ characters)
- placeholderDefinitions - Defines tool parameters
- Query parameters with n8n expressions
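Putting those elements together, the tool's parameters might be sketched as follows (the API URL is hypothetical, and the placeholder definition layout is illustrative):

```json
{
  "parameters": {
    "toolDescription": "Fetches current weather for a given city. Use when the user asks about weather conditions.",
    "url": "https://api.example.com/weather?city={city}",
    "placeholderDefinitions": {
      "values": [
        { "name": "city", "description": "City name, e.g. London" }
      ]
    }
  }
}
```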
Tool Setup - Calculator
The Code Tool allows the AI to perform mathematical operations.

Features:
- Multiple operations (add, subtract, multiply, etc.)
- Input schema for type validation
- Error handling for edge cases
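The features above can be sketched as a plain-JavaScript calculator function. This is illustrative logic only, not the verbatim tool body; it assumes the tool's input schema delivers an object of the shape { operation, value1, value2 }:

```javascript
// Illustrative calculator for a Code Tool.
// Assumes the input schema validates { operation, value1, value2 }.
function calculate({ operation, value1, value2 }) {
  switch (operation) {
    case "add":
      return value1 + value2;
    case "subtract":
      return value1 - value2;
    case "multiply":
      return value1 * value2;
    case "divide":
      // Edge case: fail loudly instead of returning Infinity.
      if (value2 === 0) throw new Error("Cannot divide by zero");
      return value1 / value2;
    case "percentage":
      // value1 percent of value2, e.g. 15% of 200 -> 30
      return (value1 / 100) * value2;
    default:
      throw new Error(`Unknown operation: ${operation}`);
  }
}
```

Throwing on unknown operations and division by zero gives the agent an error message it can relay, rather than a silent NaN or Infinity.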
Conversation Memory
Window Buffer Memory maintains context across messages.

Configuration:
- 10 message history window
- Session-based tracking
- Automatic context management
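With that configuration, the memory node's parameters would be along these lines (the field names are an assumption based on the Window Buffer Memory node; verify them against your n8n version):

```json
{
  "parameters": {
    "sessionIdType": "fromInput",
    "contextWindowLength": 10
  }
}
```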
Connection Types Explained
Important: AI connections use special port types that differ from standard n8n connections:
- ai_languageModel - Connects the language model to the agent
- ai_tool - Connects tools to the agent
- ai_memory - Connects memory to the agent
- main - Standard data flow (Chat Trigger → Agent)
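In workflow JSON, these port types appear as the keys of each source node's connections entry. A sketch using the node roles from this example (node names are illustrative):

```json
{
  "connections": {
    "Chat Trigger": {
      "main": [[{ "node": "AI Agent", "type": "main", "index": 0 }]]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [[{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]]
    },
    "Weather API Tool": {
      "ai_tool": [[{ "node": "AI Agent", "type": "ai_tool", "index": 0 }]]
    },
    "Window Buffer Memory": {
      "ai_memory": [[{ "node": "AI Agent", "type": "ai_memory", "index": 0 }]]
    }
  }
}
```

Note that the connection is declared on the source node (model, tool, memory) pointing at the agent, not the other way around.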
Example Conversation
User: “What’s the weather in London and calculate 15% of 200?”

AI Agent Process:
- Analyzes the request (two separate tasks)
- Calls Weather API Tool with city=“London”
- Calls Calculate Tool with operation=“percentage”, value1=15, value2=200
- Synthesizes results into coherent response
Advanced Configuration
Add Streaming Responses
For real-time response streaming:

Critical: When using streaming mode, the AI Agent must NOT have any outgoing main connections. Responses stream automatically through the Chat Trigger.
Add Fallback Language Model
For production reliability, connect a second language model to the Agent at targetIndex: 1 and set needsFallback: true on the Agent.
Add Vector Store for RAG
Enable knowledge base retrieval:

Validation Checklist
Before Deployment
Run n8n_validate_workflow({id: "workflow_id"}) to check:
- Language model is connected
- All tools have descriptions (15+ characters)
- Tool parameters are properly configured
- Memory settings are valid
- No connection errors
Test Scenarios
Test the agent with:
- Single tool usage (“What’s the weather in Paris?”)
- Multiple tools (“Weather in Tokyo and calculate 25% of 400”)
- Conversation memory (“What city did I just ask about?”)
- Edge cases (invalid cities, division by zero)
Common Issues and Solutions
“AI Agent has no language model”
Cause: Language model not connected or wrong connection type.
Solution: Use sourceOutput: "ai_languageModel" when connecting.
“Tool has no description”
Cause: Missing or too-short tool description.
Solution: Add a clear description (15+ characters) explaining when to use the tool.

“Streaming mode not working”
Cause: The AI Agent has outgoing main connections.
Solution: Remove all main output connections from the Agent node.

Next Steps
- Explore Multi-Node Workflows with complex routing
- Learn Error Handling for production resilience
- Read the AI Agents Guide for advanced patterns