As your automation needs grow, n8n can scale horizontally to handle thousands of concurrent workflow executions. This guide covers queue mode architecture, worker configuration, and scaling strategies.
Understanding Queue Mode
Queue mode transforms n8n from a single-process application into a distributed system where workflow executions are processed by dedicated worker processes.
Architecture Components
Main Process
Serves web UI and API
Manages workflow definitions
Handles user authentication
Enqueues executions to Redis
Does NOT execute workflows
Worker Processes
Pull execution jobs from Redis queue
Execute workflow nodes
Save results to database
Can be scaled independently
Run on separate servers/containers
Redis (Message Broker)
Manages job queue (Bull)
Coordinates between main and workers
Handles job priorities and retries
Stores temporary execution state
PostgreSQL (Database)
Stores workflow definitions
Stores execution results
Manages credentials (encrypted)
Shared by all components
Required for queue mode
Execution Flow
1. A trigger fires (webhook, schedule, or manual run) and the main process creates an execution record.
2. The main process enqueues a job in Redis.
3. An available worker picks up the job, loads the workflow, and executes it.
4. The worker saves the result to PostgreSQL.
5. The UI reads execution status from the database.
Prerequisites
PostgreSQL Database
Queue mode requires PostgreSQL. SQLite is not supported.
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres.example.com
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n_user
DB_POSTGRESDB_PASSWORD=secure-password
Redis Server
Redis 6.0 or higher is recommended.
QUEUE_BULL_REDIS_HOST=redis.example.com
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=redis-password # if using auth
Shared Encryption Key
Critical: All processes must use the same encryption key.
N8N_ENCRYPTION_KEY=same-key-for-all-processes
Common Mistakes:
Using different encryption keys across processes
Not persisting /home/node/.n8n on main process
Running workers without database access
Forgetting to configure Redis authentication
Basic Queue Mode Setup
Docker Compose Configuration
version: '3.8'

volumes:
  n8n_data:
  postgres_data:
  redis_data:

services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n']
      interval: 5s
      timeout: 5s
      retries: 10

  redis:
    image: redis:6.2.14-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    healthcheck:
      # redis-cli must authenticate once requirepass is set
      test: ['CMD-SHELL', 'redis-cli -a "${REDIS_PASSWORD}" ping | grep PONG']
      interval: 5s
      timeout: 3s
      retries: 5

  n8n:
    image: ghcr.io/n8n-io/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # Database
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      # Redis Queue
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
      # n8n Config
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=${N8N_HOST}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      # Monitoring
      - N8N_METRICS=true
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  n8n-worker:
    image: ghcr.io/n8n-io/n8n:latest
    restart: unless-stopped
    command: worker
    environment:
      # Database
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      # Redis Queue
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PASSWORD=${REDIS_PASSWORD}
      # Worker Config
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - N8N_CONCURRENCY_PRODUCTION_LIMIT=10
      # n8n Config
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      replicas: 3 # Start with 3 workers
Starting in Queue Mode
# Start all services
docker compose up -d
# Scale workers dynamically
docker compose up -d --scale n8n-worker=5
# View logs
docker compose logs -f n8n-worker
Worker Configuration
Concurrency Settings
N8N_CONCURRENCY_PRODUCTION_LIMIT
Maximum concurrent executions per worker; -1 means unlimited.
N8N_CONCURRENCY_PRODUCTION_LIMIT=10
Recommendation : Start with 5-10 per worker, adjust based on:
Available CPU cores (1-2 executions per core)
Memory per worker (500MB-1GB per execution)
Execution complexity and duration
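The rules of thumb above can be combined into a back-of-envelope sizing calculation. The helper below is a hypothetical sketch, not an official tool; the per-core and per-execution figures are the heuristics from this section, not measured values.

```python
def suggested_concurrency(cpu_cores: int, memory_mb: int,
                          execs_per_core: int = 2, mb_per_exec: int = 512) -> int:
    """Return the lower of the CPU-based and memory-based limits, at least 1."""
    by_cpu = cpu_cores * execs_per_core          # 1-2 executions per core
    by_memory = memory_mb // mb_per_exec         # 500MB-1GB per execution
    return max(1, min(by_cpu, by_memory))

print(suggested_concurrency(4, 4096))  # 4 cores, 4 GB RAM -> 8
```

Take the smaller of the two limits so that neither CPU nor memory becomes the first bottleneck, then validate against real execution metrics.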
Worker Resources
Light Workloads
Simple workflows with minimal data processing.
deploy:
  resources:
    limits:
      cpus: '1.0'
      memory: 1G
    reservations:
      cpus: '0.5'
      memory: 512M
environment:
  - N8N_CONCURRENCY_PRODUCTION_LIMIT=5

Medium Workloads
Standard workflows with API calls and data transformations.
deploy:
  resources:
    limits:
      cpus: '2.0'
      memory: 2G
    reservations:
      cpus: '1.0'
      memory: 1G
environment:
  - N8N_CONCURRENCY_PRODUCTION_LIMIT=10

Heavy Workloads
Complex workflows with large data processing.
deploy:
  resources:
    limits:
      cpus: '4.0'
      memory: 4G
    reservations:
      cpus: '2.0'
      memory: 2G
environment:
  - N8N_CONCURRENCY_PRODUCTION_LIMIT=15
Worker Lock Settings
QUEUE_WORKER_LOCK_DURATION
How long (ms) a worker holds a job lease.
QUEUE_WORKER_LOCK_DURATION=120000 # 2 minutes
QUEUE_WORKER_LOCK_RENEW_TIME
How often (ms) to renew the job lease.
QUEUE_WORKER_LOCK_RENEW_TIME=15000 # 15 seconds
QUEUE_WORKER_STALLED_INTERVAL
How often (ms) to check for stalled jobs.
QUEUE_WORKER_STALLED_INTERVAL=60000 # 1 minute
Keep the renew interval well below the lock duration; otherwise long-running jobs can be flagged as stalled and re-queued while still executing.
Advanced Scaling
Queue Mode with Task Runners
Combine queue mode with external task runners for maximum isolation:
services:
  n8n-worker:
    image: ghcr.io/n8n-io/n8n:latest
    command: worker
    environment:
      # ... database and queue config ...
      # Task Runner Config
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=${RUNNER_TOKEN}
      - N8N_RUNNERS_MAX_CONCURRENCY=20
    deploy:
      replicas: 3

  # One or more runners per worker
  n8n-worker-runners:
    image: ghcr.io/n8n-io/runners:latest
    environment:
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=${RUNNER_TOKEN}
    depends_on:
      - n8n-worker
    deploy:
      replicas: 6 # 2 runners per worker
Multi-Main Setup (Enterprise)
Run multiple main processes for high availability:
services:
  nginx:
    image: nginx:latest
    ports:
      - "5678:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - n8n-main-1
      - n8n-main-2

  n8n-main-1:
    image: ghcr.io/n8n-io/n8n:latest
    environment:
      - EXECUTIONS_MODE=queue
      - N8N_MULTI_MAIN_SETUP_ENABLED=true
      - N8N_MULTI_MAIN_SETUP_KEY_TTL=10
      # ... other config ...
    volumes:
      - n8n_data_1:/home/node/.n8n

  n8n-main-2:
    image: ghcr.io/n8n-io/n8n:latest
    environment:
      - EXECUTIONS_MODE=queue
      - N8N_MULTI_MAIN_SETUP_ENABLED=true
      - N8N_MULTI_MAIN_SETUP_KEY_TTL=10
      # ... other config ...
    volumes:
      - n8n_data_2:/home/node/.n8n

  n8n-worker:
    # ... workers config ...
    deploy:
      replicas: 5
# nginx.conf (mounted at /etc/nginx/nginx.conf, so it needs the
# top-level events and http blocks)
events {}

http {
    upstream n8n_backend {
        least_conn;
        server n8n-main-1:5678;
        server n8n-main-2:5678;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://n8n_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
Redis Configuration
Redis Cluster
For high availability Redis:
QUEUE_BULL_REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379
QUEUE_BULL_REDIS_PASSWORD=cluster-password
Redis with TLS
QUEUE_BULL_REDIS_TLS=true
QUEUE_BULL_REDIS_HOST=redis.secure.internal
# For AWS ElastiCache
QUEUE_BULL_REDIS_DNS_LOOKUP_STRATEGY=NONE
# Keep-alive for stable connections
QUEUE_BULL_REDIS_KEEP_ALIVE=true
QUEUE_BULL_REDIS_KEEP_ALIVE_DELAY=5000
QUEUE_BULL_REDIS_KEEP_ALIVE_INTERVAL=5000
# Timeout thresholds
QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD=30000
# Reconnect on failover
QUEUE_BULL_REDIS_RECONNECT_ON_FAILOVER=true
# redis.conf settings
maxmemory 2gb
# Bull requires that queue keys are never evicted; eviction policies
# such as allkeys-lru can silently drop job data
maxmemory-policy noeviction
# Persistence
appendonly yes
appendfsync everysec
Monitoring and Observability
Prometheus Metrics
Enable metrics on all processes:
N8N_METRICS=true
N8N_METRICS_INCLUDE_DEFAULT_METRICS=true
N8N_METRICS_INCLUDE_QUEUE_METRICS=true
N8N_METRICS_QUEUE_METRICS_INTERVAL=20
Queue metrics are not supported in multi-main setup.
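With metrics enabled, n8n serves Prometheus metrics over HTTP on the main instance. A minimal scrape job might look like the following; the target assumes the Docker Compose service name and port used in this guide, so adjust both to your environment:

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: n8n
    metrics_path: /metrics
    static_configs:
      - targets: ['n8n:5678']
```

If Prometheus runs outside the Compose network, point the target at the published host and port instead.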
Key Metrics to Monitor
n8n_queue_jobs_waiting - Jobs waiting to be processed
n8n_queue_jobs_active - Currently executing jobs
n8n_queue_jobs_completed - Successfully completed jobs
n8n_queue_jobs_failed - Failed jobs
n8n_queue_jobs_delayed - Scheduled for future execution
process_cpu_user_seconds_total - CPU usage
process_resident_memory_bytes - Memory usage
n8n_workflow_executions_total - Execution count
n8n_workflow_execution_duration_seconds - Execution duration
For the database, also watch:
Connection pool utilization
Query execution time
Active connections
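As one illustration, the queue-depth metric can drive a Prometheus alerting rule that mirrors the "consistently over 100 waiting jobs" scaling signal used later in this guide. The metric name and threshold are assumptions; match them to what your n8n version actually exports:

```yaml
# alert-rules.yml (fragment)
groups:
  - name: n8n-queue
    rules:
      - alert: N8nQueueBacklog
        expr: n8n_queue_jobs_waiting > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "n8n queue backlog above 100 for 5 minutes; consider adding workers"
```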
Health Checks
Enable worker health endpoints:
services:
  n8n-worker:
    environment:
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - QUEUE_HEALTH_CHECK_PORT=5678
    healthcheck:
      test: ['CMD-SHELL', 'wget --spider -q http://localhost:5678/healthz || exit 1']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
Scaling Strategies
When to Scale
Monitor Queue Depth
# Check waiting jobs in Redis (Bull stores them in a "wait" list; key
# names vary by version, confirm with: redis-cli --scan --pattern 'bull:*')
redis-cli LLEN bull:jobs:wait
Action: If consistently > 100, add workers.
Check Worker CPU
# Monitor worker resource usage
docker stats n8n-worker
Action : If CPU > 80%, add more workers or reduce concurrency.
Measure Execution Latency
Track time from trigger to execution start. Action : If latency > 5 seconds, scale workers.
Review Failed Jobs
# Check failed job count (Bull keeps failed jobs in a sorted set, so use ZCARD;
# confirm the exact key with: redis-cli --scan --pattern 'bull:*')
redis-cli ZCARD bull:jobs:failed
Action: Investigate failures; they may indicate resource constraints.
Workers Needed = (Peak Executions per Minute) / (Worker Concurrency × 60)
This formula implicitly assumes executions averaging about one second; for longer workflows, multiply by the average execution duration in seconds.
Example:
- Peak: 300 executions/minute
- Worker concurrency: 10
- Workers: 300 / (10 × 60) = 0.5 workers minimum
Add a 50% buffer: 1 worker minimum, scale to 2-3 for resilience.
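The estimate above can be sketched as a small helper that also accounts for average execution duration. This is a hypothetical back-of-envelope function, not an official n8n sizing tool:

```python
import math

def workers_needed(peak_per_minute: int, concurrency: int,
                   avg_duration_s: float = 1.0, buffer: float = 1.5) -> int:
    """Estimate worker count with a 50% headroom buffer by default."""
    # Jobs one worker can finish per minute at the given concurrency
    per_worker_per_minute = concurrency * 60 / avg_duration_s
    return max(1, math.ceil(buffer * peak_per_minute / per_worker_per_minute))

print(workers_needed(300, 10))                     # doc example -> 1
print(workers_needed(300, 10, avg_duration_s=30))  # 30s executions -> 23
```

Note how strongly duration dominates: the same 300 executions/minute needs an order of magnitude more workers once executions take tens of seconds.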
Vertical vs Horizontal Scaling
Horizontal Scaling: add more workers.
✅ Pros:
Better fault tolerance
Easier to scale incrementally
Can distribute across servers
No single point of failure
❌ Cons:
More complex infrastructure
Requires load balancing (multi-main)
Higher operational overhead
Vertical Scaling: increase worker resources.
✅ Pros:
Simpler configuration
Less coordination needed
Lower network overhead
❌ Cons:
Limited by hardware
Single point of failure
More expensive at scale
Harder to utilize efficiently
Database Optimization
# Match pool size to expected concurrent queries
DB_POSTGRESDB_POOL_SIZE=20
# Adjust timeouts
DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
DB_POSTGRESDB_STATEMENT_TIMEOUT=60000
# Enable automatic cleanup
EXECUTIONS_DATA_PRUNE=true
# Keep last 7 days
EXECUTIONS_DATA_MAX_AGE=168
# Or limit by count
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
# Hard delete after 24 hours
EXECUTIONS_DATA_HARD_DELETE_BUFFER=24
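When sizing the pool, remember that every n8n process keeps its own connections, so the total grows with process count. A quick sanity check, assuming each main and worker process opens up to DB_POSTGRESDB_POOL_SIZE connections (values taken from the examples in this guide):

```python
def total_db_connections(pool_size: int, mains: int, workers: int) -> int:
    """Upper bound on Postgres connections opened by all n8n processes."""
    return pool_size * (mains + workers)

total = total_db_connections(20, 1, 3)  # pool of 20, 1 main + 3 workers
print(total, total <= 100)              # must stay under max_connections
```

If the total approaches Postgres max_connections, lower the pool size, raise max_connections, or put a pooler such as PgBouncer in front of the database.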
# postgresql.conf
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 512MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 10MB
Execution Optimization
# Global execution timeout
EXECUTIONS_TIMEOUT=300 # 5 minutes
# Maximum allowed timeout
EXECUTIONS_TIMEOUT_MAX=3600 # 1 hour
# Task runner timeout
N8N_RUNNERS_TASK_TIMEOUT=600 # 10 minutes
# Task runner max memory
N8N_RUNNERS_MAX_OLD_SPACE_SIZE=2048 # 2GB
# Node.js memory (for workers)
NODE_OPTIONS=--max-old-space-size=2048
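In a Compose deployment, NODE_OPTIONS belongs in the worker service's environment. A fragment, assuming the service name used earlier in this guide:

```yaml
# docker-compose.yml (fragment)
n8n-worker:
  environment:
    - NODE_OPTIONS=--max-old-space-size=2048
```

Keep the Node.js heap limit below the container memory limit so the process is throttled by V8 garbage collection rather than killed by the OOM killer.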
Troubleshooting
Workers Not Picking Up Jobs
Symptoms: Jobs waiting but workers idle.
Causes:
Workers can’t connect to Redis
Different encryption keys
Workers crashed
Solutions:
# Check worker logs
docker compose logs -f n8n-worker
# Verify Redis connection
docker compose exec n8n-worker nc -zv redis 6379
# Check encryption key
docker compose exec n8n env | grep ENCRYPTION_KEY
docker compose exec n8n-worker env | grep ENCRYPTION_KEY
# Restart workers
docker compose restart n8n-worker
Redis Memory Pressure
Symptoms: Redis running out of memory.
Causes:
Too many failed jobs accumulating
Large payloads in jobs
No maxmemory limit set
Solutions:
# Clear failed jobs (careful! key names vary by version;
# confirm with: redis-cli --scan --pattern 'bull:*')
redis-cli DEL bull:jobs:failed
# Cap memory; keep noeviction so queue keys are never silently dropped
redis-cli CONFIG SET maxmemory-policy noeviction
redis-cli CONFIG SET maxmemory 2gb
# Enable AOF persistence
redis-cli CONFIG SET appendonly yes
Scaling Doesn’t Increase Throughput
Symptoms: Adding workers doesn’t increase throughput.
Causes:
Database bottleneck
Redis bottleneck
Network limitations
CPU constraints
Solutions:
Monitor database query times
Check Redis CPU usage
Profile slow workflows
Increase database connections
Consider database read replicas
Best Practices
Start Small: Begin with 2-3 workers and scale based on metrics, not guesses.
Monitor Everything: Track queue depth, worker CPU/memory, database performance, and execution latency.
Use Health Checks: Enable health checks on all components for automatic recovery.
Plan for Failures: Design workflows to be idempotent and handle retries gracefully.
Prune Execution Data: Regularly clean old execution data to maintain database performance.
Secure Redis: Always use authentication and TLS for Redis in production.
Next Steps
Configuration Reference: complete list of environment variables
Docker Deployment: Docker Compose examples
Self-Hosting Overview: understanding deployment options