Overview

Dokploy’s multi-node deployment capability allows you to distribute your applications and databases across multiple servers, providing high availability, load balancing, and horizontal scalability. This is achieved through Docker Swarm integration.
Multi-node deployment requires Docker Swarm to be configured. If you haven’t set up Docker Swarm yet, see the Docker Swarm guide.

Benefits of Multi-Node Deployment

High Availability

Automatic failover if a node goes down, minimizing downtime

Load Balancing

Distribute traffic across multiple nodes automatically

Horizontal Scaling

Scale by adding more servers instead of upgrading hardware

Resource Optimization

Utilize resources across multiple machines efficiently

Architecture

In a multi-node Dokploy setup:
  1. Manager Node: The primary Dokploy instance that orchestrates deployments
  2. Worker Nodes: Additional servers that run your applications and services
  3. Overlay Network: Secure network that connects all nodes
  4. Service Discovery: Automatic DNS-based service discovery

Adding a Server to Dokploy

Step 1: Prepare the Remote Server

Ensure the remote server meets the requirements:
  • Ubuntu 20.04+ or similar Linux distribution
  • Docker installed (Dokploy can install it for you)
  • SSH access with key-based authentication
  • Port 2377/TCP open for Swarm cluster management
  • Port 7946/TCP and UDP open for node-to-node communication
  • Port 4789/UDP open for overlay network (VXLAN) traffic
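If the server uses ufw, the required ports can be opened with something like the following (a sketch; adjust to whatever firewall you actually run):

```shell
# Open the Swarm ports with ufw (assumes ufw is the active firewall)
sudo ufw allow 22/tcp     # SSH
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946       # node communication (TCP and UDP)
sudo ufw allow 4789/udp   # overlay network (VXLAN)
sudo ufw reload
```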
Step 2: Add Server in Dokploy Dashboard

Navigate to Settings → Servers → Add Server and provide the following information:
  • name (string, required): A friendly name for the server (e.g., “worker-1”)
  • ipAddress (string, required): The IP address or hostname of the remote server
  • port (number, default: 22): The SSH port
  • username (string, required): The SSH username (e.g., “root” or “ubuntu”)
  • sshKeyId (string, required): The SSH key to use for authentication
Step 3: Test Connection

Click Test Connection to verify SSH connectivity before saving.
Step 4: Initialize the Server

After adding, Dokploy will:
  • Install Docker (if not present)
  • Configure Docker daemon
  • Join the Docker Swarm cluster
  • Set up monitoring agents
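
Once initialization completes, you can confirm from a manager node that the new server joined the swarm:

```shell
# List all nodes in the swarm; the newly added server should appear here
docker node ls
```

A healthy worker shows STATUS Ready and AVAILABILITY Active, with an empty MANAGER STATUS column.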

Using the API

You can also manage servers programmatically:
Create Server
curl -X POST https://your-dokploy-instance.com/api/server.create \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "worker-1",
    "ipAddress": "192.168.1.100",
    "port": 22,
    "username": "root",
    "sshKeyId": "ssh-key-id"
  }'
List Servers
curl https://your-dokploy-instance.com/api/server.all \
  -H "Authorization: Bearer YOUR_API_KEY"

Deploying to Specific Servers

Application Placement

When creating or updating an application, you can specify which server(s) should run it:
  1. Go to your application settings
  2. Navigate to Advanced → Deployment
  3. Under Server, select the target server
  4. For Swarm mode, use Placement Constraints

Swarm Placement Constraints

For more granular control, use Docker Swarm placement constraints:
docker-compose.yml
services:
  web:
    image: myapp:latest
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.environment == production
          - node.labels.region == us-east

Service Replication

Run multiple replicas of your application across nodes:
services:
  api:
    image: myapi:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
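
The replica count can also be adjusted at runtime without editing the compose file. Assuming the deployed service is named `myapi` (the name is illustrative; check `docker service ls` for the actual name):

```shell
# Scale the service to 5 replicas across available nodes
docker service scale myapi=5

# Watch which nodes the replicas land on
docker service ps myapi
```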

Load Balancing

Docker Swarm provides built-in load balancing:
  • Ingress Load Balancing: Automatically distributes external requests across replicas
  • Internal Load Balancing: Service discovery with DNS round-robin
  • Traefik Integration: Dokploy uses Traefik for advanced routing and SSL termination
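
Dokploy manages Traefik routing for you, but for reference, Traefik in Swarm mode reads routing rules from service deploy labels. A sketch with an illustrative hostname, router name, and port:

```yaml
services:
  web:
    image: myapp:latest
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.web.rule=Host(`app.example.com`)
        - traefik.http.services.web.loadbalancer.server.port=3000
    networks:
      - dokploy-network

networks:
  dokploy-network:
    external: true
```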

Health Checks and Failover

Configure health checks to ensure automatic failover:
docker-compose.yml
services:
  app:
    image: myapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      replicas: 3
      update_config:
        failure_action: rollback
        monitor: 60s
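
During an update you can watch task health and any rollback from a manager node; assuming the service is named `app` (illustrative):

```shell
# Show each task's node, desired vs. current state, and recent errors
docker service ps --no-trunc app

# Inspect the update status of the service
docker service inspect --format '{{json .UpdateStatus}}' app
```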

Monitoring Multi-Node Deployments

Dokploy provides centralized monitoring across all nodes:
  • Node Metrics: CPU, memory, disk, and network per server
  • Service Metrics: Container-level metrics for each replica
  • Aggregate Views: Combined metrics across all nodes
Enable the monitoring service on all nodes for comprehensive visibility. See Monitoring.

Database Considerations

Databases should be deployed with careful planning in multi-node environments:
  • Use persistent volumes with backup strategies
  • Consider managed database services for production
  • Configure replication for high availability
  • Pin database containers to specific nodes with placement constraints

Example: PostgreSQL with Node Pinning

services:
  postgres:
    image: postgres:16
    deploy:
      placement:
        constraints:
          - node.labels.database == postgres-primary
      replicas: 1
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
    driver: local
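
The placement constraint above only matches once a node actually carries the corresponding label, which you apply from a manager node (the node name `worker-2` is illustrative):

```shell
# Pin the database to a specific node by labeling it
docker node update --label-add database=postgres-primary worker-2
```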

Networking Across Nodes

Dokploy creates an overlay network for multi-node communication:
networks:
  dokploy-network:
    driver: overlay
    attachable: true
    driver_opts:
      encrypted: "true"
  • Encrypted: All inter-node traffic is encrypted
  • Automatic Service Discovery: Services can communicate using service names
  • Isolated: Each project/stack has its own network namespace
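
To join this network from your own compose stack, reference it as an external network; a minimal sketch:

```yaml
services:
  app:
    image: myapp:latest
    networks:
      - dokploy-network

networks:
  dokploy-network:
    external: true
```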

Best Practices

For high availability of the control plane, run an odd number of manager nodes (typically 3 or 5). Raft consensus tolerates the loss of up to (N-1)/2 managers, so a 3-manager cluster survives one manager failure.
In production, dedicate nodes to either manager or worker roles. Avoid running application workloads on manager nodes.
Set CPU and memory limits to prevent a single service from consuming all node resources:
deploy:
  resources:
    limits:
      cpus: '2'
      memory: 2G
    reservations:
      cpus: '0.5'
      memory: 512M
Label nodes by environment, region, or capability:
docker node update --label-add environment=production worker-1
docker node update --label-add region=us-east worker-1
docker node update --label-add ssd=true worker-1
Configure update strategies to avoid downtime:
deploy:
  update_config:
    parallelism: 1
    delay: 10s
    failure_action: rollback
    monitor: 60s
Regularly check node status and set up alerts for node failures:
docker node ls
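
As a starting point for alerting, a small script can flag nodes that are not Ready (a sketch; wire the output into your own notification tooling):

```shell
#!/bin/sh
# Print unhealthy nodes and exit non-zero if any swarm node is down or drained
down=$(docker node ls --format '{{.Hostname}} {{.Status}} {{.Availability}}' \
  | awk '$2 != "Ready" || $3 != "Active"')
if [ -n "$down" ]; then
  echo "Unhealthy nodes detected:"
  echo "$down"
  exit 1
fi
```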

Troubleshooting

Node Connection Issues

  • Verify SSH key is correctly configured
  • Check firewall rules allow SSH (port 22)
  • Ensure the user has sudo/root privileges
  • Test SSH manually: ssh user@server-ip
  • Verify ports 2377, 7946, and 4789 are open
  • Check if Docker is running: systemctl status docker
  • Retrieve the worker join command: docker swarm join-token worker
  • Ensure network connectivity between nodes

Service Deployment Issues

  • Check placement constraints are satisfied
  • Verify node has required labels
  • Review service logs: docker service logs service-name
  • Check resource availability on the node
  • Ensure all replicas are healthy
  • Verify ingress network is properly configured
  • Check Traefik configuration and logs
  • Test with direct node IP to isolate the issue

Next Steps

Docker Swarm

Learn more about Docker Swarm configuration

Networking

Configure advanced networking options

Monitoring

Set up monitoring for your cluster

Backups

Configure backups for distributed deployments
