WireGuard Mesh Network

Uncloud creates a flat WireGuard overlay network that connects all machines and containers in your cluster. This mesh enables direct communication between any two containers regardless of physical location.

Why WireGuard?

WireGuard provides:
  • Secure encryption - All traffic between machines is encrypted
  • Low overhead - Minimal CPU and bandwidth usage
  • Simple configuration - No complex PKI or certificate management
  • NAT traversal - Works with machines behind firewalls
  • Kernel-level performance - Built into the Linux kernel for speed
Uncloud’s WireGuard implementation is inspired by Talos KubeSpan, which uses similar peer discovery and NAT traversal techniques.

Address Allocation

Uncloud uses the 10.210.0.0/16 private address space for the entire mesh network.

Subnet Structure

10.210.0.0/16          Entire mesh network
├── 10.210.0.0/24      Machine 0 subnet
│   ├── 10.210.0.1     Machine 0 (WireGuard interface)
│   ├── 10.210.0.2     Container 1 on machine 0
│   ├── 10.210.0.3     Container 2 on machine 0
│   └── ...
├── 10.210.1.0/24      Machine 1 subnet
│   ├── 10.210.1.1     Machine 1 (WireGuard interface)
│   ├── 10.210.1.2     Container 1 on machine 1
│   └── ...
└── 10.210.2.0/24      Machine 2 subnet
    ├── 10.210.2.1     Machine 2 (WireGuard interface)
    ├── 10.210.2.2     Container 1 on machine 2
    └── ...
Each machine gets a dedicated /24 subnet (256 addresses) with this layout:
  • .0 - Network address (reserved)
  • .1 - Machine’s WireGuard interface
  • .2-.254 - Available for containers (253 IPs)
  • .255 - Broadcast address (reserved)
This design supports up to 256 machines, each running up to 253 containers. This is more than sufficient for most deployments.
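The addressing scheme above is mechanical enough to sketch in a few lines. These helper functions are ours, not part of Uncloud's CLI; they just illustrate how a machine index maps to a subnet, gateway, and container IPs:

```shell
# Illustrative helpers (not Uncloud commands): derive mesh addresses
# from a machine index (0-255) under the 10.210.0.0/16 scheme.

# /24 subnet for a given machine index
machine_subnet() {
    printf '10.210.%d.0/24\n' "$1"
}

# WireGuard interface address for a given machine index
machine_gateway() {
    printf '10.210.%d.1\n' "$1"
}

# IP of the Nth container on a machine (container 1 maps to .2)
container_ip() {
    printf '10.210.%d.%d\n' "$1" "$(($2 + 1))"
}

machine_subnet 1    # 10.210.1.0/24
machine_gateway 1   # 10.210.1.1
container_ip 1 1    # 10.210.1.2
```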

WireGuard Configuration

Interface Setup

Each machine runs a WireGuard interface named uncloud:
# View WireGuard configuration
sudo wg show uncloud
The interface is configured with:
  • Listen port: 51820 (UDP)
  • Private key: Unique per machine (stored securely)
  • Address: First IP from machine’s subnet (e.g., 10.210.0.1/32)
  • Allowed IPs: Entire /16 range for routing all mesh traffic
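Put together, the interface half of the configuration resembles the following wg-quick-style fragment. The exact file layout is an assumption and the key is a placeholder; Uncloud generates and manages the real configuration:

```ini
# Illustrative wg-quick-style fragment for machine 0.
# The private key is a placeholder; Uncloud manages the real file.
[Interface]
PrivateKey = <machine-0-private-key>
Address = 10.210.0.1/32
ListenPort = 51820
```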

Peer Configuration

For each peer machine, WireGuard maintains:
  • Public key - Identifies the peer
  • Endpoint - IP address and port to reach the peer
  • Allowed IPs - Which IPs should route through this peer
  • Persistent keepalive - 25-second interval to maintain connections
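A single peer entry combining these fields looks roughly like this. The key and endpoint are placeholders (203.0.113.10 is a documentation address), and Uncloud maintains these entries automatically:

```ini
# Illustrative [Peer] entry for one remote machine.
[Peer]
PublicKey = <machine-1-public-key>
Endpoint = 203.0.113.10:51820
AllowedIPs = 10.210.1.0/24
PersistentKeepalive = 25
```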

Key Management

WireGuard keys are generated automatically:
  1. When a machine is initialized/added, a new key pair is created
  2. The public key is stored in cluster state
  3. The private key is stored securely on the machine
  4. All machines retrieve peer public keys from cluster state
  5. WireGuard is configured to accept traffic from known peers
Private keys never leave the machine they’re generated on. Only public keys are shared through cluster state.

Docker Bridge Network

Each machine has a Docker bridge network connected to the WireGuard interface.

Bridge Configuration

The bridge is configured with:
  • Name: uncloud (same as WireGuard interface)
  • Subnet: Machine’s /24 subnet (e.g., 10.210.0.0/24)
  • Gateway: Machine’s IP (e.g., 10.210.0.1)
  • IP range: .2-.254 for container assignment

How It Works

Container (10.210.0.2)
  → Docker bridge (10.210.0.0/24)
  → WireGuard interface (10.210.0.1)
  → WireGuard tunnel
  → Remote machine (10.210.1.1)
  → Remote container (10.210.1.2)
When a container sends traffic:
  1. Packet enters Docker bridge with container’s IP as source
  2. Bridge routes to WireGuard interface (no NAT needed)
  3. WireGuard encrypts and sends through appropriate tunnel
  4. Remote WireGuard decrypts and forwards to destination
  5. Destination sees original container IP
There is no Network Address Translation (NAT). Containers communicate using their real mesh IPs, which simplifies networking and debugging.

Container IP Addressing

IP Assignment

When a container starts:
  1. Docker assigns an IP from the machine’s subnet pool
  2. The container gets a cluster-unique IP address
  3. IP is registered in cluster state
  4. DNS server updates to resolve service names to this IP
Example on machine 0 (10.210.0.0/24):
# First container gets .2
docker run -d myapp:latest
# IP: 10.210.0.2

# Second container gets .3  
docker run -d myapp:latest
# IP: 10.210.0.3

IP Stability

Container IPs are not stable across restarts. When a container stops and a new one starts, it may get a different IP. For stable addressing, use:
  • Service names - DNS resolves to current container IPs
  • Ingress hostnames - External access via stable domain names
Service discovery via DNS handles IP changes automatically, so your application code doesn’t need to track IPs.

Cross-Machine Communication

Container-to-Container

Containers on different machines communicate directly:
services:
  web:
    image: nginx
    # Runs on machine 0: 10.210.0.2
  
  api:
    image: myapp/api  
    # Runs on machine 1: 10.210.1.2
The web container can reach api at:
  • Service DNS: api.internal
  • Direct IP: 10.210.1.2
Traffic flows:
web (10.210.0.2)
  → Docker bridge on machine 0
  → WireGuard (10.210.0.1)
  → Encrypted tunnel
  → WireGuard (10.210.1.1) 
  → Docker bridge on machine 1
  → api (10.210.1.2)

Machine-to-Container

The machine itself can reach its own containers and containers on other machines:
# From machine 0, ping container on machine 1
ping 10.210.1.2

# Query service via DNS
curl http://api.internal:8000

Container-to-Internet

Containers can reach the internet through the host’s default gateway:
  1. Container sends to external IP
  2. Docker bridge uses MASQUERADE NAT
  3. Packet exits via host’s primary interface
  4. Response follows same path in reverse
Only traffic to the mesh network (10.210.0.0/16) goes through WireGuard. Internet traffic uses the host’s normal routing.
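This split can be sketched as a simple prefix check. In reality the kernel's routing table makes the decision via longest-prefix match; this illustrative function just mirrors the 10.210.0.0/16 rule:

```shell
# Illustrative only: mirrors the kernel's routing decision for the
# 10.210.0.0/16 mesh prefix. Anything else takes the default route.
route_for() {
    case "$1" in
        10.210.*) echo "wireguard mesh (no NAT)" ;;
        *)        echo "default route (MASQUERADE NAT)" ;;
    esac
}

route_for 10.210.1.2     # wireguard mesh (no NAT)
route_for 93.184.216.34  # default route (MASQUERADE NAT)
```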

NAT Traversal

Uncloud uses several techniques to establish connections between machines behind NAT:

Endpoint Discovery

When a machine is added:
  1. It discovers its public IP and local IPs
  2. Registers all IPs as potential endpoints in cluster state
  3. Other machines try connecting to each endpoint
  4. Successful endpoints are used for WireGuard configuration

Persistent Keepalive

WireGuard sends keepalive packets every 25 seconds:
  • Maintains NAT mapping at firewalls
  • Detects if a peer changes IP
  • Works with most firewall timeout policies

Symmetric NAT Handling

If both machines are behind symmetric NAT:
  • At least one machine should have a public IP or port forwarding
  • That machine acts as a relay for the initial connection
  • Once the connection is established, the peers can communicate directly
In practice, most deployments have at least one cloud VM with a public IP, which serves as a stable anchor for other machines to connect through.

Routing and Forwarding

Routing Table

Each machine’s routing table includes:
# Route for entire mesh via WireGuard
10.210.0.0/16 dev uncloud

# Route for local subnet via Docker bridge
# (more specific, so it wins by longest-prefix match)
10.210.0.0/24 dev uncloud
This ensures:
  • Local containers are reached via bridge (fast)
  • Remote containers are reached via WireGuard tunnel
  • No manual routing configuration needed

IP Forwarding

Linux IP forwarding must be enabled:
# Check if enabled
sysctl net.ipv4.ip_forward
# Should be: net.ipv4.ip_forward = 1
Uncloud configures this automatically during installation.
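If you ever need to set it by hand, a sysctl drop-in along these lines works; the file name here is an example, not something Uncloud creates:

```ini
# /etc/sysctl.d/99-wireguard-mesh.conf (example file name)
# Apply with: sudo sysctl --system
net.ipv4.ip_forward = 1
```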

Firewall Configuration

Required Ports

For proper mesh operation, allow:
  • UDP 51820 - WireGuard (inbound and outbound)
For ingress services, allow:
  • TCP 80 - HTTP (if publishing HTTP services)
  • TCP 443 - HTTPS (if publishing HTTPS services)

iptables Rules

Uncloud configures iptables to:
  • Allow forwarding between WireGuard and Docker bridge
  • MASQUERADE outbound internet traffic from containers
  • Accept WireGuard protocol packets
You can view the active rules with:
sudo iptables -L -v -n
sudo iptables -t nat -L -v -n
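The resulting rules look roughly like this iptables-save fragment. The chain layout and match details are assumptions for illustration; inspect your own machine for the authoritative rules:

```text
*filter
# Forward traffic to and from the WireGuard interface
-A FORWARD -i uncloud -j ACCEPT
-A FORWARD -o uncloud -j ACCEPT
COMMIT

*nat
# NAT only traffic leaving the mesh for the internet
-A POSTROUTING -s 10.210.0.0/24 ! -d 10.210.0.0/16 -j MASQUERADE
COMMIT
```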

Performance Considerations

Latency

WireGuard adds minimal latency:
  • Same machine: ~0.1ms (direct bridge)
  • Same datacenter: ~1-2ms (WireGuard overhead + network)
  • Cross-region: Depends on physical distance

Throughput

WireGuard can handle:
  • Multi-gigabit throughput on modern CPUs
  • Minimal CPU usage (less than 5% for most workloads)
  • Efficient packet processing in kernel space

MTU and Fragmentation

WireGuard reduces the effective MTU due to encryption and encapsulation overhead:
  • Standard Ethernet MTU: 1500 bytes
  • WireGuard overhead: 60 bytes over IPv4 (80 over IPv6)
  • Effective MTU: 1420 bytes, the conservative default that covers both cases
This is configured automatically by Docker and WireGuard.

Troubleshooting

Check WireGuard Status

# View interface status
sudo wg show

# View peer connections
sudo wg show uncloud peers

# Check last handshake times
sudo wg show uncloud latest-handshakes

Test Connectivity

# Ping another machine's WireGuard IP
ping 10.210.1.1

# Ping a container on another machine
ping 10.210.1.2

# Trace route to container
traceroute 10.210.1.2

Check Routes

# View routing table
ip route show

# Should include:
# 10.210.0.0/16 dev uncloud

DNS Resolution

# Test service discovery
nslookup api.internal 10.210.0.1

# Query DNS server directly
dig @10.210.0.1 api.internal
If containers can’t communicate across machines, check that WireGuard tunnels are established, IP forwarding is enabled, and firewall rules allow traffic.

Further Reading

  • Machines - Learn about machine initialization and subnet allocation
  • Services - Understand service discovery and DNS
  • Architecture - See how networking fits into overall design
  • Network Troubleshooting - Debug common networking issues