Understand how the Headscale network operates, including peer-to-peer connectivity, DERP relays, subnet routing, and exit node configuration.
Network topology
Mesh VPN architecture
Headscale creates a flat mesh network where every node can communicate directly with every other node:
┌─────────────┐
│ Headscale │ ← Control plane (coordinates connections)
│ Server │
└──────┬──────┘
│
│ Node registration & coordination
│
┌───┴───┬───────┬───────┬────────┐
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐
│Node1│ │Node2│ │Node3│ │Node4│ │Node5│
└──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘
│ │ │ │ │
└───────┴───────┴───────┴────────┘
Peer-to-peer WireGuard tunnels
IP address allocation
Headscale assigns IP addresses from configured prefixes:
prefixes:
  v6: fd7a:115c:a1e0::/48
  v4: 100.64.0.0/10
IPv4 range: 100.64.0.0/10 (Carrier-Grade NAT range, RFC 6598)
Available: 100.64.0.0 - 100.127.255.255
Total addresses: 4,194,304 (2^22)
IPv6 range: fd7a:115c:a1e0::/48 (Unique Local Address, RFC 4193)
Private IPv6 space
Compatible with modern IPv6 networks
These IP addresses are only used within the VPN mesh. They don’t conflict with your local network.
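The figures above are easy to verify with Python's ipaddress module. This is just an illustrative check of the default prefixes, not anything Headscale itself runs:

```python
import ipaddress

# Headscale's default allocation prefixes
v4 = ipaddress.ip_network("100.64.0.0/10")       # Carrier-Grade NAT range
v6 = ipaddress.ip_network("fd7a:115c:a1e0::/48") # Unique Local Address space

print(v4.num_addresses)        # 4194304 (~4.2 million)
print(v4[0], "-", v4[-1])      # 100.64.0.0 - 100.127.255.255
print(v6.is_private)           # True (inside the fc00::/7 ULA block)
print(ipaddress.ip_address("100.64.0.5") in v4)  # True
```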
DERP relay servers
What is DERP?
DERP (Designated Encrypted Relay for Packets) is a fallback relay protocol used when direct peer-to-peer connections fail due to:
Symmetric NAT
Restrictive firewalls
CGNAT (Carrier-Grade NAT)
Network policies blocking UDP
Default configuration
By default, Headscale uses Tailscale’s public DERP servers:
derp:
  server:
    enabled: false
  urls:
    - https://controlplane.tailscale.com/derpmap/default
  auto_update_enabled: true
  update_frequency: 24h
Public DERP servers are distributed globally:
North America (multiple regions)
Europe (multiple regions)
Asia Pacific
South America
Clients automatically select the lowest-latency DERP server.
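Conceptually, that selection is just "probe each region with STUN, keep the one with the smallest round-trip time". A minimal sketch with made-up latency figures (region codes and numbers are illustrative, not real measurements):

```python
# Hypothetical STUN round-trip times per DERP region, in milliseconds
latencies = {
    "nyc": 12.4,
    "fra": 41.0,
    "syd": 180.2,
    "sao": 95.7,
}

# The client prefers the region with the lowest measured latency
preferred = min(latencies, key=latencies.get)
print(preferred)  # nyc
```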
Running your own DERP server
For complete self-hosting, enable the embedded DERP server:
derp:
  server:
    enabled: true
    region_id: 999
    region_code: "home"
    region_name: "Home DERP"
    stun_listen_addr: "0.0.0.0:3478"
Enable DERP server
Edit config/config.yaml and set derp.server.enabled: true.
Expose DERP ports
Add to docker-compose.yml under the headscale service:
ports:
  - "3478:3478/udp"  # STUN
  - "8080:8080/tcp"  # DERP over HTTP
Configure firewall
Allow UDP traffic on port 3478, for example with ufw: sudo ufw allow 3478/udp (or the equivalent rule in iptables/firewalld).
Restart Headscale
docker compose restart headscale
Running your own DERP server means you’re responsible for its uptime. If it goes down, nodes that can’t connect directly will lose connectivity.
Subnet routing
Overview
Subnet routing allows nodes on the Tailscale network to access devices that aren’t running Tailscale:
Tailscale Network Local Network
(100.64.0.0/10) (192.168.1.0/24)
┌─────────────┐ ┌──────────────┐
│ Client │ │ Printer │
│ 100.64.0.5 │ │ 192.168.1.10 │
└──────┬──────┘ └──────────────┘
│ ▲
│ │
│ ┌──────────────────┐ │
└───>│ Subnet Router │──────┘
│ 100.64.0.10 │
│ 192.168.1.1 │
└──────────────────┘
Setting up a subnet router
Enable IP forwarding
On the router node:
# Temporary (until reboot)
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
# Permanent
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Configure firewall
Allow forwarding between interfaces.
With iptables:
sudo iptables -A FORWARD -i tailscale0 -j ACCEPT
sudo iptables -A FORWARD -o tailscale0 -j ACCEPT
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
With firewalld:
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --permanent --add-rich-rule='rule family=ipv4 source address=100.64.0.0/10 masquerade'
sudo firewall-cmd --reload
With ufw:
sudo ufw route allow in on tailscale0 out on eth0
sudo ufw route allow in on eth0 out on tailscale0
Advertise routes
sudo tailscale up \
--login-server http://localhost:8000 \
--advertise-routes=192.168.1.0/24,192.168.3.0/24 \
--authkey YOUR_PREAUTH_KEY
Approve routes
On the Headscale server:
# List advertised routes
docker exec headscale headscale routes list
# Enable specific route
docker exec headscale headscale routes enable --route-id <id>
Auto-approval with ACL policies
Automate route approval for trusted tags:
{
  "tagOwners": {
    "tag:router": ["user:admin"]
  },
  "autoApprovers": {
    "routes": {
      "192.168.0.0/16": ["tag:router"],
      "10.0.0.0/8": ["tag:router"]
    }
  }
}
Tag your router node:
docker exec headscale headscale nodes tag --identifier <node-id> --tags tag:router
Use auto-approval for infrastructure you control to streamline deployments.
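The auto-approval check boils down to: is the advertised route contained in an approved prefix, and does the node carry one of that prefix's tags? A simplified sketch mirroring the policy above (this is not Headscale's actual implementation, just the matching logic):

```python
import ipaddress

# Mirrors the autoApprovers.routes block above
auto_approvers = {
    "192.168.0.0/16": ["tag:router"],
    "10.0.0.0/8": ["tag:router"],
}

def is_auto_approved(route: str, node_tags: set[str]) -> bool:
    """Approve if the route falls inside an approved prefix
    and the node carries one of that prefix's tags."""
    advertised = ipaddress.ip_network(route)
    for prefix, tags in auto_approvers.items():
        if advertised.subnet_of(ipaddress.ip_network(prefix)) and node_tags & set(tags):
            return True
    return False

print(is_auto_approved("192.168.1.0/24", {"tag:router"}))  # True
print(is_auto_approved("172.16.0.0/24", {"tag:router"}))   # False: outside both prefixes
```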
Exit nodes
Overview
Exit nodes route all internet traffic through a specific node, useful for:
Accessing geo-restricted content
Securing traffic on untrusted networks
Routing through your home network while traveling
Your Device Exit Node Internet
│ │ │
│ │ │
├──All traffic────────────>│ │
│ (encrypted WireGuard) │ │
│ ├──Decrypted traffic────>│
│ │ │
│ │<──────Response─────────│
│<─────Encrypted───────────│ │
Configuring an exit node
Enable IP forwarding and NAT
# Enable forwarding
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
# Configure NAT masquerading
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Advertise as exit node
sudo tailscale up \
--login-server http://localhost:8000 \
--advertise-exit-node \
--authkey YOUR_PREAUTH_KEY
Approve exit node
In Headscale: docker exec headscale headscale routes list
docker exec headscale headscale routes enable --route-id < i d >
You’ll see routes for 0.0.0.0/0 (IPv4) and ::/0 (IPv6).
Using an exit node
On the client device:
# List available exit nodes
tailscale exit-node list
# Use a specific exit node
tailscale set --exit-node=<node-name-or-ip>
# Stop using exit node
tailscale set --exit-node=
Via GUI : In Tailscale app → Settings → Use exit node → Select node
Auto-approval for exit nodes
{
  "tagOwners": {
    "tag:exit-node": ["user:admin"]
  },
  "autoApprovers": {
    "exitNode": ["tag:exit-node"]
  }
}
Exit nodes can see all your internet traffic. Only use exit nodes you trust and control.
MagicDNS
Configuration
MagicDNS provides automatic hostname resolution within your network:
dns:
  magic_dns: true
  base_domain: headscale.net
  nameservers:
    global:
      - 1.1.1.1
      - 1.0.0.1
How it works
Each node gets a hostname based on:
Node name: my-laptop
User name: alice
Base domain: headscale.net
Resulting hostname: my-laptop.alice.headscale.net
Usage:
ping my-laptop.alice.headscale.net
ssh alice@my-laptop.alice.headscale.net
curl http://my-app.alice.headscale.net:8080
Custom DNS resolution
MagicDNS handles:
.headscale.net queries → Resolved to Tailscale IPs
All other queries → Forwarded to global nameservers (1.1.1.1, 1.0.0.1)
MagicDNS eliminates the need to remember IP addresses. Use human-friendly hostnames instead.
Advanced networking
Split DNS
Direct specific domains to internal DNS servers:
dns:
  magic_dns: true
  nameservers:
    global:
      - 1.1.1.1
    split:
      corp.example.com:
        - 192.168.1.53
Queries for *.corp.example.com go to 192.168.1.53, everything else to 1.1.1.1.
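The routing decision is essentially a domain-suffix match on the query name before falling back to the global resolvers. A sketch mirroring the config above (not headscale's actual resolver code):

```python
# Mirrors the split-DNS config above
GLOBAL_NS = ["1.1.1.1"]
SPLIT = {"corp.example.com": ["192.168.1.53"]}

def pick_nameservers(qname: str) -> list[str]:
    """Route a DNS query: split-domain suffix match first, then global."""
    qname = qname.rstrip(".").lower()
    for domain, servers in SPLIT.items():
        if qname == domain or qname.endswith("." + domain):
            return servers
    return GLOBAL_NS

print(pick_nameservers("git.corp.example.com"))  # ['192.168.1.53']
print(pick_nameservers("example.org"))           # ['1.1.1.1']
```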
Multi-site connectivity
Connect multiple office networks:
Office A (10.0.1.0/24) Office B (10.0.2.0/24)
│ │
│ │
┌────▼────┐ ┌────▼────┐
│ Router A│───Tailscale───│ Router B│
└─────────┘ └─────────┘
│ │
Advertises Advertises
10.0.1.0/24 10.0.2.0/24
Both routers advertise their local subnets. Devices in Office A can reach Office B through the mesh.
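From a client's perspective, reaching the other office is a longest-prefix lookup over the approved subnet routes. A sketch using the two subnets above (router names are illustrative):

```python
import ipaddress

# Advertised subnet routes and the router node serving each
routes = {
    "10.0.1.0/24": "router-a",
    "10.0.2.0/24": "router-b",
}

def next_hop(dst: str):
    """Return the subnet router responsible for a destination IP, if any."""
    addr = ipaddress.ip_address(dst)
    candidates = [
        (ipaddress.ip_network(prefix), router)
        for prefix, router in routes.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    # Longest prefix wins if several advertised routes overlap
    return max(candidates, key=lambda c: c[0].prefixlen)[1] if candidates else None

print(next_hop("10.0.2.20"))  # router-b
print(next_hop("8.8.8.8"))    # None: no subnet route; goes direct or via exit node
```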
Docker container networking
Connect containers to Tailscale using the sidecar pattern:
services:
  myapp:
    image: myapp:latest
    network_mode: "service:tailscale"
    depends_on:
      - tailscale
  tailscale:
    image: tailscale/tailscale:latest
    hostname: myapp-container
    environment:
      - TS_AUTHKEY=YOUR_KEY
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_EXTRA_ARGS=--login-server=https://headscale.yourdomain.com
    volumes:
      - tailscale-data:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    restart: unless-stopped
volumes:
  tailscale-data:
The app container shares the Tailscale container’s network namespace.
Troubleshooting
Connection diagnostics
# Check Tailscale status
tailscale status
# Test network paths
tailscale netcheck
# View DERP latency
tailscale netcheck --verbose
# Debug connectivity
tailscale ping <node-ip>
Common issues
Nodes can't connect directly (always via DERP)
Cause: NAT traversal failing
Solutions:
Enable UPnP/NAT-PMP on router
Configure port forwarding for UDP 41641
Check firewall allows UDP traffic
Verify STUN server is reachable
MagicDNS hostnames not resolving
Verify:
MagicDNS enabled in config.yaml
Client OS accepts Tailscale DNS settings
No DNS override in /etc/resolv.conf
Try direct IP instead of hostname
Routes guide: Step-by-step subnet routing and exit node setup
DNS & MagicDNS: Configure DNS resolution and custom records
Architecture: Overall system architecture and components
Troubleshooting: Diagnose and fix connectivity issues