The backup.sh script creates comprehensive backups of the Headscale stack, including the PostgreSQL database, configuration files, and Headscale data directory. It automatically cleans up old backups to manage disk space.
Location
scripts/backup.sh, relative to the project root.
Usage
Run the script from the project root:
./scripts/backup.sh
The script runs non-interactively and creates timestamped backups in the ./backups/ directory.
What Gets Backed Up
The script creates three separate backup files:
1. PostgreSQL Database
File: backups/database_YYYYMMDD_HHMMSS.sql
Contains a complete dump of the Headscale PostgreSQL database, including:
- User accounts
- Node registrations
- Pre-auth keys
- Routes
- ACL policies
- All metadata
2. Configuration Files
File: backups/config_YYYYMMDD_HHMMSS.tar.gz
Compressed archive containing:
config/ directory (Headscale configuration)
.env file (environment variables)
3. Headscale Data
File: backups/data_YYYYMMDD_HHMMSS.tar.gz
Compressed archive of the data/ directory containing:
- Headscale server keys
- Internal state files
- SQLite database (if using SQLite instead of PostgreSQL)
Backup Process
The script executes the following steps:
# 1. Create backup directory
mkdir -p ./backups
# 2. Backup PostgreSQL database
docker exec headscale-db pg_dump -U headscale headscale > backups/database_TIMESTAMP.sql
# 3. Backup configuration files
tar -czf backups/config_TIMESTAMP.tar.gz config/ .env
# 4. Backup Headscale data directory
tar -czf backups/data_TIMESTAMP.tar.gz data/
# 5. Clean up old backups (older than 7 days)
find backups/ -name "*.sql" -mtime +7 -delete
find backups/ -name "*.tar.gz" -mtime +7 -delete
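Assembled into one script, the steps above look roughly like the following sketch. The container and user names (headscale-db, headscale) follow this guide; the existence guards are defensive additions so a partial deployment does not abort the run.

```shell
#!/bin/bash
# Minimal backup.sh sketch assembling the steps above.
set -euo pipefail

BACKUP_DIR="./backups"
TIMESTAMP="$(date +%Y%m%d_%H%M%S)"

mkdir -p "$BACKUP_DIR"

# 1. PostgreSQL dump (only when the database container is running)
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qx headscale-db; then
    docker exec headscale-db pg_dump -U headscale headscale \
        > "$BACKUP_DIR/database_${TIMESTAMP}.sql"
fi

# 2. Configuration archive (config/ plus the .env file)
[ -d config ] && tar -czf "$BACKUP_DIR/config_${TIMESTAMP}.tar.gz" config/ .env

# 3. Data directory archive (may be absent on a fresh install)
[ -d data ] && tar -czf "$BACKUP_DIR/data_${TIMESTAMP}.tar.gz" data/

# 4. Retention: drop backups older than 7 days
find "$BACKUP_DIR" -name "*.sql" -mtime +7 -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete

echo "Backup completed: $BACKUP_DIR"
```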
Output Example
./scripts/backup.sh
Starting backup...
Backing up PostgreSQL database...
Database backed up to: backups/database_20260304_120000.sql
Backing up configuration files...
Configuration backed up to: backups/config_20260304_120000.tar.gz
Backing up Headscale data...
Data backed up to: backups/data_20260304_120000.tar.gz
Backup completed successfully!
Backup location: backups
Cleaning up old backups...
Old backups cleaned up
Automatic Cleanup
The script automatically removes backups older than 7 days:
find "$BACKUP_DIR" -name "*.sql" -mtime +7 -delete
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete
Adjust the retention period by changing +7 to a different number of days in the script (find's -mtime +7 matches files last modified more than 7 × 24 hours ago).
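The retention rule can be exercised in isolation with a throwaway directory; this demo (which assumes GNU touch for the -d "10 days ago" syntax) shows that files past the cutoff are deleted while newer ones survive:

```shell
#!/bin/bash
# Throwaway demonstration of the -mtime +7 retention rule.
set -euo pipefail

demo="$(mktemp -d)"
touch "$demo/database_new.sql"                      # created just now
touch -d "10 days ago" "$demo/database_old.sql"     # past the 7-day window
touch -d "10 days ago" "$demo/config_old.tar.gz"

# The same expressions backup.sh uses for cleanup
find "$demo" -name "*.sql" -mtime +7 -delete
find "$demo" -name "*.tar.gz" -mtime +7 -delete

ls "$demo"    # only database_new.sql remains
rm -rf "$demo"
```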
Backup Directory Structure
After running the script, your backups/ directory will contain:
backups/
├── database_20260304_120000.sql
├── config_20260304_120000.tar.gz
├── data_20260304_120000.tar.gz
├── database_20260303_120000.sql
├── config_20260303_120000.tar.gz
└── data_20260303_120000.tar.gz
Restore Procedures
Restore PostgreSQL Database
# Stop the Headscale service
docker compose stop headscale
# Drop and recreate the database so the restore starts from a clean slate
docker exec headscale-db psql -U headscale -d postgres -c "DROP DATABASE IF EXISTS headscale;"
docker exec headscale-db psql -U headscale -d postgres -c "CREATE DATABASE headscale;"
# Restore the database
docker exec -i headscale-db psql -U headscale -d headscale < backups/database_20260304_120000.sql
# Restart services
docker compose start headscale
Restoring the database will overwrite all current data. Ensure you have a recent backup before proceeding.
Restore Configuration Files
# Stop services
docker compose down
# Extract configuration
tar -xzf backups/config_20260304_120000.tar.gz
# Restart services
docker compose up -d
Restore Headscale Data
# Stop Headscale
docker compose stop headscale
# Extract data
tar -xzf backups/data_20260304_120000.tar.gz
# Restart Headscale
docker compose start headscale
Complete Restore
For a full system restore:
# Stop all services
docker compose down
# Restore configuration
tar -xzf backups/config_20260304_120000.tar.gz
# Restore data
tar -xzf backups/data_20260304_120000.tar.gz
# Start services
docker compose up -d
# Wait for services to be ready
sleep 10
# Restore database
docker exec -i headscale-db psql -U headscale -d headscale < backups/database_20260304_120000.sql
# Restart Headscale
docker compose restart headscale
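The complete-restore steps can be wrapped in a small helper. This restore.sh is a hypothetical sketch (not part of the stock scripts) that validates the requested timestamp and refuses to run when any backup file is missing:

```shell
#!/bin/bash
# Hypothetical restore.sh sketch: full-system restore for one timestamp.
set -euo pipefail

restore() {
    local ts="${1:-}"
    if [ -z "$ts" ]; then
        echo "usage: restore TIMESTAMP (e.g. 20260304_120000)" >&2
        return 1
    fi
    local f
    for f in "backups/config_${ts}.tar.gz" \
             "backups/data_${ts}.tar.gz" \
             "backups/database_${ts}.sql"; do
        [ -f "$f" ] || { echo "missing backup file: $f" >&2; return 1; }
    done

    docker compose down
    tar -xzf "backups/config_${ts}.tar.gz"
    tar -xzf "backups/data_${ts}.tar.gz"
    docker compose up -d
    sleep 10   # wait for the database to accept connections
    docker exec -i headscale-db psql -U headscale < "backups/database_${ts}.sql"
    docker compose restart headscale
}

# Guard demonstration: a call without a timestamp is rejected up front
restore 2>/dev/null || echo "refused: no timestamp given"
```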
Scheduling Automated Backups
Using Cron (Linux/macOS)
Add to your crontab:
# Edit crontab
crontab -e
# Add daily backup at 2 AM
0 2 * * * cd /path/to/headscale-stack && ./scripts/backup.sh >> /var/log/headscale-backup.log 2>&1
Example schedules:
# Daily at 2 AM
0 2 * * * cd /path/to/headscale && ./scripts/backup.sh
# Every 12 hours
0 */12 * * * cd /path/to/headscale && ./scripts/backup.sh
# Weekly on Sunday at 3 AM
0 3 * * 0 cd /path/to/headscale && ./scripts/backup.sh
Using systemd Timer (Linux)
Create /etc/systemd/system/headscale-backup.service:
[Unit]
Description=Headscale Backup
After=docker.service
[Service]
Type=oneshot
WorkingDirectory=/path/to/headscale-stack
ExecStart=/path/to/headscale-stack/scripts/backup.sh
User=youruser
Create /etc/systemd/system/headscale-backup.timer:
[Unit]
Description=Daily Headscale Backup
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable headscale-backup.timer
sudo systemctl start headscale-backup.timer
# Check timer status
sudo systemctl list-timers --all | grep headscale
Best Practices
Retention Policy
The default 7-day retention is suitable for development. For production:
- Daily backups: Keep 30 days
- Weekly backups: Keep 12 weeks (3 months)
- Monthly backups: Keep 12 months
Example multi-tier backup script:
# Daily backups (keep 30 days)
find backups/daily/ -name "*.sql" -mtime +30 -delete
# Weekly backups (keep 90 days)
find backups/weekly/ -name "*.sql" -mtime +90 -delete
# Monthly backups (keep 365 days)
find backups/monthly/ -name "*.sql" -mtime +365 -delete
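One way to implement the tiers is to promote the newest daily dump on a schedule. The directory layout and promotion rules below are an illustrative assumption, not part of the stock backup.sh:

```shell
#!/bin/bash
# Illustrative multi-tier rotation: the newest daily dump is promoted to
# weekly/ on Sundays and to monthly/ on the 1st of the month.
set -euo pipefail

BACKUP_DIR="./backups"
mkdir -p "$BACKUP_DIR/daily" "$BACKUP_DIR/weekly" "$BACKUP_DIR/monthly"

latest="$(ls -t "$BACKUP_DIR"/daily/*.sql 2>/dev/null | head -n1 || true)"
if [ -n "$latest" ]; then
    [ "$(date +%u)" = "7" ] && cp "$latest" "$BACKUP_DIR/weekly/"
    [ "$(date +%d)" = "01" ] && cp "$latest" "$BACKUP_DIR/monthly/"
fi

# Tiered retention matching the periods above
find "$BACKUP_DIR/daily"   -name "*.sql" -mtime +30  -delete
find "$BACKUP_DIR/weekly"  -name "*.sql" -mtime +90  -delete
find "$BACKUP_DIR/monthly" -name "*.sql" -mtime +365 -delete
```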
Off-Site Storage
For production environments, copy backups to remote storage:
Using rclone:
#!/bin/bash
# Run backup
./scripts/backup.sh
# Copy to remote storage
rclone copy backups/ remote:headscale-backups/
Using rsync:
#!/bin/bash
./scripts/backup.sh
rsync -avz backups/ user@backup-server:/backups/headscale/
Using S3:
#!/bin/bash
./scripts/backup.sh
aws s3 sync backups/ s3://my-bucket/headscale-backups/
Testing Backups
Regularly verify backups are restorable:
# Test database backup integrity
cat backups/database_20260304_120000.sql | docker exec -i headscale-db psql -U headscale -f - > /dev/null
# Test archive integrity
tar -tzf backups/config_20260304_120000.tar.gz > /dev/null
tar -tzf backups/data_20260304_120000.tar.gz > /dev/null
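To see why the integrity check matters, this self-contained demo builds an archive, truncates a copy to simulate corruption, and shows that gzip -t rejects the damaged file:

```shell
#!/bin/bash
# gzip -t catches a truncated archive that a plain file-exists check misses.
set -euo pipefail

d="$(mktemp -d)"
echo "hello" > "$d/file"
tar -czf "$d/ok.tar.gz" -C "$d" file
head -c 20 "$d/ok.tar.gz" > "$d/truncated.tar.gz"   # simulate corruption

gzip -t "$d/ok.tar.gz"                              # exits 0: archive intact
if gzip -t "$d/truncated.tar.gz" 2>/dev/null; then
    echo "unexpected: truncated archive passed"
else
    echo "truncated archive correctly rejected"
fi
rm -rf "$d"
```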
Monitoring Backup Success
Create a monitoring script:
#!/bin/bash
# Alert if no database backup was created in the last 25 hours
# (-mmin -1500 = 25 hours, leaving slack around a daily cron run)
if ! find backups/ -name "database_*.sql" -mmin -1500 | grep -q .; then
    echo "ERROR: No recent database backup found!"
    # Send alert (email, Slack, etc.)
fi
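The freshness check can be exercised against a throwaway directory; here -mmin -1500 (25 hours) is used as the cutoff so a daily cron run has some slack:

```shell
#!/bin/bash
# Demonstrates both branches of the freshness check on dummy files.
set -euo pipefail

d="$(mktemp -d)"
touch "$d/database_20260304_120000.sql"

if find "$d" -name "database_*.sql" -mmin -1500 | grep -q .; then
    echo "recent backup found"
fi

rm "$d"/database_*.sql
if ! find "$d" -name "database_*.sql" -mmin -1500 | grep -q .; then
    echo "ERROR: No recent database backup found!"
fi
rm -rf "$d"
```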
Disk Space Considerations
Backup sizes depend on your deployment:
- Database: Typically 1-100 MB depending on nodes and history
- Configuration: < 1 MB
- Data directory: Varies based on key material and state
Check backup sizes:
du -sh backups/
du -h backups/* | sort -h
Ensure sufficient disk space for backups. Monitor disk usage regularly, especially during the first week after deployment, before the retention window begins deleting old backups.
Troubleshooting
Database Container Not Running
Error:
Error: No such container: headscale-db
Solution:
# Check container status
docker compose ps
# Start database if stopped
docker compose start headscale-db
Permission Denied
Error:
tar: cannot open: Permission denied
Solution:
# Make script executable
chmod +x scripts/backup.sh
# Ensure backup directory is writable
mkdir -p backups
chmod 755 backups
Data Directory Missing
Warning:
tar: data/: Cannot stat: No such file or directory
Explanation:
The data directory may not exist if using a fresh installation or different storage backend. The script checks for the directory before attempting backup:
if [ -d "data" ]; then
tar -czf "$BACKUP_DIR/data_${TIMESTAMP}.tar.gz" data/
fi
SQLite vs PostgreSQL
This script is designed for the PostgreSQL deployment. If using SQLite, the database is stored in data/db.sqlite and backed up via the data directory backup.
For SQLite-only deployments, modify the script to skip PostgreSQL backup:
# Remove or comment out PostgreSQL backup section
# docker exec headscale-db pg_dump -U headscale headscale > ...
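If you do run on SQLite, a consistent snapshot can be taken with the sqlite3 CLI's online .backup command rather than copying the file while Headscale is writing to it. The data/db.sqlite path follows this guide and may differ in your config:

```shell
#!/bin/bash
# SQLite snapshot sketch using sqlite3's online ".backup" command.
set -euo pipefail

DB="data/db.sqlite"
OUT="backups/sqlite_$(date +%Y%m%d_%H%M%S).sqlite"

if command -v sqlite3 >/dev/null && [ -f "$DB" ]; then
    mkdir -p backups
    sqlite3 "$DB" ".backup '$OUT'"
    echo "SQLite database backed up to: $OUT"
else
    echo "sqlite3 not available or $DB not found; nothing to do"
fi
```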
Security Considerations
Backup Encryption
For sensitive environments, encrypt backups:
# Encrypt with GPG
tar -czf - config/ .env | gpg --symmetric --cipher-algo AES256 > backup.tar.gz.gpg
# Decrypt
gpg --decrypt backup.tar.gz.gpg | tar -xzf -
Access Control
Protect backup directory:
chmod 700 backups/
chown root:root backups/
Environment Variables
Backups include .env which may contain sensitive credentials. Ensure:
- Backup directory has restrictive permissions
- Off-site backups are encrypted
- Access logs are monitored
Quick Reference
Manual database backup:
docker exec headscale-db pg_dump -U headscale headscale > manual-backup.sql
Check database size:
docker exec headscale-db psql -U headscale -c "SELECT pg_size_pretty(pg_database_size('headscale'));"
List all backups:
ls -lh backups/
Remove old backups manually:
find backups/ -name "*.sql" -mtime +30 -delete