
Backup Strategy Overview

Homelab v3 implements a five-tier backup strategy designed to protect against different failure scenarios — from accidental deletions to complete site loss.

Multi-Tier Backup Architecture

| Tier | What | Tool | Destination | Recovery Speed |
|------|------|------|-------------|----------------|
| Tier 0 | VM/LXC snapshots | Proxmox Backup Server (PBS) | pbs-prod-01 VM → ZFS mirror share on NAS | Minutes |
| Tier 1 | Docker appdata + stacks | Hardened rsync script + Healthchecks | NAS /backups share (ZFS mirror pool) | Minutes |
| Tier 1 | Plex database | Dedicated backup script | NAS /backups/plex/db | Minutes |
| Tier 2 | NAS share snapshots | Unraid ZFS snapshots | Local ZFS snapshots on NAS | Seconds |
| Tier 3 | Off-box cold copy | Synology ABB (pull-based) | Synology NAS (SkyHawks), nightly | Hours |
| Tier 4 | Cloud backup | Backblaze B2 | Immich photos, critical backups offsite | Days |
PBS backs up VM disk images only — it does NOT back up application data inside VMs. Application-level backups (Docker appdata, Plex DB) remain essential and run independently of PBS.

Tier 0: Proxmox Backup Server

PBS Configuration

PBS runs as a VM (pbs-prod-01) on pve-prod-02 (Optiplex) and backs up all VMs and LXCs on both Proxmox nodes.
1

Access PBS Web Interface

Navigate to https://192.168.30.12:8007 and log in with root credentials or a Proxmox-linked account.
2

Configure Datastore

Datastore → Add Datastore
  • Name: backup-storage
  • Backing Path: /mnt/backups (NFS mount from NAS ZFS mirror pool)
  • GC Schedule: daily at 02:00
  • Prune Schedule: Keep last 7 daily, 4 weekly, 3 monthly
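If you prefer the PBS shell over the web UI, the datastore setup can be sketched with proxmox-backup-manager. The flags below are a sketch to verify against your PBS version (newer releases manage pruning via separate prune jobs rather than datastore options):

```shell
# Run on the PBS host (pbs-prod-01). Create the datastore on the NFS mount,
# then attach the garbage-collection schedule (calendar-event syntax).
proxmox-backup-manager datastore create backup-storage /mnt/backups
proxmox-backup-manager datastore update backup-storage --gc-schedule '02:00'
```

Configure the keep-daily/weekly/monthly retention in the UI (or a prune job) to match the schedule above.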
3

Add Proxmox Nodes as Backup Sources

From each Proxmox node UI: Datacenter → Storage → Add → Proxmox Backup Server
  • ID: pbs-prod-01
  • Server: 192.168.30.12
  • Username: root@pam
  • Datastore: backup-storage
  • Fingerprint: (copy from PBS dashboard)
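The same storage entry can be added from a node's shell with pvesm. This is a sketch: the fingerprint value is a placeholder you copy from the PBS dashboard, and you will be prompted for the password if it is not supplied:

```shell
# Run on each Proxmox node (fingerprint below is a placeholder).
pvesm add pbs pbs-prod-01 \
  --server 192.168.30.12 \
  --datastore backup-storage \
  --username root@pam \
  --fingerprint 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99'
```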
4

Create Backup Jobs

On each Proxmox node: Datacenter → Backup → Add
  • Storage: pbs-prod-01
  • Schedule: daily at 01:00
  • Selection Mode: All
  • Compression: zstd
  • Mode: Snapshot
  • Enable: ✓
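For a one-off manual run with the same settings, the UI job corresponds roughly to a vzdump invocation like this (a sketch; the scheduled job itself lives in the cluster configuration):

```shell
# Manual backup of all guests on this node to PBS, matching the job above.
vzdump --all --mode snapshot --compress zstd --storage pbs-prod-01
```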

Restoring from PBS

1

Browse Available Backups

In the Proxmox UI: Datacenter → Storage → pbs-prod-01 → Content. Find the VM/LXC backup you need to restore.
2

Restore VM or LXC

Right-click the backup → Restore
  • VM ID: (new ID or overwrite existing)
  • Storage: Select target storage
  • Start after restore: ✓ (optional)
3

Verify Restored System

After restore completes:
  • Check network configuration (static IPs may need adjustment)
  • Verify NFS mounts reconnected (df -h)
  • Check application-specific data (see Tier 1 restore)
Network Configuration: Restored VMs retain their original IP addresses. If restoring to a different node or as a clone, update /etc/network/interfaces or netplan config before starting.

Tier 1: Application Data Backups

Docker Appdata Backup

Runs on docker-prod-01 via a hardened rsync script with safety checks.
Location: /opt/scripts/backup-appdata.sh
Key Safety Features:
  • Mountpoint validation (fails if NFS unmounted)
  • Exclusive lockfile (prevents overlapping runs)
  • Healthchecks.io heartbeat monitoring
  • Pre-flight disk space check
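The safety pattern can be sketched in a few lines. This is not the real /opt/scripts/backup-appdata.sh; the paths, rsync flags, and ping URL are placeholders for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the safety checks in a hardened rsync backup script.
set -euo pipefail

# Mountpoint validation: refuse to run if the NFS share is not mounted,
# otherwise rsync would copy into an empty local directory.
preflight() {
  local mnt=$1
  mountpoint -q "$mnt" || { echo "ERROR: $mnt is not mounted" >&2; return 1; }
}

# Exclusive lockfile via flock(1): a second invocation fails fast instead
# of overlapping a run already in progress.
acquire_lock() {
  local lockfile=$1
  exec 9>"$lockfile"
  flock -n 9 || { echo "ERROR: another backup is running" >&2; return 1; }
}

main() {
  preflight /data
  acquire_lock /run/backup-appdata.lock
  rsync -aH /opt/appdata/ /data/backups/docker/appdata/
  rsync -aH /opt/stacks/  /data/backups/docker/stacks/
  curl -fsS --retry 3 "https://hc-ping.com/abc123..." > /dev/null  # heartbeat
}
```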
1

Manual Backup Execution

SSH to docker-prod-01:
sudo /opt/scripts/backup-appdata.sh
Script will:
  • Verify /data is mounted
  • Acquire lockfile
  • Rsync /opt/appdata → NAS /backups/docker/appdata
  • Rsync /opt/stacks → NAS /backups/docker/stacks
  • Ping Healthchecks.io on success
2

Verify Backup Completed

Check the Healthchecks.io dashboard (https://healthchecks.io) for the docker-appdata-backup check. It should show a green checkmark with a recent timestamp.

Restoring Docker Appdata

1

Stop All Containers

From docker-prod-01:
cd /opt/stacks
docker compose -f arr-stack/compose.yaml down
docker compose -f infra-stack/compose.yaml down
docker compose -f torrent-stack/compose.yaml down
docker compose -f books-stack/compose.yaml down
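Since the same stack list recurs for both down and up, a small helper keeps it in one place. This is a sketch (set DRY_RUN=1 to preview the commands; the order below brings infra-stack up first, matching the restart step, so adjust if you want a different shutdown order):

```shell
#!/usr/bin/env bash
# Run one docker compose action across every stack.
STACKS=(infra-stack arr-stack torrent-stack books-stack)

compose_all() {
  local action=$*
  local stack
  for stack in "${STACKS[@]}"; do
    if [[ ${DRY_RUN:-0} == 1 ]]; then
      # Dry run: print the command instead of executing it.
      echo "docker compose -f /opt/stacks/$stack/compose.yaml $action"
    else
      docker compose -f "/opt/stacks/$stack/compose.yaml" $action
    fi
  done
}
```

Usage: `compose_all down` before the restore, `compose_all up -d` afterwards.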
2

Restore Appdata from Backup

# Backup current state (just in case)
sudo mv /opt/appdata /opt/appdata.old

# Restore from NAS
sudo rsync -avH /data/backups/docker/appdata/ /opt/appdata/
sudo rsync -avH /data/backups/docker/stacks/ /opt/stacks/

# Fix ownership
sudo chown -R 2000:2000 /opt/appdata
sudo chown -R gio:gio /opt/stacks
3

Restart Containers

cd /opt/stacks
docker compose -f infra-stack/compose.yaml up -d
docker compose -f arr-stack/compose.yaml up -d
docker compose -f torrent-stack/compose.yaml up -d
docker compose -f books-stack/compose.yaml up -d
4

Verify Services

Check Homarr dashboard or Dockman:
  • All containers running
  • No permission errors in logs
  • Services accessible via Traefik

Tier 1: Plex Database Backup

A dedicated backup script runs on nas-prod-01 (Unraid) since Plex runs natively there.
Location: /boot/config/plugins/user.scripts/scripts/backup-plex-db/script
Schedule: Daily at 03:00 via the Unraid User Scripts plugin
1

Manual Plex Backup

From Unraid terminal or SSH:
/boot/config/plugins/user.scripts/scripts/backup-plex-db/script
Script performs:
  • Stops the Plex container gracefully
  • Rsyncs /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/ → /mnt/user/backups/plex/db/
  • Restarts the Plex container
  • Uses an EXIT trap to ensure Plex restarts even on failure
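The EXIT-trap guarantee can be demonstrated in isolation. In this sketch, echo stands in for the real docker and rsync commands:

```shell
#!/usr/bin/env bash
# The trap fires on ANY exit path from the subshell, so the "restart"
# step runs even when the copy step in the middle fails.
plex_backup_sketch() {
  (
    trap 'echo "docker start plex"' EXIT   # always restarts on the way out
    echo "docker stop plex"
    false                                  # simulated rsync failure
  )
}
```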
2

Verify Backup

Check backup directory:
ls -lh /mnt/user/backups/plex/db/
Should contain recent com.plexapp.plugins.library.db and related files.

Restoring Plex Database

Stop Plex first: Never restore Plex database while Plex is running. Corruption risk is high.
1

Stop Plex Container

From Unraid UI or terminal:
docker stop plex
2

Backup Current Database

mv "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases" \
   "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases.old"
3

Restore from Backup

rsync -avH /mnt/user/backups/plex/db/ \
  "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/"
4

Restart Plex

docker start plex
Plex will detect the restored database and rebuild indexes if needed.
5

Verify Library

Access Plex web UI → verify libraries are intact and playback works.

Tier 2: ZFS Snapshots

Unraid automatically snapshots the ZFS mirror pool (backups and photos shares).
Purpose: Instant recovery from accidental deletion or corruption within the last 7 days.

Viewing Snapshots

From Unraid terminal:
# List all snapshots for backups pool
zfs list -t snapshot -r backups

# List all snapshots for photos pool
zfs list -t snapshot -r photos

Restoring from ZFS Snapshot

1

Identify Target Snapshot

zfs list -t snapshot -r backups | grep "2026-03-01"
Note the full snapshot name, e.g., backups@auto-2026-03-01_0000
2

Restore Specific File or Directory

Snapshots are accessible at .zfs/snapshot/ in the dataset root:
# Browse snapshot
ls -la /mnt/backups/.zfs/snapshot/auto-2026-03-01_0000/docker/appdata/

# Copy specific file back
cp /mnt/backups/.zfs/snapshot/auto-2026-03-01_0000/docker/appdata/sonarr/config.xml \
   /mnt/backups/docker/appdata/sonarr/
3

Rollback Entire Dataset (Destructive)

This destroys all changes made after the snapshot. Only use if full rollback is required.
zfs rollback backups@auto-2026-03-01_0000
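When only part of the dataset changed, a writable clone is a non-destructive alternative to rollback. This sketch uses the snapshot name from step 1; the clone's mountpoint is assumed to be inherited under /mnt/backups:

```shell
# Expose the snapshot as a writable dataset, copy out what you need, drop it.
zfs clone backups@auto-2026-03-01_0000 backups/restore-tmp
cp -a /mnt/backups/restore-tmp/docker/appdata/sonarr /mnt/backups/docker/appdata/
zfs destroy backups/restore-tmp
```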

Tier 3: Off-Site Cold Copy (Synology)

Synology Active Backup for Business (ABB) runs on a separate Synology NAS and pulls backups nightly.
Pull-based: the credentials the Synology uses to reach the primary NAS are read-only, so a compromised Synology cannot modify or delete the primary backups.

Verifying Synology Backups

1

Access Synology ABB Dashboard

Login to Synology DSM → Active Backup for Business
2

Check Backup Status

Verify:
  • Last successful backup timestamp (should be within 24 hours)
  • Data transferred size
  • No errors in activity log
3

Test Restore (Quarterly)

Perform a test restore of a small directory to verify backup integrity:
  • Select backup task → Restore
  • Choose specific files or folders
  • Restore to test location
  • Verify contents match expected state

Tier 4: Cloud Backup (Backblaze B2)

Status: Future implementation
Target Data:
  • Immich photo library (/data/photos)
  • Critical backup metadata
  • Plex database backups
Tool: rclone with encryption
Schedule: Weekly, overnight during low-bandwidth periods
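When this tier lands, the rclone side might look like the sketch below. The remote names b2 and b2-crypt and the bucket name are placeholders; the crypt remote layers client-side encryption over the B2 bucket:

```shell
# One-time: define a B2 remote and an encrypted wrapper around it.
rclone config create b2 b2 account "$B2_KEY_ID" key "$B2_APP_KEY"
rclone config create b2-crypt crypt remote b2:homelab-backups \
  password "$(rclone obscure "$CRYPT_PASS")"

# Weekly job: push the Immich library, rate-limited for overnight runs.
rclone sync /data/photos b2-crypt:photos --bwlimit 10M --transfers 4
```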

Backup Monitoring

All automated backups report to Healthchecks.io for heartbeat monitoring.
Healthchecks Dashboard: https://healthchecks.io
Monitored Jobs:
  • docker-appdata-backup — Daily 04:00
  • plex-db-backup — Daily 03:00
  • pbs-backup-pve-prod-01 — Daily 01:00
  • pbs-backup-pve-prod-02 — Daily 01:30

Setting Up New Backup Monitoring

1

Create Healthcheck

Healthchecks.io dashboard → Add Check
  • Name: service-name-backup
  • Period: 1 day (or appropriate interval)
  • Grace: 1 hour
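Checks can also be created by script via the Healthchecks Management API. This sketch assumes a project API key in HC_API_KEY; verify the endpoint version against the API docs:

```shell
# timeout = expected period (1 day), grace = 1 hour, both in seconds.
curl -fsS https://healthchecks.io/api/v3/checks/ \
  -H "X-Api-Key: $HC_API_KEY" \
  -d '{"name": "service-name-backup", "timeout": 86400, "grace": 3600}'
```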
2

Copy Ping URL

Copy the unique ping URL, e.g., https://hc-ping.com/abc123...
3

Add to Backup Script

At the end of your backup script, on success:
# Ping Healthchecks on success
curl -fsS --retry 3 https://hc-ping.com/abc123... > /dev/null
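Healthchecks also accepts /start and /fail signals on the same ping URL, so a script can report run duration and explicit failures instead of just going silent. In this sketch, do_backup is a placeholder for your backup logic:

```shell
URL="https://hc-ping.com/abc123..."           # your check's ping URL

curl -fsS --retry 3 "$URL/start" > /dev/null   # marks run start (timing)
if do_backup; then
  curl -fsS --retry 3 "$URL" > /dev/null        # success ping
else
  curl -fsS --retry 3 "$URL/fail" > /dev/null   # explicit failure alert
fi
```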
4

Test

Run backup script manually and verify green checkmark appears in Healthchecks dashboard.

Disaster Recovery Scenarios

Scenario 1: Accidental File Deletion

Recovery Time: Seconds to minutes
Steps:
  1. Check ZFS snapshots first (Tier 2) — fastest recovery
  2. If older than snapshot retention, restore from rsync backup (Tier 1)

Scenario 2: Docker VM Corruption

Recovery Time: 30-60 minutes
Steps:
  1. Restore VM from PBS (Tier 0)
  2. Restore appdata from rsync backup (Tier 1)
  3. Restart containers

Scenario 3: NAS Failure

Recovery Time: 4-8 hours
Steps:
  1. Replace failed hardware
  2. Rebuild Unraid array
  3. Restore from Synology ABB (Tier 3)
  4. Rebuild VM/LXC from PBS (Tier 0 backups survive on separate VM)

Scenario 4: Complete Site Loss (Fire/Flood)

Recovery Time: Days to weeks
Steps:
  1. Restore photos and critical data from Backblaze B2 (Tier 4)
  2. Rebuild infrastructure from IaC repository
  3. Media library must be re-downloaded (acceptable loss)
  4. Plex database restored from B2 retains watch history and metadata
