Overview

Dokploy provides comprehensive backup capabilities for databases, volumes, and complete application stacks. Automate backups to external storage destinations and ensure business continuity with proven disaster recovery strategies.

Automated Backups

Schedule automatic backups with cron expressions

Multiple Destinations

S3-compatible storage, local, and remote servers

Easy Restore

Quick restoration from any backup point

Backup Types

Dokploy supports different backup strategies:

Database Backups
Automated database dumps:
  • PostgreSQL: pg_dump
  • MySQL/MariaDB: mysqldump
  • MongoDB: mongodump
  • Redis: BGSAVE and RDB snapshots
Recommended frequency: daily, or hourly for critical data

Configuring Database Backups

Via Dashboard

1. Navigate to Database

Go to your database (PostgreSQL, MySQL, MongoDB, MariaDB, or Redis).

2. Open Backup Settings

Click on the Backups tab.

3. Configure Schedule

enabled (boolean, required)
Enable automated backups

schedule (string, required)
Cron expression for backup frequency. Examples:
  • 0 2 * * * - Daily at 2 AM
  • 0 */6 * * * - Every 6 hours
  • 0 0 * * 0 - Weekly on Sunday

destinationId (string, required)
Select backup destination (S3, local, etc.)

retentionDays (number, default: 30)
Number of days to keep backups

4. Test Backup

Click Run Backup Now to test the configuration.

Via API

Create Backup Configuration
curl -X POST https://your-dokploy-instance.com/api/backup.create \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "databaseType": "postgres",
    "databaseId": "postgres-id",
    "schedule": "0 2 * * *",
    "destinationId": "s3-destination-id",
    "retentionDays": 30
  }'
Trigger Manual Backup
curl -X POST https://your-dokploy-instance.com/api/backup.run \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "backupId": "backup-id"
  }'

Backup Destinations

S3-Compatible Storage

Supports AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, and more:
1. Add Destination

Navigate to Settings → Backup Destinations → Add Destination.

2. Configure S3

name (string, required)
Friendly name for the destination

type (string, required)
Select "S3"

endpoint (string, required)
S3 endpoint URL (e.g., s3.amazonaws.com)

region (string, required)
AWS region (e.g., us-east-1)

bucket (string, required)
Bucket name

accessKeyId (string, required)
AWS Access Key ID

secretAccessKey (string, required)
AWS Secret Access Key

3. Test Connection

Click Test Connection to verify credentials.

Provider Examples

AWS S3:
{
  "endpoint": "https://s3.amazonaws.com",
  "region": "us-east-1",
  "bucket": "dokploy-backups",
  "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
  "secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

DigitalOcean Spaces:
{
  "endpoint": "https://nyc3.digitaloceanspaces.com",
  "region": "nyc3",
  "bucket": "my-backups",
  "accessKeyId": "DO00...",
  "secretAccessKey": "..."
}

MinIO:
{
  "endpoint": "https://minio.example.com",
  "region": "us-east-1",
  "bucket": "backups",
  "accessKeyId": "minioadmin",
  "secretAccessKey": "minioadmin"
}

Local Storage

Store backups on the Dokploy server:
type: local
path: /var/dokploy/backups
Local backups don’t protect against server failure. Always use remote storage for production.

Remote Server (SFTP)

Backup to a remote server via SFTP:
{
  "type": "sftp",
  "host": "backup-server.example.com",
  "port": 22,
  "username": "backup-user",
  "privateKey": "...",
  "path": "/backups/dokploy"
}

Database-Specific Backup Methods

PostgreSQL

Manual Backup
# Full database dump
docker exec postgres pg_dump -U postgres database_name > backup.sql

# Compressed backup
docker exec postgres pg_dump -U postgres -Fc database_name > backup.dump

# All databases
docker exec postgres pg_dumpall -U postgres > all_databases.sql

Restore
# From SQL dump
docker exec -i postgres psql -U postgres database_name < backup.sql

# From compressed dump (pg_restore reads the archive from stdin,
# since backup.dump lives on the host, not inside the container)
docker exec -i postgres pg_restore -U postgres -d database_name < backup.dump

MySQL/MariaDB

Manual Backup
# Single database
docker exec mysql mysqldump -u root -p database_name > backup.sql

# All databases
docker exec mysql mysqldump -u root -p --all-databases > all_databases.sql

# With compression
docker exec mysql mysqldump -u root -p database_name | gzip > backup.sql.gz

Restore
# From SQL dump
docker exec -i mysql mysql -u root -p database_name < backup.sql

# From compressed dump
gunzip < backup.sql.gz | docker exec -i mysql mysql -u root -p database_name
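A quick sanity check before trusting a dump file: mysqldump appends a "-- Dump completed" comment as the last line of a successful dump, so its absence usually means the backup was cut short. A minimal sketch, using a stand-in file in place of a real dump:

```shell
cd "$(mktemp -d)"
# Stand-in for a real dump; mysqldump ends successful dumps with this marker
printf -- '-- MySQL dump\n-- Dump completed on 2024-05-01\n' > backup.sql

# Fail loudly if the marker is missing (the dump may be truncated)
if tail -1 backup.sql | grep -q '^-- Dump completed'; then
  echo "dump looks complete"
else
  echo "dump may be truncated" >&2
fi
```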

MongoDB

Manual Backup
# Dump database
docker exec mongo mongodump --username root --password pass --authenticationDatabase admin --out /backup

# Compressed archive
docker exec mongo mongodump --username root --password pass --authenticationDatabase admin --archive=/backup/backup.archive --gzip

Restore
# Restore from dump
docker exec mongo mongorestore --username root --password pass --authenticationDatabase admin /backup

# From archive
docker exec mongo mongorestore --username root --password pass --authenticationDatabase admin --archive=/backup/backup.archive --gzip

Redis

Manual Backup
# Trigger background save
docker exec redis redis-cli BGSAVE

# Copy RDB file
docker cp redis:/data/dump.rdb ./redis_backup.rdb

Restore
# Stop Redis
docker stop redis

# Replace RDB file
docker cp redis_backup.rdb redis:/data/dump.rdb

# Start Redis
docker start redis

Volume Backups

Backup Docker Volumes

Manual Volume Backup
# Create volume backup
docker run --rm \
  -v volume_name:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/volume_backup.tar.gz -C /source .
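Restoring is the reverse operation: mount the target volume writable and extract the archive into it (e.g. `docker run --rm -v volume_name:/target -v $(pwd):/backup alpine tar xzf /backup/volume_backup.tar.gz -C /target`). The tar round trip itself can be sanity-checked locally without Docker, using two temporary directories in place of volumes:

```shell
SRC=$(mktemp -d)   # stands in for the source volume
DEST=$(mktemp -d)  # stands in for the restored volume
echo "app data" > "$SRC/data.txt"

# Backup: same tar invocation the container runs
tar czf /tmp/volume_backup.tar.gz -C "$SRC" .

# Restore: extract the archive into the target
tar xzf /tmp/volume_backup.tar.gz -C "$DEST"
cat "$DEST/data.txt"
```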

Via Dokploy API

Create Volume Backup
curl -X POST https://your-dokploy-instance.com/api/volumeBackups.create \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "volumeName": "app_data",
    "destinationId": "s3-destination-id",
    "schedule": "0 3 * * *"
  }'

Backup Retention Policies

Configure how long backups are kept:
Keep backups for a specific duration:
{
  "retentionDays": 30,
  "retentionWeeks": 12,
  "retentionMonths": 12
}
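Dokploy applies retention automatically, but for a local destination the effect of retentionDays is equivalent to pruning files by age. A minimal sketch of that semantics (directory and file names are stand-ins), demonstrated on a temporary directory:

```shell
BACKUP_DIR=$(mktemp -d)   # stand-in for a local backup directory
touch -d "40 days ago" "$BACKUP_DIR/db_old.sql.gz"
touch "$BACKUP_DIR/db_new.sql.gz"

# Delete backups older than 30 days (retentionDays: 30)
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +30 -delete

ls "$BACKUP_DIR"   # only db_new.sql.gz remains
```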

Restoring from Backups

Database Restore

1

Access Backup

Download backup from storage destination
2

Stop Application

Temporarily stop applications using the database
3

Restore Data

Use database-specific restore commands (see above)
4

Verify

Check data integrity and application functionality
5

Resume

Start applications

Via Dokploy Dashboard

  1. Navigate to Database → Backups
  2. Find the backup to restore
  3. Click Restore
  4. Confirm the action
  5. Monitor restoration progress

Disaster Recovery

Complete System Recovery

1

Reinstall Dokploy

Install Dokploy on a new server:
curl -sSL https://dokploy.com/install.sh | sh
2

Restore Configuration

Import your projects, applications, and environment variables
3

Restore Databases

Restore database backups from external storage
4

Restore Volumes

Restore volume backups for application data
5

Deploy Applications

Redeploy all applications
6

Update DNS

Point domains to the new server

Recovery Time Objective (RTO)

Plan for acceptable downtime:
  • Critical systems: < 1 hour
  • Production systems: < 4 hours
  • Development systems: < 24 hours

Recovery Point Objective (RPO)

Acceptable data loss:
  • Financial data: < 1 hour (hourly backups)
  • User data: < 24 hours (daily backups)
  • Logs/metrics: < 1 week (weekly backups)

Backup Best Practices

3-2-1 Rule
  • 3 copies of your data
  • 2 different storage types (e.g., local + cloud)
  • 1 off-site backup

Test Restores
  • Schedule quarterly restore tests
  • Verify backup integrity
  • Document restore procedures
  • Time the restore process

Encrypt Backups
# Encrypt before upload
gpg --encrypt --recipient backup@example.com backup.sql

# Decrypt for restore
gpg --decrypt backup.sql.gpg > backup.sql
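To make "verify backup integrity" concrete: record a checksum next to each backup at creation time, and verify it before restoring. A minimal sketch with a stand-in file:

```shell
cd "$(mktemp -d)"
echo "demo dump contents" > backup.sql   # stand-in for a real dump

# At backup time: record a checksum next to the file
sha256sum backup.sql > backup.sql.sha256

# Before restoring: verify the file is intact
sha256sum -c backup.sql.sha256   # prints "backup.sql: OK"
```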
Monitor
  • Set up alerts for failed backups
  • Review backup logs weekly
  • Track backup size growth
  • Monitor storage costs

Document
Maintain documentation for:
  • Backup schedules
  • Storage credentials
  • Restore procedures
  • Contact information
  • RTO/RPO targets

Monitoring Backups

Track backup health:
Get Backup Status
curl https://your-dokploy-instance.com/api/backup.all \
  -H "Authorization: Bearer YOUR_API_KEY"
Response includes:
  • Last backup time
  • Backup size
  • Success/failure status
  • Next scheduled backup
  • Storage destination
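The status field can drive a simple failed-backup alert. The exact response shape is not documented here, so the field names below are assumptions for illustration:

```shell
cd "$(mktemp -d)"
# Stand-in for a saved backup.all response (field names are assumptions)
cat > backups.json <<'EOF'
[
  {"backupId": "db-nightly", "status": "done"},
  {"backupId": "vol-weekly", "status": "error"}
]
EOF

# Alert if any backup reports an error status
if grep -q '"status": "error"' backups.json; then
  echo "ALERT: at least one backup failed"
fi
```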

Troubleshooting

Backup fails
  • Verify storage credentials
  • Check S3 bucket permissions
  • Ensure IAM policy allows PutObject
  • Test credentials manually with AWS CLI

Backups are slow
  • Consider incremental backups
  • Compress before upload
  • Use a faster storage destination
  • Schedule during low-traffic hours
  • Check database size and optimize

Storage usage growing
  • Adjust retention policies
  • Enable compression
  • Archive old backups to cheaper storage
  • Review backup frequency

Restore fails
  • Verify backup file integrity
  • Check database version compatibility
  • Ensure sufficient disk space
  • Review restore logs for errors

Next Steps

Database Backups

Detailed database backup configuration

Volumes & Storage

Manage persistent volumes

Monitoring

Monitor backup job status

Security

Secure your backups