Overview
Duckling provides automated local backups and optional S3 cloud backups with encryption. All backup features are enabled by default for zero-manual-intervention operation.

Automatic Local Backups
Default Behavior
Local backups run automatically:

- Backup Interval: Every 24 hours (`BACKUP_INTERVAL_HOURS=24`)
- Auto-Backup: Enabled by default (`AUTO_BACKUP=true`)
- Retention: 7 days (`BACKUP_RETENTION_DAYS=7`)
- Location: `data/backups/`
Backups are skipped during server startup to speed up initialization. The first backup runs after 24 hours.
What Gets Backed Up
Each backup includes:

- DuckDB database file (`.db` file)
- Metadata directory (configuration and state)
Configuration
| Variable | Default | Description |
|---|---|---|
| `AUTO_BACKUP` | `true` | Enable automatic backups |
| `BACKUP_INTERVAL_HOURS` | `24` | Hours between backups |
| `BACKUP_RETENTION_DAYS` | `7` | Days to retain backups |
| `BACKUP_PATH` | `data/backups` | Backup directory |
Disabling Automatic Backups
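If you prefer manual-only backups, automatic local backups can be turned off with the environment variable from the table above, for example:

```shell
# In your environment or .env file
AUTO_BACKUP=false
```

Manual backups (below) remain available when automatic backups are disabled.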
Manual Backup Operations
Trigger Manual Backup
Manual backups also upload to S3 if S3 is configured and enabled for the database.
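A request of roughly the following shape would trigger a manual backup — the host, port, endpoint path, and database id `mydb` are illustrative assumptions, not documented here:

```shell
# Hypothetical endpoint; adjust to your deployment
curl -X POST http://localhost:3000/api/databases/mydb/backup
```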
Restore from Local Backup
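A restore request might look like the following sketch — the endpoint path, database id, and backup file name are illustrative assumptions:

```shell
# Hypothetical endpoint; adjust to your deployment
curl -X POST http://localhost:3000/api/databases/mydb/restore \
  -H 'Content-Type: application/json' \
  -d '{"backupFile": "mydb-backup.db"}'
```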
S3 Cloud Backups
Why S3 Backups?
Production DuckDB replicas can reach 200 GB+, sometimes 500 GB+. S3 backups enable:

- Fast disaster recovery (download a pre-built `.db` file)
- Off-site storage (protection from server failures)
- Long-term retention (independent of local disk space)
- Encrypted storage (client-side or server-side encryption)
Restoring a 200 GB database from S3 takes minutes, compared to hours for a full MySQL resync.
S3 Configuration
S3 configuration is per-database and stored in `databases.json`.
Configure via API:
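A configuration request could take roughly this shape — the endpoint path, database id, and all credential values are illustrative assumptions; the field names follow the schema below:

```shell
# Hypothetical endpoint; field names from the S3 configuration schema
curl -X PUT http://localhost:3000/api/databases/mydb/s3-config \
  -H 'Content-Type: application/json' \
  -d '{
    "enabled": true,
    "bucket": "my-backups",
    "region": "us-east-1",
    "accessKeyId": "AKIAEXAMPLE",
    "secretAccessKey": "example-secret",
    "encryption": "sse-s3",
    "s3BackupIntervalHours": 24,
    "s3BackupRetentionDays": 30
  }'
```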
S3 Configuration Schema
| Field | Type | Required | Description |
|---|---|---|---|
| `enabled` | boolean | Yes | Enable S3 backups |
| `bucket` | string | Yes | S3 bucket name |
| `region` | string | Yes | AWS region (e.g., `us-east-1`) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key (masked in API responses) |
| `endpoint` | string | No | Custom endpoint for S3-compatible providers |
| `forcePathStyle` | boolean | No | Use path-style URLs (for MinIO, etc.) |
| `pathPrefix` | string | No | S3 key prefix (defaults to `{database_id}/`) |
| `encryption` | string | No | Encryption mode: `none`, `sse-s3`, `sse-kms`, `client-aes256` |
| `kmsKeyId` | string | No | KMS key ARN (for `sse-kms` mode) |
| `encryptionKey` | string | No | 64-char hex key (for `client-aes256` mode) |
| `s3BackupIntervalHours` | number | No | Hours between S3 backups (independent schedule) |
| `s3BackupRetentionDays` | number | No | Days to retain S3 backups (auto-cleanup) |
S3-Compatible Providers
Duckling supports S3-compatible storage providers:

| Provider | `endpoint` | `forcePathStyle` |
|---|---|---|
| AWS S3 | (leave blank) | false |
| Cloudflare R2 | https://<account_id>.r2.cloudflarestorage.com | false |
| Backblaze B2 | https://s3.<region>.backblazeb2.com | false |
| DigitalOcean Spaces | https://<region>.digitaloceanspaces.com | false |
| MinIO (self-hosted) | https://minio.internal:9000 | true |
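For example, a MinIO configuration might combine the schema fields like this (all values illustrative):

```json
{
  "enabled": true,
  "bucket": "duckling-backups",
  "region": "us-east-1",
  "accessKeyId": "minioadmin",
  "secretAccessKey": "minioadmin",
  "endpoint": "https://minio.internal:9000",
  "forcePathStyle": true
}
```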
Encryption Options
Choose encryption based on your threat model:

| Mode | Key Storage | Protects Against | Overhead |
|---|---|---|---|
| `none` | — | Nothing | Zero |
| `sse-s3` | AWS managed | Physical media theft | Zero |
| `sse-kms` | AWS KMS | Physical media theft + audit trail | ~1 ms/request |
| `client-aes256` | Your server (`databases.json`) | Compromised AWS credentials, bucket misconfiguration | Streaming (no memory spike) |
Use `client-aes256` for production databases with sensitive data.
Generate Encryption Key
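A 64-character hex key (32 random bytes) for `client-aes256` can be generated with OpenSSL:

```shell
# 32 random bytes, hex-encoded -> 64 characters
openssl rand -hex 32
```

Store the key in the database's `encryptionKey` field; losing it makes client-encrypted backups unrecoverable.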
Client-Side Encryption Format
Backups use AES-256-CTR encryption:

- A companion `.mac` file stores `HMAC-SHA256(key || IV || ciphertext)`
- Verified on restore to detect tampering
- Automatically managed (filtered from backup lists)
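The encrypt-then-MAC layout can be sketched with OpenSSL — file names and the exact MAC input ordering are assumptions for illustration, not Duckling's actual on-disk format:

```shell
# Sketch only: encrypt a file with AES-256-CTR, then write a companion MAC.
KEY=$(openssl rand -hex 32)   # 32-byte AES-256 key (hex)
IV=$(openssl rand -hex 16)    # 16-byte CTR nonce (hex)

printf 'example backup contents' > backup.db
openssl enc -aes-256-ctr -K "$KEY" -iv "$IV" -in backup.db -out backup.db.enc

# Companion MAC: HMAC-SHA256 keyed with KEY over IV || ciphertext (assumed layout)
{ printf '%s' "$IV"; cat backup.db.enc; } |
  openssl dgst -sha256 -mac HMAC -macopt hexkey:"$KEY" -r |
  cut -d' ' -f1 > backup.db.mac
```

On restore, recomputing the HMAC and comparing it to the stored `.mac` value detects tampering before decryption.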
S3 Backup Operations
Automatic S3 Backups
When S3 is enabled:

- Dual Schedule: S3 backups can run on an independent schedule from local backups
- After Local Backup: S3 upload also triggers after each local backup
- Auto-Cleanup: Old S3 backups are deleted based on the retention policy
Manual S3 Backup
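A manual S3 backup could be triggered with a request like the following (endpoint path and database id are illustrative assumptions):

```shell
curl -X POST http://localhost:3000/api/databases/mydb/s3-backup
```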
List S3 Backups
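Listing existing S3 backups might look like this (endpoint path is an illustrative assumption):

```shell
curl http://localhost:3000/api/databases/mydb/s3-backups
```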
Restore from S3
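An S3 restore request could take roughly this shape — the endpoint path and S3 key are illustrative assumptions:

```shell
curl -X POST http://localhost:3000/api/databases/mydb/s3-restore \
  -H 'Content-Type: application/json' \
  -d '{"key": "mydb/mydb-backup.db"}'
```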
Test S3 Connection
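Connectivity and credentials can be verified before enabling backups with a request like (endpoint path is an illustrative assumption):

```shell
curl -X POST http://localhost:3000/api/databases/mydb/s3-test
```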
Backup Strategy
Recommended Configuration
Local Backups:

- Interval: 24 hours
- Retention: 7 days
- Purpose: Fast local recovery

S3 Backups:

- Interval: 24 hours (or 12 hours for critical databases)
- Retention: 30-90 days
- Encryption: `client-aes256`
- Purpose: Disaster recovery, long-term retention
3-2-1 Backup Rule
Follow the 3-2-1 rule:

- 3 copies: Production database + local backup + S3 backup
- 2 media types: Local disk + cloud storage
- 1 off-site: S3 in a different region
For mission-critical databases, configure S3 with cross-region replication or use multiple S3 buckets in different regions.
Performance Considerations
Backup Performance
Local Backup:

- Uses `CHECKPOINT` to ensure a consistent snapshot
- File copy operation (very fast)
- Minimal impact on server performance

S3 Backup:

- Multipart upload (100 MB parts)
- Streaming encryption (no memory spike)
- Runs in background (non-blocking)

Typical backup times:

- 10 GB database: ~30 seconds local, ~2 minutes S3
- 100 GB database: ~5 minutes local, ~20 minutes S3
- 500 GB database: ~25 minutes local, ~100 minutes S3
Restore Performance
Local Restore:

- File copy operation
- Very fast (seconds to minutes)

S3 Restore:

- Download time depends on bandwidth
- Streaming decryption
- Requires temporary disk space

Typical S3 restore times:

- 10 GB database: ~1 minute
- 100 GB database: ~10 minutes
- 500 GB database: ~50 minutes (1 Gbps network)
S3 restore is still 10-20x faster than full MySQL resync for large databases.
Multi-Database Backups
Each database has an independent backup configuration. Backup schedules are staggered across databases to prevent resource contention, so each database backs up at a slightly different time.
Disaster Recovery
Recovery Scenarios
Scenario 1: Corrupted Database File
Scenario 2: Complete Server Loss
Scenario 3: Data Corruption in MySQL
Troubleshooting
Backup Failed
Check the automation logs. Common causes:

- Insufficient disk space
- Backup directory permissions
- Sync in progress (backups are skipped while a sync is running)
S3 Upload Failed
Test the S3 connection. Common causes:

- Invalid AWS credentials
- Incorrect bucket name or region
- Network connectivity issues
- Insufficient S3 permissions (requires `s3:PutObject`, `s3:GetObject`, `s3:ListBucket`)
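An IAM policy granting the minimum permissions listed above might look like this (bucket name illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-backups/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-backups"
    }
  ]
}
```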
Restore Failed
Check the error message in the API response. Common issues:

- Insufficient disk space for temporary files
- Invalid encryption key
- Corrupted backup file
- HMAC verification failed (backup tampered with, or encryption key mismatch)
Next Steps
- Synchronization - Configure sync operations
- Monitoring - Set up health checks and metrics
- Performance Tuning - Optimize backup performance