Overview
Persistent storage is critical for stateful applications and databases. Dokploy uses Docker volumes to provide reliable, performant storage that persists across container restarts and redeployments.
- Data Persistence: data survives container restarts and updates
- Performance: native filesystem performance
- Portability: easy backup, restore, and migration
Volume Types
Docker-managed volumes:

```yaml
volumes:
  app_data:
    driver: local

services:
  app:
    volumes:
      - app_data:/app/data
```
Benefits:

- Managed by Docker
- Automatic creation
- Easy to backup
- Independent lifecycle
Direct host filesystem mounts:

```yaml
services:
  app:
    volumes:
      - ./data:/app/data
      - /host/path:/container/path
```
Use cases:

- Development (live code reload)
- Existing host data
- Configuration files
In-memory storage:

```yaml
services:
  app:
    tmpfs:
      - /app/cache
```
Use cases:

- Temporary cache
- Sensitive data (not persisted)
- High I/O workloads
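To confirm that a tmpfs mount lives in memory rather than on disk, you can start a throwaway container with an explicit size limit (the path and 64 MB cap here are arbitrary examples):

```bash
# Run a one-off container with a size-capped in-memory mount at /app/cache;
# df reports the mount's filesystem type as tmpfs
docker run --rm --tmpfs /app/cache:size=64m,mode=1777 alpine df -h /app/cache
```

Anything written under `/app/cache` disappears when the container stops.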
Network storage:

```yaml
volumes:
  nfs_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=nfs-server.example.com,rw
      device: ":/path/to/share"
```
Use cases:

- Shared storage across nodes
- Centralized backups
- Legacy infrastructure
Creating and Managing Volumes
Via Docker Compose
```yaml
version: "3.8"

services:
  postgres:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}

  app:
    image: myapp:latest
    volumes:
      - app_uploads:/app/uploads
      - app_cache:/app/cache

volumes:
  postgres_data:
    driver: local
  app_uploads:
    driver: local
  app_cache:
    driver: local
```
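After deploying a stack like this, you can check that Docker actually created the named volumes. Note that Compose prefixes volume names with the project name; `myproject` below is a hypothetical example:

```bash
# Deploy the stack
docker compose up -d

# List volumes matching a name fragment; Compose prefixes them with the project name
docker volume ls --filter name=postgres_data

# Show where a volume's data lives on the host
docker volume inspect -f '{{ .Mountpoint }}' myproject_postgres_data
```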
Via Docker CLI
```bash
# Create a volume
docker volume create my_volume

# List volumes
docker volume ls

# Inspect a volume
docker volume inspect my_volume

# Remove a volume
docker volume rm my_volume

# Remove unused volumes
docker volume prune
```
Via Dokploy API
```bash
curl -X POST https://your-dokploy-instance.com/api/mounts.create \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "applicationId": "app-id",
    "type": "volume",
    "volumeName": "app_data",
    "mountPath": "/app/data"
  }'
```
Volume Configuration Options
Volume Labels
Organize volumes with labels:
```yaml
volumes:
  app_data:
    driver: local
    labels:
      com.dokploy.project: myproject
      com.dokploy.environment: production
      com.dokploy.backup: enabled
```
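Labels pay off when querying: `docker volume ls` can filter on them. The label keys below match the example above:

```bash
# List only volumes labeled for backup
docker volume ls --filter label=com.dokploy.backup=enabled

# List volumes belonging to a specific project
docker volume ls --filter label=com.dokploy.project=myproject
```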
Volume Driver Options
Customize volume behavior:
```yaml
volumes:
  app_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/data/app
```
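The same bind-backed named volume can be created from the CLI. With `type: none` and `o: bind`, the host directory must already exist before the volume is first mounted, so create it up front:

```bash
# Pre-create the host directory the volume will bind to
sudo mkdir -p /mnt/data/app

# Create a named volume backed by that directory
docker volume create \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/data/app \
  app_data
```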
Read-Only Volumes
Mount volumes as read-only:
```yaml
services:
  app:
    volumes:
      - app_config:/app/config:ro
```
Database Storage
Each database type has specific volume requirements:
PostgreSQL
```yaml
services:
  postgres:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata

volumes:
  postgres_data:
    driver: local
```
MySQL/MariaDB
```yaml
services:
  mysql:
    image: mysql:8
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}

volumes:
  mysql_data:
    driver: local
```
MongoDB
```yaml
services:
  mongo:
    image: mongo:7
    volumes:
      - mongo_data:/data/db
      - mongo_config:/data/configdb
    environment:
      # The mongo image requires the username when a root password is set
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=${DB_PASSWORD}

volumes:
  mongo_data:
    driver: local
  mongo_config:
    driver: local
```
Redis
```yaml
services:
  redis:
    image: redis:7
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes

volumes:
  redis_data:
    driver: local
```
Volume Backup and Restore
Backup a Volume
```bash
# Back up a volume to a tar archive
docker run --rm \
  -v volume_name:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/volume_backup.tar.gz -C /source .
```
Restore a Volume
```bash
# Restore a volume from a tar archive
docker run --rm \
  -v volume_name:/target \
  -v $(pwd):/backup \
  alpine sh -c "cd /target && tar xzf /backup/volume_backup.tar.gz"
```
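Before restoring, stop any container that uses the volume and check that the archive is readable; a quick sketch, using the volume and archive names from the examples above:

```bash
# Stop containers using the volume, so nothing writes during extraction
docker ps --filter volume=volume_name -q | xargs -r docker stop

# Verify the archive is intact by listing its contents without extracting
tar tzf volume_backup.tar.gz > /dev/null && echo "archive OK"
```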
Automated Backups via Dokploy
Configure automated volume backups:
```bash
curl -X POST https://your-dokploy-instance.com/api/volumeBackups.create \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "volumeName": "app_data",
    "schedule": "0 2 * * *",
    "destinationId": "s3-destination-id",
    "retentionDays": 30
  }'
```

The `schedule` field is a standard cron expression; `0 2 * * *` runs the backup daily at 02:00.
Volume Migration
Between Servers
Backup on Source Server

```bash
docker run --rm \
  -v postgres_data:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/postgres_backup.tar.gz -C /source .
```

Transfer to Target Server

```bash
scp postgres_backup.tar.gz user@target-server:/tmp/
```

Create Volume on Target

```bash
docker volume create postgres_data
```

Restore on Target Server

```bash
docker run --rm \
  -v postgres_data:/target \
  -v /tmp:/backup \
  alpine sh -c "cd /target && tar xzf /backup/postgres_backup.tar.gz"
```
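For large datasets, the steps above can be collapsed into a single SSH pipeline that streams the archive straight to the target, skipping the intermediate file. The hostname is a placeholder; Docker creates the target volume automatically the first time it is mounted:

```bash
# Stream the volume from source to target in one pass, no temp file on either side.
# Stop the database first so the snapshot is consistent.
docker run --rm -v postgres_data:/source:ro alpine tar czf - -C /source . |
  ssh user@target-server \
    "docker run --rm -i -v postgres_data:/target alpine tar xzf - -C /target"
```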
Between Volume Drivers
Migrate from local to NFS:
```bash
# Create the new NFS volume
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=nfs-server,rw \
  --opt device=:/path/to/share \
  nfs_volume

# Copy data from the old volume
docker run --rm \
  -v old_volume:/source:ro \
  -v nfs_volume:/target \
  alpine cp -a /source/. /target/
```
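A simple sanity check after copying is to compare file counts between the two volumes; a rough sketch (for stronger guarantees, compare checksums instead):

```bash
# File counts in both volumes should match after the copy
docker run --rm -v old_volume:/source:ro alpine sh -c "find /source -type f | wc -l"
docker run --rm -v nfs_volume:/target:ro alpine sh -c "find /target -type f | wc -l"
```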
Storage Drivers
Docker supports multiple storage drivers:
overlay2 (Default)

Modern, performant storage driver:

- Best performance
- Production-ready
- Supports Linux 4.0+
- Copy-on-write

Recommended for most use cases.

btrfs

B-tree filesystem:

- Native copy-on-write
- Snapshots
- Compression
- Requires a btrfs filesystem

zfs

ZFS filesystem:

- Data integrity
- Compression
- Snapshots
- Requires the ZFS kernel module

devicemapper

Legacy driver:

- Block-level storage
- Not recommended for new deployments
- Use overlay2 instead
Check Current Driver
```bash
docker info | grep "Storage Driver"
```
Use Named Volumes Instead of Bind Mounts
Named volumes are faster and more portable than bind mounts, especially on macOS and Windows, where bind mounts must cross the Docker Desktop VM boundary:

```yaml
# Fast: named volume
volumes:
  - app_data:/app/data

# Slower (on macOS/Windows): bind mount
volumes:
  - ./data:/app/data
```
Enable Volume Caching (macOS/Windows)
Improve bind mount performance with consistency hints:

```yaml
volumes:
  - ./code:/app:cached      # Host authoritative
  - ./logs:/logs:delegated  # Container authoritative
```

Recent Docker Desktop releases accept these flags but treat them as no-ops; they remain harmless to keep for compatibility.
Use tmpfs for Temporary Data
Store temporary data in memory:

```yaml
services:
  app:
    tmpfs:
      - /app/cache:size=1G,mode=1777
```
Optimize Database Volume Configuration
PostgreSQL example:

```yaml
services:
  postgres:
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      # Use a specific subdirectory for better performance
      - PGDATA=/var/lib/postgresql/data/pgdata
```
Storage Monitoring
Check Volume Sizes
```bash
# List volume sizes
docker system df -v

# Inspect a specific volume
docker volume inspect volume_name
```
Monitor Disk Usage
```bash
# Container disk usage
docker ps -s

# Volume usage on the host
df -h /var/lib/docker/volumes/
```
Cleanup Unused Volumes
```bash
# Remove unused volumes
docker volume prune

# Remove all unused data, including volumes
docker system prune -a --volumes
```

Warning: `docker system prune -a --volumes` removes ALL unused images, containers, networks, and volumes. Use with caution!
Multi-Node Storage
For multi-node deployments, use shared storage:
NFS Configuration
```yaml
volumes:
  shared_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=nfs-server.example.com,rw,nfsvers=4
      device: ":/exports/shared"

services:
  app:
    volumes:
      - shared_data:/app/data
    deploy:
      replicas: 3 # All replicas share the same data
```
Volume Placement
Pin volumes to specific nodes:
```yaml
services:
  postgres:
    volumes:
      - postgres_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.labels.storage == postgres
```
Best Practices
- Use Named Volumes: prefer named volumes over bind mounts for portability
- Backup Regularly: automate volume backups to external storage
- Monitor Disk Space: set up alerts for disk usage thresholds
- Document Mount Points: maintain documentation of all volume mounts
Troubleshooting
Permission errors

Fix ownership issues (replace `1000:1000` with the UID:GID your container runs as):

```bash
docker run --rm \
  -v volume_name:/data \
  alpine chown -R 1000:1000 /data
```
Volume full / out of space

- Check disk usage: `df -h`
- Remove unused volumes: `docker volume prune`
- Increase disk space
- Configure volume size limits
Volume data not persisting

- Verify the volume is defined in the compose file
- Check that the mount path is correct
- Ensure the volume isn't being recreated on each deploy
- Inspect the volume: `docker volume inspect volume_name`
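To check whether a deploy is silently recreating a volume, compare its creation timestamp across deploys (`volume_name` is a placeholder):

```bash
# If CreatedAt changes after each deploy, the volume is being recreated
docker volume inspect -f '{{ .CreatedAt }}' volume_name
```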
NFS volume not mounting

- Verify the NFS server is accessible
- Check firewall rules
- Test the mount manually: `mount -t nfs server:/path /mnt`
- Review the NFS server exports: `/etc/exports`
Next Steps
- Backups: configure automated volume backups
- Databases: database-specific storage configuration
- Multi-Node: shared storage in multi-node deployments
- Monitoring: monitor storage usage and health