Overview

Docker deployment provides a production-ready PhotoFlow setup with:
  • Containerized services - Application and database isolated in containers
  • Automated setup - Database migrations run automatically on startup
  • Persistent data - Volumes ensure data survives container restarts
  • Easy updates - Rebuild and restart to deploy a new version
  • Consistent environment - Same setup across dev, staging, and production
Docker is the recommended deployment method for production use.

Prerequisites

Before deploying:
  • Docker Engine 20.10+ installed (installation guide)
  • Docker Compose 2.0+ installed
  • Basic understanding of Docker concepts
  • Access to the server where you’ll deploy

Quick Start

Get PhotoFlow running with Docker in minutes:

Step 1: Clone Repository

git clone https://github.com/DomenicWalther/PhotoFlow.git
cd PhotoFlow

Step 2: Review Configuration

Check docker-compose.yml settings. The defaults work for most cases.

Step 3: Start Services

docker compose up -d
This builds and starts both containers in the background.

Step 4: Verify Deployment

docker compose ps
Both containers should show status “Up”.

Step 5: Access PhotoFlow

Open your browser to:
http://localhost:3000

Understanding the Setup

PhotoFlow’s Docker deployment uses two services:

PostgreSQL Container

Handles all data storage:
postgres:
  image: postgres:15-alpine
  container_name: photoflow_postgres
  environment:
    POSTGRES_USER: photoflow
    POSTGRES_PASSWORD: photoflow_password
    POSTGRES_DB: photoflow
  ports:
    - "5432:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U photoflow -d photoflow"]
    interval: 5s
    timeout: 5s
    retries: 5
Key features:
  • Uses Alpine Linux for smaller image size
  • Health check ensures database is ready before app starts
  • Named volume persists data across container restarts
  • Port 5432 exposed for backup/management tools

Application Container

Runs the PhotoFlow application:
app:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: photoflow_app
  ports:
    - "3000:3000"
  environment:
    - DATABASE_URL=postgresql://photoflow:photoflow_password@postgres:5432/photoflow
    - NODE_ENV=production
    - VITE_SOCKET_URL=http://localhost:3000
  depends_on:
    postgres:
      condition: service_healthy
  restart: unless-stopped
Key features:
  • Waits for PostgreSQL health check before starting
  • Restarts automatically if it crashes
  • Uses service name “postgres” to connect to database
  • Port 3000 mapped to host
See Docker Deployment for detailed implementation information.

Production Configuration

For production use, customize the configuration:

Change Default Passwords

ALWAYS change default passwords in production!
Create a .env file:
.env
POSTGRES_USER=photoflow
POSTGRES_PASSWORD=your_very_secure_password_here
POSTGRES_DB=photoflow
VITE_SOCKET_URL=http://your-server-ip:3000
Update docker-compose.yml to use these variables:
services:
  postgres:
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}

  app:
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - NODE_ENV=production
      - VITE_SOCKET_URL=${VITE_SOCKET_URL}
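
The .env file can also be generated with a random password instead of typed by hand. A minimal sketch (assumes openssl is available; the ENV_FILE variable and the character stripping are illustrative choices, not part of PhotoFlow):

```shell
#!/bin/sh
# Generate a .env with a random PostgreSQL password.
# Characters that complicate the DATABASE_URL (/ + =) are stripped.
ENV_FILE="${ENV_FILE:-.env}"
PASS=$(openssl rand -base64 24 | tr -d '/+=')
cat > "$ENV_FILE" <<EOF
POSTGRES_USER=photoflow
POSTGRES_PASSWORD=$PASS
POSTGRES_DB=photoflow
VITE_SOCKET_URL=http://localhost:3000
EOF
echo "wrote $ENV_FILE"
```

Run it once before docker compose up -d, and adjust VITE_SOCKET_URL as needed for network access.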

Configure Network Access

For multi-PC access:
  1. Find server IP:
ip addr show  # Linux
ipconfig      # Windows
  2. Update VITE_SOCKET_URL in .env:
VITE_SOCKET_URL=http://192.168.1.100:3000
  3. Rebuild containers:
docker compose down
docker compose up -d --build

Set Resource Limits

Prevent containers from consuming all system resources:
services:
  postgres:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M

  app:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '1.0'
          memory: 1G

Managing the Deployment

View Logs

Monitor application output:
# All services
docker compose logs -f

# Just the app
docker compose logs -f app

# Just the database
docker compose logs -f postgres

# Last 50 lines
docker compose logs --tail=50 app

Restart Services

# Restart all
docker compose restart

# Restart just the app
docker compose restart app

Stop Services

# Stop containers (keeps data)
docker compose stop

# Stop and remove containers (keeps data in volumes)
docker compose down

# Stop and DELETE ALL DATA
docker compose down -v  # Careful!

Update PhotoFlow

To deploy a new version:
# Pull latest code
git pull origin main

# Rebuild and restart
docker compose up -d --build
The build process:
  1. Pulls updated code
  2. Rebuilds the application image
  3. Restarts the container
  4. Runs database migrations automatically
  5. Starts the application
Database data persists through updates via named volumes.

Database Management

Backup Database

Create backups regularly:
# Create backup (-T disables TTY allocation so the dump isn't corrupted)
docker compose exec -T postgres pg_dump -U photoflow photoflow > backup_$(date +%Y%m%d).sql

# Compressed backup
docker compose exec -T postgres pg_dump -U photoflow photoflow | gzip > backup_$(date +%Y%m%d).sql.gz
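
Backups accumulate quickly, so it helps to prune old ones. A retention sketch (the function name and the backup_*.sql.gz file pattern are assumptions matching the commands above):

```shell
#!/bin/sh
# prune_backups DIR KEEP - delete all but the KEEP newest backup_*.sql.gz files in DIR
prune_backups() {
    dir=$1
    keep=$2
    # List newest first, skip the first $keep entries, remove the rest
    ls -1t "$dir"/backup_*.sql.gz 2>/dev/null | tail -n +$((keep + 1)) | xargs -r rm --
}

# Example: keep the seven newest backups
# prune_backups ./backups 7
```

Calling this after each dump keeps disk usage bounded.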

Restore Database

# From SQL file
cat backup_20260301.sql | docker compose exec -T postgres psql -U photoflow photoflow

# From compressed file
gunzip -c backup_20260301.sql.gz | docker compose exec -T postgres psql -U photoflow photoflow

Access Database Shell

For direct database access:
docker compose exec postgres psql -U photoflow -d photoflow

View Database Logs

docker compose logs postgres | tail -100

Monitoring

Check Container Status

# Brief status
docker compose ps

# Detailed info
docker compose ps -a

# Resource usage
docker stats

Health Checks

# Check PostgreSQL health
docker compose exec postgres pg_isready -U photoflow

# Check app is responding
curl http://localhost:3000
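
The default compose file only defines a health check for PostgreSQL. A sketch of an equivalent check for the app container (assumes wget exists inside the image; swap in curl or another probe if it does not):

```yaml
app:
  healthcheck:
    test: ["CMD-SHELL", "wget -qO- http://localhost:3000 >/dev/null 2>&1 || exit 1"]
    interval: 30s
    timeout: 5s
    retries: 3
```

With this in place, docker compose ps reports the app as healthy or unhealthy rather than just "Up".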

Disk Usage

# Container sizes
docker compose images

# Volume sizes
docker volume ls
docker system df -v

Network Configuration

Expose to Local Network

Allow other PCs to connect:
  1. Configure firewall:
# Linux (ufw)
sudo ufw allow 3000/tcp

# Linux (firewalld)
sudo firewall-cmd --permanent --add-port=3000/tcp
sudo firewall-cmd --reload
  2. Update Socket.io URL in .env:
VITE_SOCKET_URL=http://192.168.1.100:3000
  3. Rebuild:
docker compose up -d --build

Change Ports

If port 3000 is in use:
services:
  app:
    ports:
      - "8080:3000"  # Host port : Container port
Access at http://localhost:8080

Reverse Proxy Setup

For HTTPS termination or a custom domain, put nginx in front as a reverse proxy:
server {
    listen 80;
    server_name photoflow.local;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
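
For HTTPS, a TLS server block can sit alongside the one above. A sketch only: the domain and certificate paths are assumptions, and certificates could come from a tool such as certbot:

```nginx
server {
    listen 443 ssl;
    server_name photoflow.example.com;

    ssl_certificate     /etc/letsencrypt/live/photoflow.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/photoflow.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Remember to point VITE_SOCKET_URL at the https:// URL and rebuild the containers.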

Troubleshooting

Check logs for errors:
docker compose logs app
docker compose logs postgres
Common issues:
  • Port already in use
  • Insufficient disk space
  • Permission issues
  • Invalid environment variables
Verify:
  1. PostgreSQL container is healthy:
docker compose ps
  2. Database credentials match:
docker compose exec app env | grep DATABASE_URL
  3. App can reach the database (if ping is not in the image, try getent hosts postgres):
docker compose exec app ping postgres
If PhotoFlow is unreachable from other PCs:
  1. Check that the firewall allows port 3000
  2. Verify VITE_SOCKET_URL uses the server IP, not localhost
  3. Ensure the containers are bound to 0.0.0.0, not 127.0.0.1
  4. Test with: curl http://[server-ip]:3000
If you ran docker compose down -v, the volumes were deleted. Verify volumes exist:
docker volume ls | grep photoflow
If missing, restore from backup.
Clean up Docker resources:
# Remove unused images
docker image prune -a

# Remove unused volumes (careful!)
docker volume prune

# Full cleanup
docker system prune -a --volumes

Performance Tuning

PostgreSQL Configuration

Optimize database performance:
postgres:
  command:
    - "postgres"
    - "-c"
    - "max_connections=100"
    - "-c"
    - "shared_buffers=256MB"
    - "-c"
    - "effective_cache_size=1GB"
    - "-c"
    - "work_mem=16MB"

Application Tuning

Adjust Node.js settings:
app:
  environment:
    - NODE_OPTIONS=--max-old-space-size=2048

Security Best Practices

Strong Passwords

Use complex, unique passwords:
  • Set a strong password for the PostgreSQL user
  • Use different passwords per environment
  • Never keep the defaults in production

Network Isolation

Don’t expose PhotoFlow directly to the internet:
  • Use internal networks only
  • Add authentication if needed
  • Use VPN for remote access

Regular Updates

Keep software updated:
  • Update base images
  • Apply security patches
  • Monitor CVE databases

Backup Strategy

Protect your data:
  • Automated daily backups
  • Store backups off-server
  • Test restore procedures
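
The "automated daily backups" item can be implemented with cron. A sketch crontab entry (paths are assumptions; note that % must be escaped as \% inside crontab):

```cron
# Daily backup at 02:00; adjust paths to your installation
0 2 * * * cd /path/to/PhotoFlow && docker compose exec -T postgres pg_dump -U photoflow photoflow | gzip > /var/backups/photoflow/backup_$(date +\%Y\%m\%d).sql.gz
```

Pair this with the restore procedure above and test it periodically.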

Automated Startup

Ensure PhotoFlow starts on boot:

Systemd Service (Linux)

Create /etc/systemd/system/photoflow.service:
[Unit]
Description=PhotoFlow Docker Compose Service
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/path/to/PhotoFlow
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
Reload systemd, then enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable photoflow
sudo systemctl start photoflow

Docker Desktop (Windows/macOS)

  1. Open Docker Desktop settings
  2. Enable “Start Docker Desktop when you log in”
  3. Containers set to restart: unless-stopped will start automatically

Next Steps

Coolify Deployment

Easier deployment with Coolify platform

Production Checklist

Ensure your deployment is production-ready

Network Setup

Configure multi-PC network access

Database Setup

Learn about database management
