
Overview

Open Wearables uses Docker Compose to orchestrate multiple services including the FastAPI backend, PostgreSQL database, Redis cache, Celery workers, and React frontend. This guide covers both development and production deployments.

Architecture

The platform consists of seven containerized services:
| Service | Container | Port | Description |
| --- | --- | --- | --- |
| PostgreSQL | postgres__open-wearables | 5432 | Primary database |
| Redis | redis__open-wearables | 6379 | Cache and message broker |
| Backend API | backend__open-wearables | 8000 | FastAPI application |
| Celery Worker | celery-worker__open-wearables | - | Background task processor |
| Celery Beat | celery-beat__open-wearables | - | Periodic task scheduler |
| Flower | flower__open-wearables | 5555 | Celery monitoring UI |
| Frontend | frontend__open-wearables | 3000 | React application |

Quick Start

1. Clone the repository

git clone https://github.com/your-org/open-wearables.git
cd open-wearables
2. Configure environment variables

Copy the example environment file and customize it:
cp backend/config/.env.example backend/config/.env
See Environment Variables for configuration details.
3. Start all services

docker compose up -d
This will:
  • Build the backend and frontend images
  • Start all seven services
  • Apply database migrations automatically
  • Create the default admin account ([email protected] / your-secure-password)
4. Verify deployment

Access the services at the ports listed in the Architecture table above, then check service health:
docker compose ps
docker compose logs -f app

Service Configuration

Database (PostgreSQL)

The PostgreSQL service includes health checks to ensure availability before dependent services start:
docker-compose.yml
db:
  image: postgres:18
  environment:
    POSTGRES_DB: open-wearables
    POSTGRES_USER: open-wearables
    POSTGRES_PASSWORD: open-wearables
  ports:
    - "5432:5432"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U open-wearables -d open-wearables"]
    interval: 5s
    timeout: 5s
    retries: 5
  volumes:
    - postgres_data:/var/lib/postgresql
In production, always change the default database credentials and use strong passwords.
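One way to avoid hard-coded credentials is Compose variable interpolation. The sketch below is illustrative (variable names are assumptions, not project conventions); the `:?` form makes `docker compose up` fail fast if no password is provided:

```yaml
db:
  image: postgres:18
  environment:
    POSTGRES_DB: ${POSTGRES_DB:-open-wearables}
    POSTGRES_USER: ${POSTGRES_USER:-open-wearables}
    # Fails at startup if POSTGRES_PASSWORD is unset, instead of
    # silently falling back to a weak default.
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?set a strong password}
```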

Backend API

The main FastAPI application depends on both database and Redis services:
docker-compose.yml
app:
  build:
    context: ./backend
    dockerfile: Dockerfile
  command: scripts/start/app.sh
  env_file:
    - ./backend/config/.env
  environment:
    - DB_HOST=db
    - REDIS_HOST=redis
  ports:
    - "8000:8000"
  depends_on:
    db:
      condition: service_healthy
    redis:
      condition: service_started
  restart: on-failure
Key features:
  • Waits for database health check before starting
  • Loads environment variables from .env file
  • Auto-restarts on failure
  • Hot-reload support in development mode

Celery Worker

Processes background tasks for data syncing, webhooks, and scheduled jobs:
docker-compose.yml
celery-worker:
  image: open-wearables-platform:latest
  command: scripts/start/worker.sh
  env_file:
    - ./backend/config/.env
  environment:
    - DB_HOST=db
    - REDIS_HOST=redis
  depends_on:
    - redis
    - db
    - app
The Celery worker uses the same Docker image as the backend API to ensure consistency.

Celery Beat

Schedules periodic tasks like automatic data synchronization:
docker-compose.yml
celery-beat:
  image: open-wearables-platform:latest
  command: scripts/start/beat.sh
  env_file:
    - ./backend/config/.env
  environment:
    - DB_HOST=db
    - REDIS_HOST=redis
  depends_on:
    - redis
    - db
    - app
Default scheduled tasks:
  • Automatic user data sync every hour (configurable via SYNC_INTERVAL_SECONDS)
  • Sleep data processing every hour (configurable via SLEEP_SYNC_INTERVAL_SECONDS)
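As a rough sketch of how those environment variables could map onto a Celery beat schedule — the dict shape matches Celery's `beat_schedule` setting, but the dotted task paths (`app.tasks.sync_user_data`, `app.tasks.process_sleep_data`) are illustrative, not the project's actual module layout:

```python
import os

# Read the documented intervals, defaulting to hourly (3600 seconds).
SYNC_INTERVAL = int(os.getenv("SYNC_INTERVAL_SECONDS", "3600"))
SLEEP_SYNC_INTERVAL = int(os.getenv("SLEEP_SYNC_INTERVAL_SECONDS", "3600"))

# Shape matches Celery's `beat_schedule` setting; a plain number is
# interpreted as an interval in seconds.
beat_schedule = {
    "sync-user-data": {
        "task": "app.tasks.sync_user_data",      # hypothetical task path
        "schedule": SYNC_INTERVAL,
    },
    "process-sleep-data": {
        "task": "app.tasks.process_sleep_data",  # hypothetical task path
        "schedule": SLEEP_SYNC_INTERVAL,
    },
}
```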

Redis

Serves as both cache and message broker for Celery:
docker-compose.yml
redis:
  image: redis:8
  ports:
    - "6379:6379"
  volumes:
    - redis_data:/data
For production deployments, enable Redis authentication by setting REDIS_PASSWORD in your environment configuration.
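A minimal sketch of a password-protected Redis service, assuming `REDIS_PASSWORD` is set in your shell or `.env`; the health check mirrors the pattern used for PostgreSQL:

```yaml
redis:
  image: redis:8
  command: redis-server --requirepass ${REDIS_PASSWORD}
  healthcheck:
    # A PONG reply confirms Redis is up and the password is accepted
    test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
    interval: 5s
    timeout: 5s
    retries: 5
```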

Flower (Celery Monitoring)

Provides a web UI for monitoring Celery tasks:
docker-compose.yml
flower:
  image: open-wearables-platform:latest
  command: scripts/start/flower.sh
  env_file:
    - ./backend/config/.env
  environment:
    - DB_HOST=db
    - REDIS_HOST=redis
  ports:
    - "5555:5555"
  depends_on:
    - redis
    - db
    - app
Access Flower at http://localhost:5555 to view:
  • Active tasks and workers
  • Task history and statistics
  • Worker performance metrics
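Flower should never be exposed unauthenticated outside development. One option, assuming the image's start script can be overridden, is Flower's built-in basic auth — the Celery app path (`app.worker`) and the `FLOWER_USER`/`FLOWER_PASSWORD` variable names below are illustrative:

```yaml
flower:
  image: open-wearables-platform:latest
  # Overrides scripts/start/flower.sh; adjust -A to the project's Celery app
  command: celery -A app.worker flower --basic_auth=${FLOWER_USER}:${FLOWER_PASSWORD}
  ports:
    - "5555:5555"
```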

Frontend

React application served with Vite in development mode:
docker-compose.yml
frontend:
  build:
    context: ./frontend
    dockerfile: Dockerfile.dev
  ports:
    - "3000:3000"
  depends_on:
    - app
  restart: on-failure

Development Workflow

Hot Reload

The Docker Compose configuration includes watch mode for automatic reloading:
# Start with watch mode enabled
docker compose watch
Watch configuration:
  • Backend code changes (backend/app/) trigger sync without restart
  • Migration changes (backend/migrations/) trigger sync and restart
  • Environment file changes (.env) trigger sync and restart
  • Dependency changes (uv.lock) trigger full rebuild
  • Frontend code changes (frontend/src/) trigger sync without restart
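The watch rules above would look roughly like this in `docker-compose.yml` — the `target:` paths depend on the image's internal layout and are illustrative:

```yaml
services:
  app:
    develop:
      watch:
        - action: sync            # copy changes into the container, no restart
          path: ./backend/app
          target: /app/app
        - action: sync+restart    # copy changes, then restart the service
          path: ./backend/migrations
          target: /app/migrations
        - action: rebuild         # dependency changes require a new image
          path: ./backend/uv.lock
```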

Useful Commands

# All services
docker compose logs -f

# Specific service
docker compose logs -f app
docker compose logs -f celery-worker

# Last 100 lines
docker compose logs --tail=100 app
# Run migrations
docker compose exec app alembic upgrade head

# Create migration
docker compose exec app alembic revision --autogenerate -m "Add new table"

# Access Python shell
docker compose exec app python

# Run tests
docker compose exec app pytest
# Rebuild all images
docker compose build

# Rebuild specific service
docker compose build app

# Rebuild and restart
docker compose up -d --build
# Stop all services
docker compose down

# Remove volumes (deletes all data)
docker compose down -v

# Start fresh
docker compose up -d

Production Deployment

Prerequisites

1. Production environment file

Create a production .env file with secure credentials:
cp backend/config/.env.example backend/config/.env.production
Update critical settings:
  • ENVIRONMENT=production
  • Generate secure SECRET_KEY (see Environment Variables)
  • Set strong database password
  • Enable Redis authentication
  • Configure CORS origins
  • Set up email service (Resend API key)
  • Configure Sentry for error tracking
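For example, a SECRET_KEY and database password can be generated with openssl — any cryptographically secure random generator works equally well:

```shell
# 32 random bytes, hex-encoded: a 64-character SECRET_KEY
SECRET_KEY="$(openssl rand -hex 32)"

# 24 random bytes, base64-encoded: a strong database password
POSTGRES_PASSWORD="$(openssl rand -base64 24)"

echo "SECRET_KEY=${SECRET_KEY}"
echo "POSTGRES_PASSWORD=${POSTGRES_PASSWORD}"
```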
2. Production Docker Compose override

Create docker-compose.prod.yml:
docker-compose.prod.yml
services:
  app:
    env_file:
      - ./backend/config/.env.production
    restart: always
    
  celery-worker:
    env_file:
      - ./backend/config/.env.production
    restart: always
    deploy:
      replicas: 3  # Scale workers as needed
    
  celery-beat:
    env_file:
      - ./backend/config/.env.production
    restart: always
    
  flower:
    env_file:
      - ./backend/config/.env.production
    restart: always
    # Add authentication for Flower in production
    
  db:
    restart: always
    # Consider using managed PostgreSQL service
    
  redis:
    restart: always
    command: redis-server --requirepass ${REDIS_PASSWORD}
    # Consider using managed Redis service
    
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile  # Production Dockerfile
    restart: always
3. Deploy with production configuration

# Deploy with production overrides
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Production Best Practices

Security Checklist:
  • Never commit .env files to version control
  • Use strong, randomly generated passwords
  • Enable Redis authentication
  • Configure CORS to only allow your frontend domain
  • Set up HTTPS with a reverse proxy (nginx, Caddy, or Traefik)
  • Regularly update Docker images for security patches
  • Enable Sentry or another error tracking service
  • Implement rate limiting and request validation
Recommended production setup:
  1. Use managed services for PostgreSQL and Redis (AWS RDS, Azure Database, etc.)
  2. Reverse proxy with SSL/TLS termination (nginx, Caddy, Traefik)
  3. Container orchestration (Kubernetes, ECS, or Docker Swarm) for scaling
  4. Monitoring with Prometheus + Grafana or cloud-native solutions
  5. Log aggregation with ELK stack or cloud logging services
  6. Backup strategy for database and persistent volumes
  7. CI/CD pipeline for automated testing and deployment

Scaling

# Scale to 5 workers
docker compose up -d --scale celery-worker=5

# Or in production compose file
services:
  celery-worker:
    deploy:
      replicas: 5
When scaling the API behind a load balancer:
docker-compose.prod.yml
services:
  app:
    deploy:
      replicas: 3
    # Remove port mapping if using reverse proxy
    # ports:
    #   - "8000:8000"
Configure your load balancer (nginx, Traefik, etc.) to distribute traffic.
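A minimal nginx sketch for that setup (the domain is a placeholder; inside the Compose network, Docker's embedded DNS resolves the service name `app` across scaled replicas, though note that nginx caches upstream DNS at startup, so Traefik or Caddy may handle dynamic scaling more gracefully):

```nginx
upstream backend_api {
    # Resolved via Docker's embedded DNS on the compose network
    server app:8000;
}

server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        proxy_pass http://backend_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```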

Health Checks and Monitoring

The API provides health check endpoints:
# Check API health
curl http://localhost:8000/health

# Check database connectivity
curl http://localhost:8000/api/v1/health/db

# Check Redis connectivity
curl http://localhost:8000/api/v1/health/redis
Configure your orchestration platform to use these endpoints for health monitoring and automatic recovery.
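For example, Compose itself can poll the `/health` endpoint as a container health check — a sketch assuming `curl` is available inside the backend image:

```yaml
app:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
    interval: 30s
    timeout: 5s
    retries: 3
    start_period: 10s
```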

Troubleshooting

Check logs:
docker compose logs app
Common issues:
  • Database not ready: Wait for health check to pass
  • Port already in use: Change port mapping in docker-compose.yml
  • Environment variables missing: Check .env file exists and has required values
Verify database is running:
docker compose ps db
Check database logs:
docker compose logs db
Test connection manually:
docker compose exec db psql -U open-wearables -d open-wearables
Check worker status:
docker compose logs celery-worker
Verify Redis connection:
docker compose exec redis redis-cli ping
Check Flower for task queue: Open http://localhost:5555 and verify workers are active
Clean up Docker resources:
# Remove unused containers, networks, images
docker system prune -a

# Remove unused volumes (WARNING: deletes data)
docker volume prune

Next Steps

Environment Variables

Configure your deployment with environment variables

Database Setup

Advanced database configuration and migrations
