This guide covers deploying Hazel Chat to production, including Docker configuration, service orchestration, and environment setup.

Architecture Overview

Hazel Chat consists of multiple services that work together:

Core Services

Service          Port         Description
Web              3000         React frontend (Vite)
Backend          3003         Effect-TS API server
Cluster          3020         Distributed workflow service
Electric Proxy   8787         Cloudflare Worker for real-time sync
PostgreSQL       5432         Primary database
Electric         3333         Electric SQL sync engine
Redis            6380         Cache and rate limiting
MinIO            9000         S3-compatible object storage
Caddy            5133, 3004   Reverse proxy for SSE/WebSocket

Docker Setup

Docker Compose Configuration

The project includes a complete Docker Compose setup for development and production:
name: hazel
services:
  # PostgreSQL Database
  postgres:
    image: postgres:17-alpine
    shm_size: 1g
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_DB: app
      POSTGRES_PASSWORD: password
    command: |
      postgres 
      -c wal_level=logical
      -c max_wal_senders=10 
      -c max_replication_slots=5 
      -c hot_standby=on 
      -c hot_standby_feedback=on
      -c max_connections=200
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./docker/postgres/init:/docker-entrypoint-initdb.d
    healthcheck:
      test: "pg_isready -U user --dbname=app"
      interval: 10s
      timeout: 5s
      retries: 5

  # Electric - Real-time sync for Postgres
  electric:
    image: electricsql/electric:canary
    ports:
      - "3333:3000"
    environment:
      DATABASE_URL: "postgresql://user:password@postgres:5432/app?sslmode=disable"
      ELECTRIC_INSECURE: "true"  # Only for development
      ELECTRIC_FEATURE_FLAGS: "allow_subqueries,tagged_subqueries"
    depends_on:
      postgres:
        condition: service_healthy
    restart: always

  # Redis for caching
  cache_redis:
    image: redis:7
    ports:
      - "6380:6379"
    command: ["redis-server", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
    volumes:
      - cache_redis_data:/data
    restart: always

  # MinIO - S3-compatible object storage
  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"  # S3 API
      - "9001:9001"  # Console UI
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always

  # Caddy - Reverse proxy
  caddy:
    image: caddy:2-alpine
    ports:
      - "5133:5133"  # SSE proxy
      - "3004:3004"  # WebSocket proxy
    volumes:
      - ./Caddyfile.docker:/etc/caddy/Caddyfile
      - ./certs:/etc/caddy/certs:ro
      - caddy_data:/data
      - caddy_config:/config
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always

volumes:
  postgres_data:
  cache_redis_data:
  minio_data:
  caddy_data:
  caddy_config:

Starting Services

1. Start infrastructure services

   Start PostgreSQL, Redis, Electric, and MinIO:

   docker compose up -d

2. Wait for health checks

   Wait for all services to become healthy:

   docker compose ps

   All services should show a healthy or running status.

3. Initialize the database

   Run migrations to set up the database schema:

   cd packages/db
   bun run db:push

4. Start application services

   Start the backend, cluster, and web services:

   bun run dev

PostgreSQL Configuration

Production Database Settings

For production, configure PostgreSQL with:
# WAL configuration for Electric SQL replication
wal_level=logical
max_wal_senders=10
max_replication_slots=5

# Performance tuning
shared_buffers=256MB
effective_cache_size=1GB
max_connections=200

# Monitoring
log_statement=all  # Only for debugging, disable in production
Electric SQL requires logical replication. Ensure wal_level=logical is set before running migrations.

Connection Pooling

For production deployments, use connection pooling:
# Use PgBouncer or similar
DATABASE_URL=postgresql://user:password@pgbouncer:6432/app?sslmode=verify-full
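If a pooler is introduced in front of an existing database, only the host and port of the connection string change. A minimal sketch of that rewrite using Node's standard URL API (the `pgbouncer` hostname and port 6432 mirror the example above; the `viaPooler` helper is illustrative, not part of the codebase):

```typescript
// Rewrite a Postgres connection URL to route through a connection pooler.
// Credentials, database name, and query parameters (e.g. sslmode) are preserved;
// only the host and port are swapped.
function viaPooler(databaseUrl: string, poolerHost: string, poolerPort = 6432): string {
  const url = new URL(databaseUrl)
  url.hostname = poolerHost
  url.port = String(poolerPort)
  return url.toString()
}

const direct = "postgresql://user:password@db.example.com:5432/app?sslmode=verify-full"
const pooled = viaPooler(direct, "pgbouncer")
```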

Environment Configuration

Production Environment Variables

Create production .env files for each service:
# Database
DATABASE_URL=postgresql://user:[email protected]:5432/hazel?sslmode=verify-full

# Cluster
CLUSTER_URL=https://cluster.example.com

# WorkOS Authentication
WORKOS_API_KEY=sk_live_your_api_key
WORKOS_CLIENT_ID=client_your_client_id
WORKOS_REDIRECT_URI=https://api.example.com/auth/callback
WORKOS_COOKIE_DOMAIN=example.com
WORKOS_WEBHOOK_SECRET=your_webhook_secret

# S3 Storage (Cloudflare R2)
S3_BUCKET=hazel-production
S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=your_access_key
S3_SECRET_ACCESS_KEY=your_secret_key

# Electric SQL
ELECTRIC_URL=https://electric.example.com

# Redis
REDIS_URL=redis://redis.example.com:6379

# URLs
API_BASE_URL=https://api.example.com
FRONTEND_URL=https://app.example.com
See Environment Variables for a complete reference of all configuration options.
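A startup check that fails fast when required variables are missing is cheaper to debug than a crash at first use. A sketch in TypeScript (the variable list mirrors the example above and is deliberately not exhaustive; the `loadConfig` helper is illustrative):

```typescript
// Read required configuration from process.env, throwing at startup
// if anything is missing rather than failing later mid-request.
const REQUIRED = [
  "DATABASE_URL",
  "WORKOS_API_KEY",
  "WORKOS_CLIENT_ID",
  "S3_BUCKET",
  "REDIS_URL",
] as const

type Config = Record<(typeof REQUIRED)[number], string>

function loadConfig(env: Record<string, string | undefined> = process.env): Config {
  const missing = REQUIRED.filter((key) => !env[key])
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`)
  }
  return Object.fromEntries(REQUIRED.map((key) => [key, env[key]!])) as Config
}
```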

Service Orchestration

Build Commands

Build all services for production:
# Build all apps and packages
bun run build

# Build specific app
cd apps/backend
bun run build

Start Commands

Start each service from its app directory:
cd apps/backend
bun run start
# Runs on port 3003

Process Management

Use PM2 or systemd to manage application processes:
{
  "apps": [
    {
      "name": "hazel-backend",
      "cwd": "./apps/backend",
      "script": "bun",
      "args": "run start",
      "env": {
        "NODE_ENV": "production"
      }
    },
    {
      "name": "hazel-cluster",
      "cwd": "./apps/cluster",
      "script": "bun",
      "args": "run start",
      "env": {
        "NODE_ENV": "production"
      }
    }
  ]
}

Cloud Deployment Options

Vercel (Frontend)

Deploy the web app to Vercel:
cd apps/web
vercel deploy --prod
Configure environment variables in the Vercel dashboard.

Railway (Backend + Database)

Deploy backend services to Railway:
1. Create a Railway project

   railway init

2. Add PostgreSQL

   Add a PostgreSQL database from the Railway dashboard.

3. Configure services

   Create services for the backend and cluster in railway.toml.

4. Deploy

   railway up

Cloudflare (Electric Proxy + R2)

Deploy the Electric proxy to Cloudflare Workers:
cd apps/electric-proxy
wrangler deploy

Fly.io (All Services)

Deploy all services to Fly.io with flyctl:
# Initialize
fly launch

# Deploy
fly deploy

# Scale
fly scale count 2

Monitoring and Observability

OpenTelemetry

The backend includes OpenTelemetry instrumentation:
apps/backend/.env
OTLP_ENDPOINT=http://localhost:4318/v1/traces
Export traces to:
  • Honeycomb
  • Datadog
  • Grafana Tempo
  • Jaeger

Health Checks

Each service exposes health endpoints:
# Backend health
curl http://localhost:3003/health

# Cluster health
curl http://localhost:3020/health

# PostgreSQL
pg_isready -U user --dbname=app
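Deploy scripts often need to block until these endpoints report healthy before starting dependent services. A minimal retry loop (the generic `waitFor` helper is illustrative, not part of the codebase):

```typescript
// Retry a readiness check until it passes or the attempt budget is spent.
// `check` would typically wrap a fetch against a /health endpoint.
async function waitFor(
  check: () => Promise<boolean>,
  retries = 5,
  delayMs = 1000,
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      if (await check()) return true
    } catch {
      // Treat network errors as "not ready yet" and keep retrying.
    }
    if (attempt < retries) await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
  return false
}
```

Usage against the backend from the example above would look like `await waitFor(async () => (await fetch("http://localhost:3003/health")).ok)`.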

Logging

Logs are written to stdout/stderr. Use a log aggregation service:
# View logs with Docker
docker compose logs -f backend

# View logs with PM2
pm2 logs hazel-backend

Security Checklist

1. Enable SSL/TLS

   Use HTTPS for all services in production:

   DATABASE_URL=postgresql://...?sslmode=verify-full

2. Rotate secrets

   Never commit secrets to git. Use environment variables or secret management for:
   • WorkOS API keys
   • Database passwords
   • S3 access keys

3. Configure CORS

   Restrict CORS to your frontend domain:

   const ALLOWED_ORIGINS = ['https://app.example.com']

4. Enable rate limiting

   Redis-based rate limiting is included by default.

5. Set secure cookies

   WORKOS_COOKIE_DOMAIN=example.com

6. Disable Electric insecure mode

   # Remove this in production
   # ELECTRIC_INSECURE="true"

Never use ELECTRIC_INSECURE=true in production. Configure proper authentication for Electric SQL. See the Electric SQL Security Guide.
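The CORS restriction above reduces to a strict, exact-match origin comparison. A sketch of the check (the `isAllowedOrigin` helper name is illustrative, not from the codebase):

```typescript
const ALLOWED_ORIGINS = ["https://app.example.com"]

// Exact-match check against the allowlist. Never reflect the request's
// Origin header back unchecked, and never fall back to "*" when
// credentials (cookies) are involved.
function isAllowedOrigin(origin: string | undefined): boolean {
  return origin !== undefined && ALLOWED_ORIGINS.includes(origin)
}
```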

Backup and Recovery

Database Backups

Schedule regular PostgreSQL backups:
# Automated backup script
pg_dump -U user -d hazel -f backup-$(date +%Y%m%d).sql

# Compress and upload to S3
gzip backup-$(date +%Y%m%d).sql
aws s3 cp backup-$(date +%Y%m%d).sql.gz s3://backups/
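If backups are triggered from an application-level job rather than cron, the same date-stamped filename can be produced in TypeScript (the `backupName` helper is illustrative, using UTC to avoid timezone drift between hosts):

```typescript
// Produce the backup-YYYYMMDD.sql name used by the shell script above.
function backupName(date: Date = new Date()): string {
  const y = date.getUTCFullYear()
  const m = String(date.getUTCMonth() + 1).padStart(2, "0")
  const d = String(date.getUTCDate()).padStart(2, "0")
  return `backup-${y}${m}${d}.sql`
}
```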

Restore from Backup

# Download backup
aws s3 cp s3://backups/backup-20260304.sql.gz .

# Restore
gunzip backup-20260304.sql.gz
psql -U user -d hazel < backup-20260304.sql

Performance Optimization

Database Indexing

Ensure proper indexes are created during migrations:
CREATE INDEX idx_messages_channel_id ON messages(channel_id);
CREATE INDEX idx_channel_members_user_id ON channel_members(user_id);

Caching Strategy

Redis is used for:
  • Rate limiting
  • Session caching
  • Electric proxy caching
# Configure Redis memory policy
redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
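The rate-limiting use above maps onto Redis INCR with a per-window EXPIRE. The core fixed-window logic, sketched here with an in-memory Map standing in for Redis (in production the counter must live in Redis so all backend instances share one budget):

```typescript
// Fixed-window rate limiter: at most `limit` hits per `windowMs` per key.
// With Redis this would be INCR on a window-scoped key plus EXPIRE.
const windows = new Map<string, { count: number; resetAt: number }>()

function allowRequest(key: string, limit: number, windowMs: number, now = Date.now()): boolean {
  const current = windows.get(key)
  if (!current || now >= current.resetAt) {
    // First hit in this window (or the previous window expired): reset.
    windows.set(key, { count: 1, resetAt: now + windowMs })
    return true
  }
  current.count++
  return current.count <= limit
}
```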

CDN for Static Assets

Use Cloudflare or a CDN for:
  • Frontend static files
  • R2 public assets
  • Avatar images

Troubleshooting

Common Issues

Database connection failures

Check that PostgreSQL is running and accessible:
pg_isready -h postgres.example.com -U user

Verify the connection string format:
postgresql://user:password@host:5432/database?sslmode=verify-full

Electric sync not working

Verify Electric is connected to PostgreSQL:
curl http://localhost:3333/health

Check the WAL level:
SHOW wal_level;  -- Should be 'logical'

Authentication failures

Verify that the redirect URIs match in the WorkOS dashboard:
WORKOS_REDIRECT_URI=https://api.example.com/auth/callback

Check that the API key is correct:
echo $WORKOS_API_KEY  # Should start with sk_live_

Next Steps

Environment Variables

Complete environment variable reference

Database Setup

Database schema and migrations

WorkOS Configuration

Authentication setup

Architecture

System architecture deep dive
