
Docker Deployment

Deploy Aya to production using Docker Compose with multi-stage builds for optimized image sizes.

Production Compose File

Aya includes compose.production.yml for production deployments:
compose.production.yml
name: aya-is-production

services:
  webclient:
    build:
      context: .
      dockerfile: apps/webclient/Dockerfile
      target: runner  # Production stage
      args:
        VITE_BACKEND_URI: https://api.aya.is
        VITE_HOST: https://aya.is
    environment:
      BACKEND_URI: http://services:8080  # Internal network
    restart: unless-stopped
    ports:
      - 3000:3000

  services:
    build:
      context: .
      dockerfile: apps/services/Dockerfile
      target: production-runner
    environment:
      ENV: production
      LOG__LEVEL: WARN
      CONN__targets__default__dsn: postgres://user:pass@postgres:5432/aya
    restart: unless-stopped
    ports:
      - 8080:8080
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-bookworm
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: aya
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres-data:

Dockerfile Stages

Frontend Dockerfile

apps/webclient/Dockerfile
# Stage 1: Base - Install Deno
FROM denoland/deno:2.1.4 AS base
WORKDIR /app

# Stage 2: Dependencies
FROM base AS deps
COPY apps/webclient/deno.json apps/webclient/package.json ./
RUN deno install

# Stage 3: Builder
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY apps/webclient ./

ARG VITE_BACKEND_URI
ARG VITE_HOST
ENV VITE_BACKEND_URI=${VITE_BACKEND_URI}
ENV VITE_HOST=${VITE_HOST}

RUN deno task build

# Stage 4: Runner (Production)
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production

COPY --from=builder /app/.output /app/.output

EXPOSE 3000
CMD ["deno", "run", "--allow-all", ".output/server/index.mjs"]

Backend Dockerfile

apps/services/Dockerfile
# Stage 1: Base
FROM golang:1.25-bookworm AS base
WORKDIR /srv

# Stage 2: Dependencies
FROM base AS deps
COPY apps/services/go.mod apps/services/go.sum ./
RUN go mod download

# Stage 3: Builder
FROM base AS builder
COPY --from=deps /go/pkg /go/pkg
COPY apps/services ./

RUN CGO_ENABLED=0 GOOS=linux go build \
  -ldflags="-s -w" \
  -o /app/server \
  ./cmd/serve

# Stage 4: Production Runner (minimal)
FROM gcr.io/distroless/static-debian12:nonroot AS production-runner
WORKDIR /app

COPY --from=builder /app/server /app/server
COPY --from=builder /srv/etc /app/etc
COPY --from=builder /srv/config.json /app/config.json

EXPOSE 8080
CMD ["/app/server"]
Benefits:
  • Multi-stage builds reduce final image size
  • Frontend: ~200 MB (from ~1 GB with build dependencies)
  • Backend: ~30 MB (distroless base)
  • Build cache layers speed up rebuilds

Deployment Steps

1. Set environment variables

Create .env file for production secrets:
.env
# PostgreSQL
POSTGRES_PASSWORD=your-strong-password-here

# Backend (will be passed to services container)
AUTH__JWT_SECRET=your-jwt-secret-here
S3__ACCESS_KEY_ID=your-s3-key
S3__SECRET_ACCESS_KEY=your-s3-secret
RESEND__API_KEY=your-resend-api-key
Never commit .env to git. Use secrets management in production.
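Strong random values can be generated on the host before writing the file. A quick sketch, assuming openssl is available (any CSPRNG-backed tool works); the variable names match the .env example above:

```shell
# Generate random secrets and write a locked-down .env file.
POSTGRES_PASSWORD=$(openssl rand -base64 32)
AUTH__JWT_SECRET=$(openssl rand -base64 48)

printf 'POSTGRES_PASSWORD=%s\nAUTH__JWT_SECRET=%s\n' \
  "$POSTGRES_PASSWORD" "$AUTH__JWT_SECRET" > .env

chmod 600 .env  # owner-only read/write
```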
2. Build images

docker compose -f compose.production.yml build
This builds optimized production images for webclient and services.
3. Run migrations

# Start only PostgreSQL
docker compose -f compose.production.yml up -d postgres

# Wait for healthy state
docker compose -f compose.production.yml ps postgres

# Run migrations
docker compose -f compose.production.yml run --rm services \
  /app/server migrate default up
4. Start all services

docker compose -f compose.production.yml up -d
Services will start in order:
  1. PostgreSQL (waits for health check)
  2. Backend services (depends on postgres)
  3. Frontend webclient
5. Verify deployment

# Check all containers are running
docker compose -f compose.production.yml ps

# Check logs
docker compose -f compose.production.yml logs -f

# Test endpoints
curl http://localhost:8080/health
curl http://localhost:3000
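For scripted deploys, the two curl checks can be wrapped in a small retry loop so the script waits for the services to come up. A sketch; wait_for is a hypothetical helper, not part of Aya:

```shell
# Poll a URL until it responds successfully or attempts run out.
wait_for() {
  url=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage after `up -d`:
#   wait_for http://localhost:8080/health && echo "backend healthy"
#   wait_for http://localhost:3000 && echo "frontend healthy"
```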

Reverse Proxy Setup

In production, use a reverse proxy (nginx/Caddy/Traefik) for:
  • HTTPS/TLS termination
  • Rate limiting
  • Static file caching
  • Load balancing

Nginx Example

/etc/nginx/sites-available/aya.is
upstream backend {
    server localhost:8080;
}

upstream frontend {
    server localhost:3000;
}

server {
    listen 80;
    server_name aya.is www.aya.is;
    return 301 https://aya.is$request_uri;
}

server {
    listen 443 ssl http2;
    server_name aya.is;

    ssl_certificate /etc/letsencrypt/live/aya.is/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/aya.is/privkey.pem;

    # API requests
    location /api/ {
        proxy_pass http://backend/;  # trailing slash strips the /api prefix
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Frontend
    location / {
        proxy_pass http://frontend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";  # WebSocket support
    }
}

Caddy Example (Simpler)

Caddyfile
aya.is {
    reverse_proxy /api/* localhost:8080
    reverse_proxy localhost:3000
}

api.aya.is {
    reverse_proxy localhost:8080
}
Caddy handles HTTPS automatically with Let’s Encrypt.

Health Checks

Implement health check endpoints:
Backend health check
import (
    "time"

    "github.com/gin-gonic/gin"
)

func HealthCheck(c *gin.Context) {
    c.JSON(200, gin.H{
        "status":    "ok",
        "timestamp": time.Now().Unix(),
    })
}

// In router setup:
router.GET("/health", HealthCheck)
Use it in Docker Compose. Note that the distroless production image contains no shell or curl, so this exact curl-based test only works in images that include curl; for the distroless backend image, build the probe into the server binary instead:
services:
  services:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s

Monitoring

Logging

Configure structured JSON logging:
services:
  services:
    environment:
      LOG__LEVEL: INFO
      LOG__PRETTY: false  # JSON output for log aggregation
View logs:
# All logs
docker compose -f compose.production.yml logs -f

# Specific service
docker compose -f compose.production.yml logs -f services

# With timestamps
docker compose -f compose.production.yml logs -t -f
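With LOG__PRETTY: false each log line is a JSON object, which pairs well with jq for ad-hoc filtering. A sketch, assuming jq is installed; the field names (level, msg, duration_ms) are illustrative and depend on Aya's actual log schema:

```shell
# Pull the message out of WARN-level entries.
echo '{"level":"WARN","msg":"slow query","duration_ms":412}' |
  jq -r 'select(.level == "WARN") | .msg'

# Real logs can be piped the same way, e.g.:
#   docker compose -f compose.production.yml logs --no-log-prefix services | jq -r '...'
```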

Resource Limits

Set CPU and memory limits:
services:
  services:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M

  webclient:
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G

Backup Strategy

PostgreSQL Backups

# Backup
docker compose -f compose.production.yml exec postgres pg_dump \
  -U postgres -d aya -F c -f /tmp/backup.dump

docker compose -f compose.production.yml cp \
  postgres:/tmp/backup.dump ./backups/aya-$(date +%Y%m%d).dump

# Restore
docker compose -f compose.production.yml cp \
  ./backups/aya-20240315.dump postgres:/tmp/restore.dump

docker compose -f compose.production.yml exec postgres pg_restore \
  -U postgres -d aya /tmp/restore.dump
Automate with cron:
#!/bin/bash
# /etc/cron.daily/backup-aya

BACKUP_DIR="/backups/aya"
DATE=$(date +%Y%m%d-%H%M%S)

docker compose -f /app/aya.is/compose.production.yml exec -T postgres \
  pg_dump -U postgres -d aya -F c > "$BACKUP_DIR/aya-$DATE.dump"

# Keep only last 30 days
find "$BACKUP_DIR" -name "aya-*.dump" -mtime +30 -delete
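The retention line relies on find's -mtime filter, whose behavior can be checked in isolation. A sketch using a temp directory and GNU touch -d to backdate a file; the filenames are illustrative:

```shell
BACKUP_DIR=$(mktemp -d)

# One stale backup (40 days old) and one fresh backup.
touch -d '40 days ago' "$BACKUP_DIR/aya-20240101-000000.dump"
touch "$BACKUP_DIR/aya-20240315-000000.dump"

# Same retention rule as the cron script: delete dumps older than 30 days.
find "$BACKUP_DIR" -name "aya-*.dump" -mtime +30 -delete

ls "$BACKUP_DIR"  # only the fresh dump remains
```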

Scaling

Horizontal Scaling

Run multiple frontend/backend instances:
services:
  webclient:
    deploy:
      replicas: 3  # Run 3 instances

  services:
    deploy:
      replicas: 2  # Run 2 API instances
Use nginx/HAProxy for load balancing. Note that replicas conflict with fixed host-port mappings (three webclient instances cannot all bind port 3000), so remove the ports: entries and let the load balancer reach containers over the Docker network.

Database Optimization

-- Connection pooling config
ALTER SYSTEM SET max_connections = 200;         -- requires a server restart
ALTER SYSTEM SET shared_buffers = '2GB';        -- requires a server restart
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET work_mem = '16MB';

-- Reload picks up effective_cache_size and work_mem;
-- max_connections and shared_buffers only take effect after a restart
SELECT pg_reload_conf();

Security Best Practices

Don’t put secrets in compose.yml. Use Docker secrets or external tools:
services:
  services:
    secrets:
      - jwt_secret
      - db_password

secrets:
  jwt_secret:
    external: true
  db_password:
    external: true
Create secrets:
echo "your-jwt-secret" | docker secret create jwt_secret -
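Docker mounts each secret as a read-only file at /run/secrets/<name> inside the container, so the application reads it from disk rather than from the environment. A sketch of that read, with a temp directory standing in for /run/secrets:

```shell
# In a real container this would be /run/secrets.
SECRETS_DIR=$(mktemp -d)
printf 'example-jwt-secret' > "$SECRETS_DIR/jwt_secret"

# Read the secret the way an entrypoint script might.
JWT_SECRET=$(cat "$SECRETS_DIR/jwt_secret")
echo "loaded jwt_secret (${#JWT_SECRET} bytes)"
```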
Use distroless/minimal base images:
FROM gcr.io/distroless/static-debian12:nonroot
Or specify user:
USER 1000:1000
Only expose necessary ports:
services:
  postgres:
    ports: []  # No external exposure
    # Only accessible via internal network
Keep base images patched:
# Pull latest base images
docker compose -f compose.production.yml pull

# Rebuild with latest patches
docker compose -f compose.production.yml build --pull

Troubleshooting

Container fails to start

Check logs:
docker compose -f compose.production.yml logs services
Common causes:
  • Missing environment variables
  • Database connection failed
  • Port already in use
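For the port-in-use case, check what is already listening before starting the stack; ss ships with iproute2 on most Linux hosts:

```shell
# List listeners on the ports the stack publishes (3000, 8080).
ss -tln | grep -E ':(3000|8080)[^0-9]' || echo "ports 3000 and 8080 are free"
```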
Out-of-memory errors

Increase container memory:
services:
  services:
    deploy:
      resources:
        limits:
          memory: 4G  # Increase from 2G
Database connection errors

Verify PostgreSQL is healthy:
docker compose -f compose.production.yml ps postgres
# Should show "Up (healthy)"

# Test connection
docker compose -f compose.production.yml exec postgres \
  psql -U postgres -d aya -c "SELECT 1;"

Next Steps

Nix Deployment

Alternative deployment with Nix for reproducibility

Environment Variables

Complete reference for all configuration options

Database Guide

Learn about migrations and backups

Frontend Development

Understand build artifacts and SSR
