
Overview

Docker deployment provides a containerized, portable solution for running Kuest Prediction Market. This guide covers both local development and production configurations using Docker Compose.
Docker deployment is ideal for self-hosting, VPS deployments, and environments where you need full control over the infrastructure.

Prerequisites

  1. Docker Engine and Docker Compose plugin installed
  2. Access to the Kuest repository
  3. Configured environment variables (see Environment Setup)
  4. Storage backend choice: Supabase or Postgres + S3

Installation

Install Docker

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Dockerfile

The production Dockerfile uses a multi-stage build for optimal image size:
FROM node:24-bookworm-slim AS build
WORKDIR /app

ENV NEXT_TELEMETRY_DISABLED=1

RUN apt-get update \
  && apt-get install -y --no-install-recommends python3 make g++ \
  && rm -rf /var/lib/apt/lists/*

COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM node:24-bookworm-slim AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
ENV HOSTNAME=0.0.0.0
ENV PORT=3000

RUN groupadd --gid 1001 nodejs
RUN useradd --uid 1001 --gid nodejs --create-home nextjs

COPY --from=build --chown=nextjs:nodejs /app/public ./public
COPY --from=build --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=build --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=build --chown=nextjs:nodejs /app/node_modules/postgres ./node_modules/postgres
COPY --from=build --chown=nextjs:nodejs /app/package.json ./package.json
COPY --from=build --chown=nextjs:nodejs /app/scripts/migrate.js ./scripts/migrate.js
COPY --from=build --chown=nextjs:nodejs /app/src/lib/site-url.js ./src/lib/site-url.js
COPY --from=build --chown=nextjs:nodejs /app/src/lib/db/migrations ./src/lib/db/migrations

USER nextjs

EXPOSE 3000

CMD ["node", "server.js"]

Environment Setup

Create a .env file in your repository root:
# ============================================================
# REQUIRED
# ============================================================

KUEST_ADDRESS=""
KUEST_API_KEY=""
KUEST_API_SECRET=""
KUEST_PASSPHRASE=""
ADMIN_WALLETS=""
REOWN_APPKIT_PROJECT_ID=""
BETTER_AUTH_SECRET=""
CRON_SECRET=""

# Site URL (required for production)
SITE_URL="https://markets.example.com"

# ============================================================
# STORAGE MODE: Choose one
# ============================================================

# Option A: Supabase mode
POSTGRES_URL="postgresql://..."
SUPABASE_URL="https://xxx.supabase.co"
SUPABASE_SERVICE_ROLE_KEY="eyJhbGc..."

# Option B: Postgres + S3 mode
# POSTGRES_URL="postgresql://..."
# S3_BUCKET="kuest-assets"
# S3_ACCESS_KEY_ID=""
# S3_SECRET_ACCESS_KEY=""
# S3_ENDPOINT="https://s3.amazonaws.com"
# S3_REGION="us-east-1"
# S3_PUBLIC_URL=""
# S3_FORCE_PATH_STYLE="false"
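BETTER_AUTH_SECRET and CRON_SECRET should be long random values. One way to generate them, assuming openssl is installed (the exact lengths are a suggestion, not a requirement from Kuest):

```shell
# Generate strong random values for BETTER_AUTH_SECRET and CRON_SECRET.
BETTER_AUTH_SECRET="$(openssl rand -base64 32)"
CRON_SECRET="$(openssl rand -hex 32)"

# Print them in .env-ready form.
echo "BETTER_AUTH_SECRET=\"$BETTER_AUTH_SECRET\""
echo "CRON_SECRET=\"$CRON_SECRET\""
```

Paste the output into your .env file; never commit these values to the repository.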

Local Development

Use the local compose configuration for development:
1. Start Local Services

With External Database:
docker compose --env-file .env -f infra/docker/docker-compose.yml up --build

With Local PostgreSQL:
docker compose --env-file .env -f infra/docker/docker-compose.yml --profile local-postgres up --build

2. Configure Local PostgreSQL (if using the local-postgres profile)

Add to your .env:

POSTGRES_DB=kuest
POSTGRES_USER=kuest
POSTGRES_PASSWORD=replace-with-strong-password
POSTGRES_URL=postgresql://kuest:replace-with-strong-password@postgres:5432/kuest?sslmode=disable

3. Run Database Migrations

After the containers start, apply migrations:

docker compose --env-file .env -f infra/docker/docker-compose.yml exec web npm run db:push

4. Access the Application

Open http://localhost:3000 in your browser.

Production Deployment

The production compose file includes Caddy for automatic HTTPS:
name: kuest-production

services:
  web:
    container_name: kuest-web
    build:
      context: ../..
      dockerfile: infra/docker/Dockerfile
    image: kuest-web:production-local
    env_file:
      - ../../.env
    environment:
      NODE_ENV: production
      SITE_URL: ${SITE_URL:?SITE_URL is required}
    restart: unless-stopped
    healthcheck:
      test: [CMD, node, -e, "fetch('http://127.0.0.1:3000').then((r)=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"]
      interval: 30s
      timeout: 5s
      retries: 6
      start_period: 40s
    stop_grace_period: 30s
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: '3'
    security_opt:
      - no-new-privileges:true

  caddy:
    container_name: kuest-caddy
    image: caddy:2.10-alpine
    depends_on:
      web:
        condition: service_healthy
    command:
      - caddy
      - reverse-proxy
      - --from
      - ${CADDY_DOMAIN:-markets.example.com}
      - --to
      - web:3000
    ports:
      - '80:80'
      - '443:443'
    restart: unless-stopped
    volumes:
      - kuest-caddy-data:/data
      - kuest-caddy-config:/config
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: '3'
    security_opt:
      - no-new-privileges:true

  postgres:
    profiles: [local-postgres]
    container_name: kuest-postgres
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-kuest}
      POSTGRES_USER: ${POSTGRES_USER:-kuest}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD-}
    volumes:
      - kuest-postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: [CMD-SHELL, 'pg_isready -U ${POSTGRES_USER:-kuest} -d ${POSTGRES_DB:-kuest}']
      interval: 10s
      timeout: 5s
      retries: 6

volumes:
  kuest-postgres-data:
  kuest-caddy-data:
  kuest-caddy-config:
1. Configure Domain

Add to your .env:

CADDY_DOMAIN=markets.example.com
SITE_URL=https://markets.example.com

2. Start Production Stack

Supabase Mode:
docker compose --env-file .env -f infra/docker/docker-compose.production.yml up -d --build

Postgres + S3 with Local PostgreSQL:
docker compose --env-file .env -f infra/docker/docker-compose.production.yml --profile local-postgres up -d --build

3. Run Database Migrations

docker compose --env-file .env -f infra/docker/docker-compose.production.yml exec web npm run db:push

If using the local-postgres profile, wait until the postgres container is healthy before running migrations.

4. Verify Deployment
# Check container status
docker compose -f infra/docker/docker-compose.production.yml ps

# View logs
docker compose -f infra/docker/docker-compose.production.yml logs -f web
docker compose -f infra/docker/docker-compose.production.yml logs -f caddy
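The "wait until the postgres container is healthy" step can be scripted with a small retry helper. This is a sketch in POSIX shell; the docker inspect usage assumes the kuest-postgres container name from the compose file:

```shell
# Generic retry helper: run a command until it succeeds, or give up
# after the given number of attempts (one-second pause between tries).
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# Usage against the compose stack (requires a running Docker daemon):
# wait_for 30 sh -c \
#   '[ "$(docker inspect --format "{{.State.Health.Status}}" kuest-postgres)" = healthy ]'
```

Run the migration command only after the helper returns successfully.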

Operations

Update Application

git pull
docker compose --env-file .env -f infra/docker/docker-compose.production.yml up -d --build
docker compose --env-file .env -f infra/docker/docker-compose.production.yml exec web npm run db:push

View Logs

# All services
docker compose -f infra/docker/docker-compose.production.yml logs -f

# Specific service
docker compose -f infra/docker/docker-compose.production.yml logs -f web
docker compose -f infra/docker/docker-compose.production.yml logs -f caddy
docker compose -f infra/docker/docker-compose.production.yml logs -f postgres

Restart Services

# Restart all
docker compose -f infra/docker/docker-compose.production.yml restart

# Restart specific service
docker compose -f infra/docker/docker-compose.production.yml restart web

Stop Services

# Stop (preserves data)
docker compose -f infra/docker/docker-compose.production.yml stop

# Stop and remove containers (preserves volumes)
docker compose -f infra/docker/docker-compose.production.yml down

# Stop and remove everything including volumes
docker compose -f infra/docker/docker-compose.production.yml down -v

Storage Modes

Supabase Mode

Required environment variables:
POSTGRES_URL=postgresql://...
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJhbGc...
Supabase handles:
  • PostgreSQL database
  • Object storage (kuest-assets bucket)
  • Built-in cron scheduler

Postgres + S3 Mode

Required environment variables:
POSTGRES_URL=postgresql://...
S3_BUCKET=kuest-assets
S3_ACCESS_KEY_ID=xxx
S3_SECRET_ACCESS_KEY=xxx
Optional S3 settings:
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
S3_PUBLIC_URL=https://cdn.example.com
S3_FORCE_PATH_STYLE=false
When using Postgres + S3 mode (not Supabase), you must implement the scheduler contract for /api/sync/* endpoints. See the scheduler documentation for details.
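One common way to satisfy that contract is a plain crontab on the host. This is a hypothetical sketch: the /api/sync/markets path and the Authorization header shape are assumptions, so check the scheduler documentation for the real routes and auth requirements:

```cron
# Define the secret at the top of the crontab (cron does not read your shell env).
CRON_SECRET=replace-with-your-cron-secret
# Call a sync endpoint every 5 minutes (endpoint path is hypothetical).
*/5 * * * * curl -fsS -H "Authorization: Bearer $CRON_SECRET" https://markets.example.com/api/sync/markets
```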

Troubleshooting

Container Won’t Start

# Check logs
docker compose -f infra/docker/docker-compose.production.yml logs web

# Verify environment variables
docker compose -f infra/docker/docker-compose.production.yml config

# Check container status
docker compose -f infra/docker/docker-compose.production.yml ps -a

Database Connection Failed

  1. Verify POSTGRES_URL is correct
  2. Check postgres container is healthy: docker ps
  3. Test connection: docker exec kuest-postgres pg_isready
  4. Review postgres logs: docker logs kuest-postgres
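Before blaming the network, it can help to confirm the URL itself points where you expect. A small sketch using only POSIX parameter expansion (the example URL is a placeholder):

```shell
# Pull host:port out of POSTGRES_URL to sanity-check the connection target.
POSTGRES_URL="postgresql://kuest:secret@postgres:5432/kuest?sslmode=disable"
HOSTPORT="${POSTGRES_URL#*@}"   # drop the scheme and credentials
HOSTPORT="${HOSTPORT%%/*}"      # drop the database name and query string
echo "connecting to: $HOSTPORT" # → connecting to: postgres:5432
```

Inside the compose network the host should be the service name (postgres), not localhost.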

Caddy SSL Issues

  1. Ensure ports 80 and 443 are open
  2. Verify DNS points to your server
  3. Check Caddy logs: docker logs kuest-caddy
  4. Verify CADDY_DOMAIN matches your DNS

Out of Memory

# Check container resource usage
docker stats

# Increase container memory limits in docker-compose.yml:
services:
  web:
    deploy:
      resources:
        limits:
          memory: 2G

Security Best Practices

  1. Use secrets management: Store sensitive environment variables in Docker secrets or an external vault
  2. Regular updates: Keep base images current by pulling and rebuilding regularly
  3. Non-root user: The Dockerfile already runs as the non-root user nextjs
  4. Network isolation: Use Docker networks to isolate services
  5. Volume backups: Regularly back up the PostgreSQL and Caddy volumes
  6. Security scanning: Scan images for vulnerabilities, e.g. with docker scout cves or trivy (the legacy docker scan command has been removed)
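Volume backups (point 5) can be scripted with pg_dump. A sketch assuming the local-postgres profile and its kuest-postgres container:

```shell
# Dump the database to a timestamped, compressed file.
BACKUP_DIR=backups
mkdir -p "$BACKUP_DIR"
STAMP="$(date +%Y%m%d-%H%M%S)"
BACKUP_FILE="$BACKUP_DIR/kuest-$STAMP.sql.gz"

# Guarded so the sketch is a no-op on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  docker exec kuest-postgres pg_dump -U kuest kuest | gzip > "$BACKUP_FILE" \
    || echo "backup failed -- is kuest-postgres running?" >&2
fi
```

Schedule this from cron and copy the resulting files off the host; a backup that lives only on the server it protects is not a backup.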

Performance Optimization

Use Build Cache

# Build (the layer cache is used automatically)
docker compose build

# Clean build, ignoring the cache
docker compose build --no-cache

Resource Limits

Add resource limits in docker-compose.production.yml:
services:
  web:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G

Volume Performance

For better I/O in production, bind the Postgres data volume to fast local storage:
volumes:
  kuest-postgres-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/ssd/postgres-data
