The development setup (compose.yaml) includes additional services such as a local mail catcher, an ACME test server, and a Keycloak identity provider. The production setup (compose.prod.yaml) strips those down to the essential services.
Prerequisites
- Docker 20 or later
- Docker Compose v2 (`docker compose`, not `docker-compose`)
Services overview
postgres — primary database
PostgreSQL is Probo’s only persistent data store. The compose configuration mounts an init script that creates the `probod` user and database on first start.

- Image: `postgres:17.4` (production) / `postgres:18.1` (dev)
- Port: `5432`
- Data volume: `postgres-data`
- Health check: `pg_isready`
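The facts above correspond to a compose service roughly like the following sketch (the data-directory path and health-check parameters are illustrative, not Probo's exact configuration):

```yaml
postgres:
  image: postgres:17.4
  ports:
    - "5432:5432"
  volumes:
    - postgres-data:/var/lib/postgresql/data
  healthcheck:
    # pg_isready exits 0 once the server accepts connections
    test: ["CMD-SHELL", "pg_isready -U probod"]
    interval: 5s
    timeout: 5s
    retries: 10
```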
seaweedfs — S3-compatible file storage
SeaweedFS provides an S3-compatible object storage endpoint used for uploaded files (evidence, policies, documents). In production you may replace it with AWS S3, MinIO, Cloudflare R2, or any S3-compatible service.
- Image: `chrislusf/seaweedfs:4.13`
- S3 port: `8333`
- Data volume: `seaweedfs-data`
- Credentials are configured in `compose/seaweedfs/s3.json`
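SeaweedFS reads S3 identities from a JSON configuration file; as a sketch, such an `s3.json` can look like the following (the identity name, keys, and action list here are placeholders, not Probo's actual values):

```json
{
  "identities": [
    {
      "name": "probod",
      "credentials": [
        { "accessKey": "CHANGE_ME_ACCESS_KEY", "secretKey": "CHANGE_ME_SECRET_KEY" }
      ],
      "actions": ["Admin", "Read", "Write", "List"]
    }
  ]
}
```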
chrome — headless browser for PDF generation
Probo uses headless Chrome to render compliance reports and trust center pages as PDFs. The DevTools Protocol port is exposed so `probod` can connect.

- Image: `chromedp/headless-shell:140.0.7259.2`
- Port: `9222` (Chrome DevTools Protocol)
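As a sketch, the corresponding compose service amounts to little more than exposing the DevTools port:

```yaml
chrome:
  image: chromedp/headless-shell:140.0.7259.2
  ports:
    - "9222:9222"  # Chrome DevTools Protocol, used by probod for PDF rendering
```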
mailpit — local email testing (dev only)
Mailpit catches all outgoing SMTP email during development so you can inspect notifications without sending real messages.
- Image: `axllent/mailpit:latest`
- SMTP port: `1025`
- Web UI: http://localhost:8025
pebble / pebble-challtestsrv — ACME test server (dev only)
Pebble is a local Let’s Encrypt-compatible ACME server for testing custom domain TLS certificate provisioning without hitting production rate limits.
- ACME directory: `https://localhost:14000/dir`
- Pebble challenge test server: port `8055` (HTTP-01), `8053` (DNS)
In production, point `probod.custom-domains.acme.directory` at the real Let’s Encrypt endpoint.

keycloak — identity provider (dev / test only)
Keycloak provides a local SAML/OIDC identity provider for testing SSO flows during development. It is pre-configured with a
`probo` realm.

- Image: `quay.io/keycloak/keycloak:latest`
- Port: `8082` (mapped from internal `8080`)
- Admin credentials: `admin` / `admin`
Deploying with Docker Compose
Create a configuration file
Copy the example config and edit it for your environment. Update at minimum:

- `probod.base-url` — your public URL
- `probod.encryption-key` — a strong 32-byte base64 key
- `probod.identity-and-access-management.password.pepper` — a strong random string
- `probod.identity-and-access-management.session.cookie.secret` — a strong random string
- `probod.pg` — your database credentials
- `probod.aws` — your S3 storage credentials
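The encryption key, pepper, and cookie secret can each be generated with a standard tool; for example, assuming `openssl` is available:

```shell
# Each invocation prints a fresh 32-byte random value, base64-encoded.
openssl rand -base64 32   # probod.encryption-key
openssl rand -base64 32   # password pepper
openssl rand -base64 32   # session cookie secret
```

Any cryptographically random 32-byte base64 value works; just generate a distinct one per setting.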
Start the infrastructure
`docker compose -f compose.yaml up -d` starts all services, including the observability stack. Wait for the health check on the postgres service to pass before proceeding. To stop all services, run `docker compose -f compose.yaml down`.

Build the binary

Build the full `probod` binary (includes the frontend console and trust center apps). For backend-only builds (faster, skips frontend compilation), use the backend-only build target. The binary is written to `bin/probod`.

Using the production compose file
compose.prod.yaml defines a minimal production stack. It uses the published Docker image instead of a locally built binary and configures probod via environment variables.
| Variable | Description |
|---|---|
| `PROBOD_ENCRYPTION_KEY` | 32-byte base64 encryption key |
| `AUTH_COOKIE_SECRET` | Cookie signing secret (32+ bytes) |
| `AUTH_PASSWORD_PEPPER` | Password hashing pepper (32+ bytes) |
| `TRUST_AUTH_TOKEN_SECRET` | Trust center auth token secret |
| `PROBOD_BASE_URL` | Public URL of your instance |
| `API_ADDR` | API listen address |
| `API_CORS_ALLOWED_ORIGINS` | Comma-separated allowed CORS origins |
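In compose terms these variables land in the service's `environment` block. A sketch, assuming the secrets are interpolated from the shell or a `.env` file (the image name and URLs here are illustrative, not Probo's published values):

```yaml
probo:
  image: probo/probo:latest   # illustrative; use the actual published image
  environment:
    PROBOD_ENCRYPTION_KEY: ${PROBOD_ENCRYPTION_KEY}
    AUTH_COOKIE_SECRET: ${AUTH_COOKIE_SECRET}
    AUTH_PASSWORD_PEPPER: ${AUTH_PASSWORD_PEPPER}
    TRUST_AUTH_TOKEN_SECRET: ${TRUST_AUTH_TOKEN_SECRET}
    PROBOD_BASE_URL: https://probo.example.com
    API_ADDR: ":8080"
    API_CORS_ALLOWED_ORIGINS: https://probo.example.com
```

Docker Compose automatically reads a `.env` file next to the compose file for `${…}` interpolation, which keeps secrets out of the compose file itself.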
Volume mounts and data persistence
| Volume | Service | Contents |
|---|---|---|
| `postgres-data` | postgres | All relational data |
| `seaweedfs-data` | seaweedfs | Uploaded files and object data |
| `probo-data` | probo | Generated config (when using env-var bootstrap) |
Back up `postgres-data` and `seaweedfs-data` regularly. Probo’s PostgreSQL database is the source of truth for all compliance data.
Health checks and restart policies
The `postgres` service has a Docker health check (`pg_isready`). The `probo` service in compose.prod.yaml declares a `depends_on` condition on `postgres` with `service_healthy`, so it will not start until the database is ready.
For production deployments, add restart: unless-stopped to each service to ensure automatic recovery after a host reboot:
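Both patterns together look roughly like this in a compose file (a sketch; service names follow the sections above):

```yaml
probo:
  restart: unless-stopped   # recover automatically after crashes and host reboots
  depends_on:
    postgres:
      condition: service_healthy   # block startup until pg_isready passes
postgres:
  restart: unless-stopped
```

Unlike `restart: always`, `unless-stopped` will not resurrect a container you stopped deliberately.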
Production considerations
Use an external PostgreSQL instance
For production, consider using a managed PostgreSQL service (e.g. AWS RDS, Cloud SQL, Supabase) instead of the containerized postgres service. Update `probod.pg.addr` to point at the external host and remove the postgres service from your compose file.

Use AWS S3 or a managed object store
SeaweedFS is included for convenience. In production, replace it with a managed S3-compatible service. Set the `probod.aws` section to your provider’s credentials and endpoint, and remove the seaweedfs service. If using native AWS S3, omit `endpoint` entirely.

Remove dev-only services
The following services from compose.yaml are for development only and should not run in production:

- mailpit — replace with real SMTP (`probod.notifications.mailer.smtp`)
- pebble / pebble-challtestsrv — replace with the Let’s Encrypt production ACME endpoint
- keycloak — replace with your real identity provider
TLS termination
Terminate TLS at a reverse proxy (nginx, Caddy, AWS ALB) in front of the
`probod` API port (8080). The trust center HTTPS listener (8443) handles its own TLS using certificates provisioned via ACME.

Configuration reference
Complete reference for every configuration option.
Observability
Set up metrics, distributed tracing, and log aggregation.