Docker Compose provides a simple way to deploy Infrahub for development, testing, and single-node production environments. This page covers deployment patterns, configuration, and scaling.

Prerequisites

  • Docker (version 24.x minimum)
  • Docker Compose (version 2.x)
  • 8 GB RAM minimum (16 GB recommended for production)
  • 50 GB disk space minimum

Quick start

Deploy the latest Infrahub Community version:
curl https://infrahub.opsmill.io > docker-compose.yml
docker compose -p infrahub up -d
Access Infrahub at http://localhost:8000.

Docker Compose file structure

The Infrahub Docker Compose file defines the following services:

Services overview

Service          | Image                                  | Purpose                    | Default port
-----------------|----------------------------------------|----------------------------|-------------
infrahub-server  | registry.opsmill.io/opsmill/infrahub   | API server and UI          | 8000
task-worker      | registry.opsmill.io/opsmill/infrahub   | Background task execution  | -
database         | neo4j:2025.10.1-community              | Graph database             | 7687
message-queue    | rabbitmq:4.2.1-management              | Message broker             | 5672
cache            | redis:8.4.0                            | Cache and locking          | 6379
task-manager     | registry.opsmill.io/opsmill/infrahub   | Prefect workflow server    | 4200
task-manager-db  | pgautoupgrade/pgautoupgrade:18-alpine  | Prefect database           | 5432

Environment variables

The Docker Compose file uses YAML anchors to share environment variables across services:
x-infrahub-config: &infrahub_config
  INFRAHUB_DB_ADDRESS: ${INFRAHUB_DB_ADDRESS:-database}
  INFRAHUB_BROKER_ADDRESS: ${INFRAHUB_BROKER_ADDRESS:-message-queue}
  INFRAHUB_CACHE_ADDRESS: ${INFRAHUB_CACHE_ADDRESS:-cache}
  INFRAHUB_WORKFLOW_ADDRESS: ${INFRAHUB_WORKFLOW_ADDRESS:-task-manager}
  # ... additional variables
All services inherit these base configurations and can override specific values.
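For example, a service can merge the shared anchor into its own environment block and then override individual keys. An illustrative sketch, assuming the `x-infrahub-config` anchor shown above (the `INFRAHUB_LOG_LEVEL` override is just an example value):

```yaml
services:
  infrahub-server:
    environment:
      <<: *infrahub_config
      # Per-service override: keys listed after the merge win
      INFRAHUB_LOG_LEVEL: DEBUG
```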

Configuration

Override default settings

Create a .env file in the same directory as docker-compose.yml:
.env
# Database settings
INFRAHUB_DB_PASSWORD=your-secure-password
INFRAHUB_DB_USERNAME=neo4j

# API settings
INFRAHUB_LOG_LEVEL=INFO
INFRAHUB_PRODUCTION=true
INFRAHUB_ALLOW_ANONYMOUS_ACCESS=false

# Security
INFRAHUB_INITIAL_ADMIN_TOKEN=your-secure-token
INFRAHUB_SECURITY_SECRET_KEY=your-secret-key

# Scaling
WEB_CONCURRENCY=8
Docker Compose automatically loads variables from this file.
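Secrets such as INFRAHUB_INITIAL_ADMIN_TOKEN and INFRAHUB_SECURITY_SECRET_KEY should be long random values rather than placeholders. One way to generate them, assuming openssl is available on the host:

```shell
# Generate a random 64-character hex string suitable for tokens and secret keys
openssl rand -hex 32
```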

Enable S3 storage

For production deployments with multiple API server replicas, configure S3 storage:
docker-compose.override.yml
services:
  infrahub-server:
    environment:
      INFRAHUB_STORAGE_DRIVER: s3
      AWS_ACCESS_KEY_ID: your-access-key
      AWS_SECRET_ACCESS_KEY: your-secret-key
      AWS_S3_BUCKET_NAME: infrahub-artifacts
      AWS_S3_ENDPOINT_URL: https://s3.amazonaws.com
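For testing the S3 integration without an AWS account, an S3-compatible store such as MinIO can run alongside the stack. An illustrative sketch only: the `object-store` service name, ports, and credentials are placeholders, and `AWS_S3_ENDPOINT_URL` above would then point at `http://object-store:9000`:

```yaml
services:
  object-store:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: your-access-key
      MINIO_ROOT_PASSWORD: your-secret-key
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```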

Configure TLS for databases

Enable TLS for Neo4j, Redis, and RabbitMQ:
docker-compose.override.yml
services:
  infrahub-server:
    environment:
      INFRAHUB_DB_TLS_ENABLED: "true"
      INFRAHUB_DB_TLS_CA_FILE: /certs/ca.pem
      INFRAHUB_CACHE_TLS_ENABLED: "true"
      INFRAHUB_BROKER_TLS_ENABLED: "true"
    volumes:
      - ./certs:/certs:ro

Scaling

Scale API servers

Increase the number of API server replicas:
docker compose up -d --scale infrahub-server=3
Add a load balancer (HAProxy or NGINX) to distribute traffic. See the architecture page for load balancer configuration examples.

Requirements when scaling API servers:
  • S3-compatible object storage must be configured
  • Shared volumes are not supported for multi-replica deployments
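One way to front the replicas is an extra proxy service in an override file. A minimal sketch: the `proxy` service name is arbitrary, and the referenced `nginx.conf` is assumed to proxy to `http://infrahub-server:8000`, which Docker's embedded DNS resolves across all replicas:

```yaml
services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - infrahub-server
```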

Scale task workers

Increase task worker replicas to handle more background tasks:
docker-compose.override.yml
services:
  task-worker:
    deploy:
      replicas: 4
Apply the changes:
docker compose up -d

Adjust worker concurrency

Control how many Gunicorn workers each API server runs:
.env
WEB_CONCURRENCY=8
Control how many concurrent messages each task worker processes:
.env
INFRAHUB_BROKER_MAXIMUM_CONCURRENT_MESSAGES=4
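A common Gunicorn sizing heuristic (not an Infrahub-specific recommendation) is two workers per CPU core plus one. On a Linux host with nproc available, a starting value for WEB_CONCURRENCY can be computed as:

```shell
# Suggested WEB_CONCURRENCY: 2 x CPU cores + 1 (common Gunicorn heuristic)
echo $((2 * $(nproc) + 1))
```

Treat the result as a starting point and adjust based on observed memory and CPU usage.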

Persistent storage

Default volumes

The Docker Compose file creates the following named volumes:
volumes:
  database_data:      # Neo4j database files
  database_logs:      # Neo4j logs
  storage_data:       # Local artifact storage
  workflow_db:        # Prefect PostgreSQL data
  workflow_data:      # Prefect flow data

Backup volumes

Back up named volumes using Docker:
# Backup Neo4j database volume
docker run --rm \
  -v infrahub_database_data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/database_data.tar.gz -C /data .

# Backup artifacts volume
docker run --rm \
  -v infrahub_storage_data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/storage_data.tar.gz -C /data .
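A matching restore sketch, mirroring the backup commands above. Stop the stack first so the database is not writing to the volume while the archive is unpacked:

```shell
# Restore the Neo4j database volume from a backup archive
docker run --rm \
  -v infrahub_database_data:/data \
  -v $(pwd):/backup \
  alpine sh -c "tar xzf /backup/database_data.tar.gz -C /data"
```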

Use external volumes

Mount external directories for persistence:
docker-compose.override.yml
services:
  database:
    volumes:
      - /mnt/storage/neo4j/data:/data
      - /mnt/storage/neo4j/logs:/logs

  infrahub-server:
    volumes:
      - /mnt/storage/infrahub/artifacts:/opt/infrahub/storage

Network configuration

Expose additional ports

Expose Neo4j HTTP interface and RabbitMQ management UI:
docker-compose.override.yml
services:
  database:
    ports:
      - "7474:7474"  # Neo4j HTTP
      - "7687:7687"  # Neo4j Bolt

  message-queue:
    ports:
      - "15672:15672"  # RabbitMQ management UI

Use custom networks

Define custom networks for service isolation:
docker-compose.override.yml
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

services:
  infrahub-server:
    networks:
      - frontend
      - backend

  database:
    networks:
      - backend

Health checks

All services include health check configurations:
services:
  infrahub-server:
    healthcheck:
      test: curl -s -f -o /dev/null http://localhost:8000/api/config || exit 1
      interval: 5s
      timeout: 5s
      retries: 20
      start_period: 10s
Check service health:
docker compose ps
Healthy services show (healthy) in the status column.
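On slower hosts the defaults above can mark a starting service unhealthy too early. An override file can relax the timing; the values below are illustrative, and Compose merges these keys with the base file's healthcheck:

```yaml
services:
  infrahub-server:
    healthcheck:
      start_period: 60s
      retries: 40
```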

Troubleshooting

View service logs

View logs for all services:
docker compose logs -f
View logs for a specific service:
docker compose logs -f infrahub-server

Restart services

Restart a specific service:
docker compose restart infrahub-server
Restart all services:
docker compose restart

Reset environment

Stop and remove all containers, networks, and volumes:
docker compose down -v
This removes all data; ensure you have backups before running this command. To stop the services while keeping their data, run docker compose down without the -v flag.

Service dependencies

Services start in dependency order defined by depends_on with health checks:
services:
  infrahub-server:
    depends_on:
      database:
        condition: service_healthy
      message-queue:
        condition: service_healthy
      cache:
        condition: service_healthy
      task-manager:
        condition: service_healthy
If a dependency is unhealthy, dependent services will not start.

Production recommendations

Security hardening

  1. Change default passwords and tokens
  2. Disable anonymous access: INFRAHUB_ALLOW_ANONYMOUS_ACCESS=false
  3. Enable TLS for all database connections
  4. Use secrets management instead of environment variables
  5. Restrict network access using firewalls

Performance tuning

  1. Increase Neo4j heap and page cache based on available RAM
  2. Increase WEB_CONCURRENCY for API servers
  3. Add more task worker replicas for background tasks
  4. Use SSD storage for Neo4j database volumes
  5. Configure S3 storage for artifacts
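For item 1, the Neo4j image accepts server settings as environment variables, with dots in the setting name encoded as underscores (and literal underscores doubled). An illustrative override for docker-compose.override.yml; the sizes shown are placeholders to be adjusted to the host's available RAM:

```yaml
services:
  database:
    environment:
      NEO4J_server_memory_heap_initial__size: 4G
      NEO4J_server_memory_heap_max__size: 4G
      NEO4J_server_memory_pagecache_size: 2G
```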

Monitoring

  1. Enable Prometheus metrics export
  2. Configure log aggregation (see monitoring page)
  3. Set up health check monitoring
  4. Monitor disk space usage
  5. Track container resource usage
