
Overview

Mimir AIP can be deployed in two ways:
  • Docker Compose — Ideal for local development and testing. Quick to set up but does not support worker jobs.
  • Kubernetes with Helm — Full production deployment with worker job execution, horizontal scaling, and multi-cluster support.
This guide covers both deployment methods in detail.

Docker Compose Installation

Prerequisites

  • Docker — version 20.10 or later
  • Docker Compose — version 2.0 or later
  • Git — for cloning the repository

Installation Steps

1. Clone the Repository

git clone https://github.com/mimir-aip/mimir-aip-go
cd mimir-aip-go

2. Start the Services

docker compose up --build
This builds and starts:
  • Orchestrator on port 8080
  • Frontend on port 3000
  • Persistent volume for SQLite database

3. Verify Installation

curl http://localhost:8080/health
Expected response:
{
  "status": "healthy",
  "timestamp": "2026-03-01T12:34:56Z"
}
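The curl check above can be scripted for CI or startup wait-loops. The sketch below is a minimal example that extracts the status field from the health JSON with sed (no jq dependency); the response is hard-coded here, and in a live deployment you would swap in `curl -s http://localhost:8080/health`.

```shell
# Minimal sketch: parse the "status" field out of the /health JSON.
# Replace the hard-coded sample with:
#   response=$(curl -s http://localhost:8080/health)
health_status() {
  printf '%s\n' "$1" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

response='{"status": "healthy", "timestamp": "2026-03-01T12:34:56Z"}'
if [ "$(health_status "$response")" = "healthy" ]; then
  echo "orchestrator is healthy"
else
  echo "orchestrator is unhealthy or unreachable" >&2
  exit 1
fi
```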
Limitation: Docker Compose deployment does not support worker jobs. Pipeline execution, ML training, inference, and digital twin synchronization require Kubernetes.

Configuration

Customize docker-compose.yaml to adjust settings:
docker-compose.yaml
services:
  orchestrator:
    environment:
      - ENVIRONMENT=production      # production or development
      - LOG_LEVEL=info             # debug, info, warn, error
      - PORT=8080
      - STORAGE_DIR=/app/data
      - MIN_WORKERS=1
      - MAX_WORKERS=10
      - QUEUE_THRESHOLD=5
    ports:
      - "8080:8080"
    volumes:
      - orchestrator-data:/app/data
Variable        | Default    | Description
----------------|------------|------------
ENVIRONMENT     | production | Runtime environment label
LOG_LEVEL       | info       | Log verbosity level (debug, info, warn, error)
PORT            | 8080       | Orchestrator HTTP port
STORAGE_DIR     | /app/data  | Directory for SQLite database and file storage
MIN_WORKERS     | 1          | Minimum concurrent worker jobs (Kubernetes only)
MAX_WORKERS     | 10         | Maximum concurrent worker jobs (Kubernetes only)
QUEUE_THRESHOLD | 5          | Queued tasks before spinning up additional workers
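The three worker variables interact: the orchestrator grows the worker pool as tasks queue up, bounded by MIN_WORKERS and MAX_WORKERS. The sketch below is illustrative arithmetic only, not the orchestrator's actual scaling code — it assumes one extra worker per QUEUE_THRESHOLD queued tasks, clamped to the configured bounds.

```shell
# Illustrative only: one plausible reading of how MIN_WORKERS, MAX_WORKERS,
# and QUEUE_THRESHOLD could size the worker pool. Not Mimir AIP's real policy.
MIN_WORKERS=1
MAX_WORKERS=10
QUEUE_THRESHOLD=5

desired_workers() {
  queued=$1
  # one extra worker per QUEUE_THRESHOLD queued tasks, clamped to [MIN, MAX]
  want=$(( MIN_WORKERS + queued / QUEUE_THRESHOLD ))
  [ "$want" -lt "$MIN_WORKERS" ] && want=$MIN_WORKERS
  [ "$want" -gt "$MAX_WORKERS" ] && want=$MAX_WORKERS
  echo "$want"
}

desired_workers 0    # -> 1
desired_workers 12   # -> 3
desired_workers 999  # -> 10 (clamped to MAX_WORKERS)
```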

Kubernetes Installation

Prerequisites

  • Kubernetes — cluster version 1.25 or later
  • kubectl — configured to access your cluster
  • Helm — version 3.0 or later
Verify your setup:
kubectl version
helm version --short
Note: kubectl's --short flag was removed in kubectl 1.28; plain kubectl version prints the same information.

Installation Steps

1. Clone the Repository

git clone https://github.com/mimir-aip/mimir-aip-go
cd mimir-aip-go

2. Install with Helm

Install Mimir AIP using the provided Helm chart:
helm install mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --create-namespace
The Helm chart deploys:
  • Orchestrator deployment with persistent storage
  • Frontend deployment
  • Services for orchestrator and frontend
  • RBAC resources (ServiceAccount, ClusterRole, ClusterRoleBinding)
  • NetworkPolicy resources (if enabled)
  • ConfigMap for worker cluster configuration
  • PersistentVolumeClaim for SQLite database (10Gi by default)
The chart uses public images from ghcr.io/mimir-aip — no manual build or registry authentication required.

3. Verify Installation

Check that all pods are running:
kubectl get pods -n mimir-aip
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
mimir-aip-orchestrator-7d4f8b9c5d-xk7qm   1/1     Running   0          2m
mimir-aip-frontend-6b8f7c9d5e-pm3n8       1/1     Running   0          2m

4. Access the Services

Forward ports to access the services locally:
# Orchestrator API
kubectl port-forward -n mimir-aip svc/mimir-aip-orchestrator 8080:8080

# Web Frontend (in a separate terminal)
kubectl port-forward -n mimir-aip svc/mimir-aip-frontend 3000:80
Access the services at:
  • Orchestrator API: http://localhost:8080
  • Web Frontend: http://localhost:3000
Configuration Options

Pin a Specific Version

helm install mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --create-namespace \
  --set image.tag=0.1.1

Use Custom Storage Class

helm install mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --create-namespace \
  --set orchestrator.persistence.storageClass=fast-ssd \
  --set orchestrator.persistence.size=50Gi
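If you prefer to keep overrides out of the command line, the same two settings can live in a values file (the keys match the chart's values layout):

```yaml
# values fragment: equivalent to the two --set flags above
orchestrator:
  persistence:
    storageClass: fast-ssd
    size: 50Gi
```

Pass it with -f my-values.yaml instead of the --set flags.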

Custom Values File

Create a my-values.yaml file:
my-values.yaml
# Container image settings
image:
  registry: ghcr.io/mimir-aip
  tag: "0.1.1"                    # pin to specific version
  pullPolicy: IfNotPresent

# Orchestrator configuration
orchestrator:
  replicas: 1
  environment: production
  logLevel: debug                 # enable debug logging
  minWorkers: 2
  maxWorkers: 20                  # increase worker capacity
  queueThreshold: 8
  
  resources:
    requests:
      cpu: "1000m"
      memory: "2Gi"
    limits:
      cpu: "2000m"
      memory: "4Gi"
  
  persistence:
    enabled: true
    size: 50Gi
    storageClass: fast-ssd
    accessMode: ReadWriteOnce

# Frontend configuration
frontend:
  enabled: true
  replicas: 2                     # run multiple frontend instances
  serviceType: ClusterIP          # use with Ingress instead of LoadBalancer
  
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "200m"
      memory: "256Mi"

# RBAC and NetworkPolicy
rbac:
  create: true

networkPolicy:
  enabled: true

# Worker authentication token (optional)
workerAuthToken: "your-secure-token-here"
Install with your custom values:
helm install mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --create-namespace \
  -f my-values.yaml

Multi-Cluster Configuration

Mimir AIP supports dispatching workers to multiple Kubernetes clusters for edge and cloud deployments.
Add remote clusters to your values file:
my-values.yaml
additionalClusters:
  - name: site-b
    orchestratorURL: http://192.168.10.5:8080
    maxWorkers: 50
    namespace: mimir-aip
    serviceAccount: mimir-worker
    kubeconfig: |
      apiVersion: v1
      kind: Config
      clusters:
        - name: site-b
          cluster:
            server: https://site-b.example.com:6443
            certificate-authority-data: LS0tLS...
      users:
        - name: mimir-worker
          user:
            token: eyJhbGci...
      contexts:
        - name: site-b
          context:
            cluster: site-b
            user: mimir-worker
      current-context: site-b

  - name: cloud-burst
    orchestratorURL: http://mimir-orchestrator.default.svc.cluster.local:8080
    maxWorkers: 100
    namespace: mimir-aip
    serviceAccount: mimir-worker
    kubeconfig: |
      apiVersion: v1
      kind: Config
      # ... cloud cluster configuration
Workers will overflow to these clusters in declaration order once the primary cluster reaches capacity.
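To make the overflow rule concrete, here is a small illustrative shell sketch — not Mimir AIP's actual scheduler — of demand filling the primary cluster up to its capacity, then spilling to each additionalClusters entry in declaration order:

```shell
# Illustrative only: overflow dispatch in declaration order.
# Each argument after the demand is a hypothetical "name:maxWorkers" pair.
dispatch() {
  remaining=$1; shift
  for spec in "$@"; do
    name=${spec%%:*}
    cap=${spec##*:}
    take=$(( remaining < cap ? remaining : cap ))
    echo "$name=$take"
    remaining=$(( remaining - take ))
    [ "$remaining" -le 0 ] && break
  done
}

# 60 workers needed: primary (cap 10) fills first, site-b takes the rest.
dispatch 60 primary:10 site-b:50 cloud-burst:100
# -> primary=10
# -> site-b=50
```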

Upgrading

Upgrade to a new release:
helm upgrade mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --set image.tag=0.2.0
Or with a custom values file:
helm upgrade mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  -f my-values.yaml

Uninstalling

Uninstall Mimir AIP (PVC is retained by default):
helm uninstall mimir-aip --namespace mimir-aip
To also delete the namespace and PVC:
helm uninstall mimir-aip --namespace mimir-aip
kubectl delete namespace mimir-aip
Deleting the namespace will remove the PersistentVolumeClaim and all stored data. This action cannot be undone.

Building Custom Images

If you need to modify the source code and build custom images:

Prerequisites

  • Go — version 1.21 or later
  • Docker — for building images

Build and Push Images

1. Set Registry Variable

export REGISTRY=ghcr.io/your-org

2. Build All Images

make build-all REGISTRY=$REGISTRY
This builds:
  • ${REGISTRY}/orchestrator:latest
  • ${REGISTRY}/worker:latest
  • ${REGISTRY}/frontend:latest
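The image references follow a simple REGISTRY/component:TAG pattern. This sketch just prints the names listed above; the naming assumption comes from that list, not from reading the Makefile.

```shell
# Compose the image references from REGISTRY and TAG
# (pattern taken from the "This builds:" list above).
REGISTRY=ghcr.io/your-org
TAG=latest
for component in orchestrator worker frontend; do
  echo "${REGISTRY}/${component}:${TAG}"
done
```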

3. Push to Registry

# Login to your registry
docker login ghcr.io

# Push images
make push-all REGISTRY=$REGISTRY

4. Deploy with Custom Images

helm install mimir-aip ./helm/mimir-aip \
  --namespace mimir-aip \
  --create-namespace \
  --set image.registry=$REGISTRY \
  --set image.tag=latest
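The same registry and tag override can be recorded in a values file instead (fragment matching the image block of the chart's values):

```yaml
# values fragment: equivalent to the two --set flags above
image:
  registry: ghcr.io/your-org   # your $REGISTRY
  tag: latest
```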

Individual Image Builds

make build-orchestrator REGISTRY=$REGISTRY TAG=v1.0.0

Local Development

Run components locally without Docker:
# Run orchestrator (requires Go)
make dev-orchestrator

# Run frontend server (in separate terminal)
make dev-frontend

Troubleshooting

Pods Not Starting

Check pod logs:
kubectl logs -n mimir-aip -l app=orchestrator
kubectl logs -n mimir-aip -l app=frontend
Common issues:
  • Image pull errors: verify the registry and image tags
  • PVC pending: check the storage class and cluster storage provisioner
  • Crash loops: check environment variables and database initialization

PVC Stuck in Pending

Check PVC status:
kubectl get pvc -n mimir-aip
kubectl describe pvc -n mimir-aip mimir-aip-orchestrator-data
Solutions:
  • Verify your cluster has a default storage class: kubectl get storageclass
  • Specify a storage class explicitly: --set orchestrator.persistence.storageClass=standard
  • Check the storage provisioner logs

Worker Jobs Not Created

Verify RBAC permissions (ClusterRoles and ClusterRoleBindings are cluster-scoped, so they take no namespace flag):
kubectl get clusterrole mimir-worker
kubectl get clusterrolebinding mimir-worker
kubectl get serviceaccount mimir-worker -n mimir-aip
Check the orchestrator logs for job creation errors:
kubectl logs -n mimir-aip -l app=orchestrator | grep -i "worker\|job"

Services Not Reachable

Check service status:
kubectl get svc -n mimir-aip
Verify the pods are running:
kubectl get pods -n mimir-aip
Try a direct pod port-forward (substitute your actual pod name):
kubectl port-forward -n mimir-aip pod/mimir-aip-orchestrator-xxx 8080:8080

Failed Upgrade

Roll back to the previous release:
helm rollback mimir-aip --namespace mimir-aip
Check the release history:
helm history mimir-aip --namespace mimir-aip

Next Steps

  • Create Your First Project — learn how to create and manage projects
  • Build a Pipeline — design data ingestion and processing pipelines
  • MCP Integration — connect AI agents to Mimir AIP
  • API Reference — explore the REST API
