Hatchet provides official Helm charts for Kubernetes deployments. This guide covers deploying the hatchet-stack chart for a standard production setup and the hatchet-ha chart for high-availability configurations.

Prerequisites

  • A Kubernetes cluster configured as the current context in kubectl
  • kubectl and helm installed

Components

The Helm chart deploys the following services:
Service                  | Description
------------------------ | -----------------------------------------------------
hatchet-engine           | gRPC server for task scheduling and dispatch
hatchet-stack-api        | REST API server
hatchet-stack-frontend   | Web dashboard
postgres                 | PostgreSQL instance (can be replaced by a managed DB)
rabbitmq                 | RabbitMQ instance (can be replaced by a managed queue)

Setup

Step 1: Add the Hatchet Helm repository

helm repo add hatchet https://hatchet-dev.github.io/hatchet-charts
helm repo update
Step 2: Install the hatchet-stack chart

The default installation runs all components inside the cluster and exposes them through a Caddy reverse proxy for local access:
helm install hatchet-stack hatchet/hatchet-stack --set caddy.enabled=true
caddy.enabled=true adds a sidecar proxy that makes the dashboard accessible via port-forwarding during development. For production, configure an ingress instead — see the networking section below.
Step 3: Access the dashboard

Port-forward the Caddy pod to access the dashboard locally:
export NAMESPACE=default
export POD_NAME=$(kubectl get pods --namespace $NAMESPACE -l "app=caddy" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace $NAMESPACE $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
kubectl --namespace $NAMESPACE port-forward $POD_NAME 8080:$CONTAINER_PORT
Then open http://localhost:8080 and log in:
Email:    [email protected]
Password: Admin123!!
Change the default credentials before exposing Hatchet to any external network.
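Rather than picking a replacement password by hand, you can generate one locally and then supply it through the chart's sharedConfig.defaultAdminPassword value (shown in the configuration section below). A minimal sketch, assuming openssl is available:

```shell
# Generate a random replacement for the default admin password.
# 18 random bytes base64-encode to a 24-character string with no padding.
NEW_ADMIN_PASSWORD="$(openssl rand -base64 18)"
echo "${#NEW_ADMIN_PASSWORD}"   # prints the length (24 for 18 random bytes)
```

Store the generated value in a secret manager rather than in values.yaml committed to version control.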
Step 4: Port-forward the engine gRPC port

To connect SDK workers from outside the cluster, port-forward the engine service:
export NAMESPACE=default
export POD_NAME=$(kubectl get pods --namespace $NAMESPACE -l "app.kubernetes.io/name=engine" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace $NAMESPACE $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
kubectl --namespace $NAMESPACE port-forward $POD_NAME 7070:$CONTAINER_PORT
The engine gRPC API is now available at localhost:7070.
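Before pointing a worker at the forwarded port, you can confirm something is actually listening there. A quick sketch using bash's built-in /dev/tcp redirection (requires bash, not sh):

```shell
# Succeeds only if a listener accepts a TCP connection on the forwarded port
if timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/7070' 2>/dev/null; then
  echo "engine port is reachable"
else
  echo "engine port is NOT reachable (is the port-forward still running?)"
fi
```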
Step 5: Generate an API token

In the dashboard, navigate to Settings > API Tokens and click Generate API Token. Copy and store the token securely.
Step 6: Connect your SDK

Set the following environment variables in your worker processes:
export HATCHET_CLIENT_TOKEN="<your-api-token>"
export HATCHET_CLIENT_HOST_PORT="localhost:7070"  # or the engine's external address
See the quickstart for a full example.

Key configuration

The sharedConfig section in your values.yaml controls the most important settings for all backend services:
values.yaml
sharedConfig:
  serverUrl: "https://hatchet.example.com"         # Public URL of the API/dashboard
  grpcBroadcastAddress: "engine.hatchet.example.com:443"  # External gRPC address for SDK connections
  grpcInsecure: "false"                             # Set to "true" only for development
  serverAuthCookieDomain: "hatchet.example.com"     # Match your public domain
  serverAuthCookieInsecure: "f"                     # Set to "t" only for HTTP (dev)
  defaultAdminEmail: "[email protected]"            # Change for production
  defaultAdminPassword: "Admin123!!"                # Change for production

External database

To use a managed Postgres instance (AWS RDS, Google Cloud SQL, etc.) instead of the bundled one, disable the chart’s Postgres and pass your connection string:
values.yaml
postgres:
  enabled: false

sharedConfig:
  env:
    DATABASE_URL: "postgresql://user:password@your-db-host:5432/hatchet?sslmode=require"
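Because the connection string is passed straight through to the backend services, it is worth checking that it enforces TLS before deploying. A minimal sketch using the placeholder URL from the example above (sslmode=require, verify-ca, and verify-full are the standard libpq options that force TLS):

```shell
# Warn if the Postgres connection string does not enforce TLS
DATABASE_URL="postgresql://user:password@your-db-host:5432/hatchet?sslmode=require"
case "$DATABASE_URL" in
  *sslmode=require*|*sslmode=verify-ca*|*sslmode=verify-full*)
    echo "ok: TLS enforced" ;;
  *)
    echo "warning: sslmode does not enforce TLS" ;;
esac
```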

External message queue

To use a managed RabbitMQ instance, disable the chart’s RabbitMQ and provide your AMQP URL:
values.yaml
rabbitmq:
  enabled: false

sharedConfig:
  env:
    SERVER_MSGQUEUE_RABBITMQ_URL: "amqp://user:password@your-rabbitmq-host:5672/"
Alternatively, use Postgres as the message queue by setting:
values.yaml
sharedConfig:
  env:
    SERVER_MSGQUEUE_KIND: postgres
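When Postgres is used as the message queue, the bundled RabbitMQ serves no purpose; combining the two settings above yields a broker-free configuration:

```yaml
# values.yaml: queue through Postgres and drop the bundled RabbitMQ
rabbitmq:
  enabled: false

sharedConfig:
  env:
    SERVER_MSGQUEUE_KIND: postgres
```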

Networking

By default, no Hatchet services are exposed outside the cluster. For production, configure ingresses for the frontend/API and the engine separately. The following example exposes the dashboard at hatchet.example.com and the engine gRPC at engine.hatchet.example.com using nginx-ingress and cert-manager:
values.yaml
api:
  env:
    SERVER_AUTH_COOKIE_DOMAIN: "hatchet.example.com"
    SERVER_URL: "https://hatchet.example.com"
    SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
    SERVER_GRPC_INSECURE: "false"
    SERVER_GRPC_BROADCAST_ADDRESS: "engine.hatchet.example.com:443"

engine:
  env:
    SERVER_AUTH_COOKIE_DOMAIN: "hatchet.example.com"
    SERVER_URL: "https://hatchet.example.com"
    SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
    SERVER_GRPC_INSECURE: "false"
    SERVER_GRPC_BROADCAST_ADDRESS: "engine.hatchet.example.com:443"
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    hosts:
      - host: engine.hatchet.example.com
        paths:
          - path: /
    tls:
      - hosts:
          - engine.hatchet.example.com
        secretName: engine-cert

frontend:
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    hosts:
      - host: hatchet.example.com
        paths:
          - path: /api
            backend:
              serviceName: hatchet-api
              servicePort: 8080
          - path: /
            backend:
              serviceName: hatchet-frontend
              servicePort: 8080
    tls:
      - secretName: hatchet-tls
        hosts:
          - hatchet.example.com
The engine service uses gRPC (HTTP/2). Your ingress controller must support gRPC passthrough. With nginx-ingress, set nginx.ingress.kubernetes.io/backend-protocol: "GRPC" and nginx.ingress.kubernetes.io/ssl-redirect: "true".

Health check endpoints

The Hatchet API server exposes standard health check endpoints for Kubernetes liveness and readiness probes:
Endpoint     | Probe type | Returns 200 when
------------ | ---------- | ------------------------------------------------
GET /livez   | Liveness   | Database and message queue connections are alive
GET /readyz  | Readiness  | Same checks; service is ready to serve traffic
Both endpoints return 500 with error details if any dependency is unhealthy. Example probe configuration:
livenessProbe:
  httpGet:
    path: /livez
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20

readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

High availability

For high-throughput production environments, use the hatchet-ha chart, which splits the engine into independently scalable components:
helm install hatchet-ha hatchet/hatchet-ha
The HA chart supports configuring replica counts for each component:
values.yaml
grpc:
  replicaCount: 4
controllers:
  replicaCount: 2
scheduler:
  replicaCount: 2
Running in production with a single-replica Postgres instance (the chart default) means there is no automatic failover. For production workloads, use a managed Postgres service with high-availability support, such as AWS RDS Multi-AZ or Google Cloud SQL with a standby replica.
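A sketch of pairing the HA chart with a managed database, assuming hatchet-ha accepts the same postgres and sharedConfig keys shown for hatchet-stack above:

```yaml
# values.yaml for hatchet-ha: disable the bundled single-replica Postgres
# and point all components at a managed HA database
postgres:
  enabled: false

sharedConfig:
  env:
    DATABASE_URL: "postgresql://user:password@your-db-host:5432/hatchet?sslmode=require"
```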
For a complete HA deployment example using Terraform on GCP, see the hatchet-infra-examples repository.

Helm chart configuration reference

For the full list of configurable values for both hatchet-stack and hatchet-ha, see the Helm chart repository.
