hatchet-stack chart for a standard production setup and the hatchet-ha chart for high-availability configurations.
Prerequisites
- A Kubernetes cluster configured as the current context in kubectl
- kubectl and helm installed
Components
The Helm chart deploys the following services:

| Service | Description |
|---|---|
| hatchet-engine | gRPC server for task scheduling and dispatch |
| hatchet-stack-api | REST API server |
| hatchet-stack-frontend | Web dashboard |
| postgres | PostgreSQL instance (can be replaced by a managed DB) |
| rabbitmq | RabbitMQ instance (can be replaced by a managed queue) |
Setup
Install the hatchet-stack chart
The default installation runs all components inside the cluster and exposes them through a Caddy reverse proxy for local access:
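As a sketch, the install might look like the following; the chart repository URL is an assumption, so check the Hatchet documentation for the canonical one:

```shell
# Add the Hatchet chart repository (URL is an assumption) and install
# the hatchet-stack chart with the Caddy development proxy enabled.
helm repo add hatchet https://hatchet-dev.github.io/hatchet-charts
helm repo update
helm install hatchet-stack hatchet/hatchet-stack \
  --set caddy.enabled=true
```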
Setting caddy.enabled=true adds a sidecar proxy that makes the dashboard accessible via port-forwarding during development. For production, configure an ingress instead; see the networking section below.
Access the dashboard
Port-forward the Caddy pod to access the dashboard locally.
Then open http://localhost:8080 and log in.
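The port-forward step above might look like this; the service name hatchet-caddy and its target port are assumptions, so verify them with kubectl get svc:

```shell
# Forward local port 8080 to the Caddy proxy
# (service name and target port are assumptions).
kubectl port-forward svc/hatchet-caddy 8080:8080
```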
Port-forward the engine gRPC port
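Assuming the engine service is named hatchet-engine and listens on 7070 (both names are assumptions; verify with kubectl get svc), the port-forward looks like:

```shell
# Forward local port 7070 to the engine's gRPC port.
kubectl port-forward svc/hatchet-engine 7070:7070
```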
To connect SDK workers from outside the cluster, port-forward the engine service. The engine gRPC API is then available at localhost:7070.
Generate an API token
In the dashboard, navigate to Settings → API Tokens and click Generate API Token. Copy and store the token securely.
Connect your SDK
Set the following environment variables in your worker processes; see the quickstart for a full example.
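As a sketch, the variables typically cover the API token and the engine address; the exact names can vary by SDK version, so treat these as assumptions:

```shell
# Token generated in the dashboard (Settings → API Tokens).
export HATCHET_CLIENT_TOKEN="<token>"
# Engine gRPC address; matches the port-forward above.
export HATCHET_CLIENT_HOST_PORT="localhost:7070"
# Plaintext gRPC for local development (assumption: no TLS on the forwarded port).
export HATCHET_CLIENT_TLS_STRATEGY="none"
```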
Key configuration
The sharedConfig section in your values.yaml controls the most important settings for all backend services:
values.yaml
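As an illustrative sketch only (the key names here are assumptions; consult the chart's values reference for the authoritative schema), a sharedConfig block might look like:

```yaml
sharedConfig:
  # Public URL of the dashboard/API (assumption: used for links and auth cookies).
  serverUrl: "https://hatchet.example.com"
  # Address advertised to workers for the engine gRPC endpoint.
  grpcBroadcastAddress: "engine.hatchet.example.com:443"
```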
External database
To use a managed Postgres instance (AWS RDS, Google Cloud SQL, etc.) instead of the bundled one, disable the chart’s Postgres and pass your connection string:
values.yaml
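A sketch of the override, assuming the chart exposes a postgres.enabled toggle and reads the connection string from an environment variable (both key names are assumptions):

```yaml
postgres:
  enabled: false  # do not deploy the bundled Postgres

sharedConfig:
  env:
    # Connection string for the managed instance (assumption: the chart
    # passes this through to all backend services).
    DATABASE_URL: "postgresql://user:password@my-rds-host:5432/hatchet"
```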
External message queue
To use a managed RabbitMQ instance, disable the chart’s RabbitMQ and provide your AMQP URL:
values.yaml
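Analogously to the database override, a sketch might look like the following; the rabbitmq.enabled toggle and the environment variable name are assumptions:

```yaml
rabbitmq:
  enabled: false  # do not deploy the bundled RabbitMQ

sharedConfig:
  env:
    # AMQP URL of the managed queue (variable name is an assumption).
    SERVER_MSGQUEUE_RABBITMQ_URL: "amqp://user:password@my-rabbitmq-host:5672/"
```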
Networking
By default, no Hatchet services are exposed outside the cluster. For production, configure ingresses for the frontend/API and the engine separately. The following example exposes the dashboard at hatchet.example.com and the engine gRPC at engine.hatchet.example.com using nginx-ingress and cert-manager:
values.yaml
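A sketch of such a configuration, assuming the chart exposes per-service ingress blocks (the structure and key names are assumptions; only the nginx-ingress annotations are standard):

```yaml
frontend:
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
    hosts:
      - host: hatchet.example.com
    tls:
      - secretName: hatchet-frontend-tls
        hosts:
          - hatchet.example.com

engine:
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
      # gRPC (HTTP/2) support; see the note below.
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    hosts:
      - host: engine.hatchet.example.com
    tls:
      - secretName: hatchet-engine-tls
        hosts:
          - engine.hatchet.example.com
```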
The engine service uses gRPC (HTTP/2). Your ingress controller must support
gRPC passthrough. With nginx-ingress, set
nginx.ingress.kubernetes.io/backend-protocol: "GRPC" and
nginx.ingress.kubernetes.io/ssl-redirect: "true".
Health check endpoints
The Hatchet API server exposes standard health check endpoints for Kubernetes liveness and readiness probes:

| Endpoint | Probe type | Returns 200 when |
|---|---|---|
| GET /livez | Liveness | Database and message queue connections are alive |
| GET /readyz | Readiness | Same checks; service is ready to serve traffic |

Both endpoints return 500 with error details if any dependency is unhealthy.
Example probe configuration:
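As a pod-spec-level sketch (the API container port 8080 and the timing values are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /livez
    port: 8080  # API container port (assumption)
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```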
High availability
For high-throughput production environments, use the hatchet-ha chart, which splits the engine into independently scalable components:
values.yaml
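As an illustration only, scaling the split components independently might look like the following; the component names here are assumptions, so check the hatchet-ha values reference for the real ones:

```yaml
# Hypothetical component names; each maps to a separately
# scalable deployment in the hatchet-ha chart.
grpc:
  replicaCount: 3
controllers:
  replicaCount: 2
scheduler:
  replicaCount: 2
```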
Helm chart configuration reference
For the full list of configurable values for both hatchet-stack and hatchet-ha, see the Helm chart repository.