These instructions deploy Trigger.dev to Kubernetes using the official Helm chart. Read the self-hosting overview before continuing.

> **Warning:** This guide alone is unlikely to result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.
## Requirements

### Prerequisites

- Kubernetes 1.19+
- Helm 3.8+
- kubectl with cluster access

### Cluster resources

Minimum requirements for the full stack:

- 6+ vCPU total
- 12+ GB RAM total
- Persistent volume support

Per-component minimums:

| Component      | CPU                    | RAM                    |
| -------------- | ---------------------- | ---------------------- |
| Webapp         | 1 vCPU                 | 2 GB                   |
| Supervisor     | 1 vCPU                 | 1 GB                   |
| PostgreSQL     | 1 vCPU                 | 2 GB                   |
| Redis          | 0.5 vCPU               | 1 GB                   |
| ClickHouse     | 1 vCPU                 | 2 GB                   |
| Object Storage | 0.5 vCPU               | 1 GB                   |
| Workers        | Depends on concurrency | Depends on concurrency |
Adjust via the `resources` section in your `values.yaml`:

```yaml
webapp:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi
```
## Installation

### Quick start

Install with default values:

```shell
helm upgrade -n trigger --install trigger \
  oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "~4.0.0" \
  --create-namespace
```

Access the webapp:

```shell
kubectl port-forward svc/trigger-webapp 3040:3030 -n trigger
```

Open http://localhost:3040 in your browser.

Get the magic link from the webapp logs:

```shell
kubectl logs -n trigger deployment/trigger-webapp | grep -A1 "magic link"
```
## Configuration

Most Helm values map directly to the environment variables documented in the webapp and supervisor environment variable references for the Docker-based deployment.

Naming convention: environment variables use `UPPER_SNAKE_CASE`, while the equivalent Helm values use `camelCase`:

```shell
# Environment variable
APP_ORIGIN=https://trigger.example.com
```

```yaml
# Helm value equivalent
config:
  appOrigin: "https://trigger.example.com"
```
### Viewing default values

```shell
# Latest v4
helm show values oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "~4.0.0"

# Specific version
helm show values oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "4.0.5"
```
### Custom values

Create a `values-custom.yaml` to override defaults. The defaults are insecure and only suitable for testing:

```yaml
# Recommended: use an existing Kubernetes secret.
# Must contain: SESSION_SECRET, MAGIC_LINK_SECRET, ENCRYPTION_KEY,
# MANAGED_WORKER_SECRET, OBJECT_STORE_ACCESS_KEY_ID, OBJECT_STORE_SECRET_ACCESS_KEY
secrets:
  enabled: false
  existingSecret: "trigger-secrets"

# Application URLs
config:
  appOrigin: "https://trigger.example.com"
  loginOrigin: "https://trigger.example.com"
  apiOrigin: "https://trigger.example.com"

# Resource limits
webapp:
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi

supervisor:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 2Gi
```
Deploy with your custom values:

```shell
helm upgrade -n trigger --install trigger \
  oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "~4.0.0" \
  --create-namespace \
  -f values-custom.yaml
```
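When `secrets.existingSecret` is set, the referenced secret must exist before the release starts. A minimal sketch of creating it, assuming the key names from the comment in `values-custom.yaml` above, `openssl` for random values, and 16-byte hex secrets (check the chart documentation for the exact requirements):

```shell
# Generate random values for the keys listed in the values-custom.yaml
# comment, then render a Secret manifest named "trigger-secrets" to match
# secrets.existingSecret. Value lengths are an assumption.
for key in SESSION_SECRET MAGIC_LINK_SECRET ENCRYPTION_KEY MANAGED_WORKER_SECRET; do
  eval "$key=$(openssl rand -hex 16)"
done

cat > trigger-secrets.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: trigger-secrets
  namespace: trigger
type: Opaque
stringData:
  SESSION_SECRET: $SESSION_SECRET
  MAGIC_LINK_SECRET: $MAGIC_LINK_SECRET
  ENCRYPTION_KEY: $ENCRYPTION_KEY
  MANAGED_WORKER_SECRET: $MANAGED_WORKER_SECRET
  OBJECT_STORE_ACCESS_KEY_ID: $(openssl rand -hex 8)
  OBJECT_STORE_SECRET_ACCESS_KEY: $(openssl rand -hex 16)
EOF
```

Apply it with `kubectl apply -f trigger-secrets.yaml` before running `helm upgrade --install`.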
You can pass additional environment variables to the webapp:

```yaml
webapp:
  extraEnvVars:
    - name: EXTRA_ENV_VAR
      value: "extra-value"
```

You can also set pod annotations:

```yaml
webapp:
  podAnnotations:
    "my-annotation": "my-value"
```
## External services

Disable built-in services and use external ones instead. Using Kubernetes secrets is recommended for credentials.

### PostgreSQL

Direct configuration (using an existing secret is recommended instead):

```yaml
postgres:
  deploy: false
  external:
    databaseUrl: "postgresql://user:password@host:5432/database?schema=public"
    directUrl: "" # Optional, defaults to databaseUrl
```
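As a sketch of the recommended existing-secret variant, the `external.existingSecret` key (which also appears in the custom CA example later in this guide) replaces the inline connection string. The exact keys the referenced secret must contain are chart-specific, so confirm them with `helm show values` before relying on this:

```yaml
postgres:
  deploy: false
  external:
    # References a pre-created Kubernetes secret holding the
    # connection details; required secret keys are chart-specific.
    existingSecret: "postgres-credentials"
```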
### Redis

Direct configuration (using an existing secret is recommended instead):

```yaml
redis:
  deploy: false
  external:
    host: "my-redis.example.com"
    port: 6379
    password: "my-password"
    tls:
      enabled: true
```
### ClickHouse

Direct configuration (using an existing secret is recommended instead):

```yaml
clickhouse:
  deploy: false
  external:
    host: "my-clickhouse.example.com"
    port: 8123
    username: "my-username"
    password: "my-password"
```
### S3-compatible object storage

Direct configuration (using an existing secret is recommended instead):

```yaml
minio:
  deploy: false

s3:
  external:
    endpoint: "https://s3.amazonaws.com"
    accessKeyId: "my-access-key"
    secretAccessKey: "my-secret-key"
```
### PostgreSQL with custom CA certificate

For PostgreSQL instances that require custom CA certificates (e.g., AWS RDS with SSL verification):

```yaml
postgres:
  deploy: false
  external:
    existingSecret: "postgres-credentials"
    connection:
      sslMode: "require"

webapp:
  extraEnvVars:
    - name: NODE_EXTRA_CA_CERTS
      value: "/etc/ssl/certs/postgres-ca.crt"
  extraVolumes:
    - name: postgres-ca-cert
      secret:
        secretName: postgres-ca-secret
        items:
          - key: ca.crt
            path: postgres-ca.crt
  extraVolumeMounts:
    - name: postgres-ca-cert
      mountPath: /etc/ssl/certs
      readOnly: true
```
## Worker token

With the default bootstrap configuration, worker token setup is automatic.

### Bootstrap (default)

```yaml
webapp:
  bootstrap:
    enabled: true
    workerGroupName: "bootstrap"
```

### Manual token setup

Get the token from the webapp logs:

```shell
kubectl logs deployment/trigger-webapp -n trigger | grep -A15 "Worker Token"
```

Create a Kubernetes secret:

```shell
kubectl create secret generic worker-token \
  --from-literal=token=tr_wgt_your_token_here \
  -n trigger
```

Configure the supervisor:

```yaml
supervisor:
  bootstrap:
    enabled: false
  workerToken:
    secret:
      name: "worker-token"
      key: "token"
```
## Registry setup

Use an external registry for production:

```yaml
registry:
  deploy: false
  repositoryNamespace: "your-company"
  external:
    host: "your-registry.example.com"
    port: 5000
    auth:
      enabled: true
      username: "your-username"
      password: "your-password"
```

> **Note:** The internal registry (`registry.deploy: true`) is experimental and requires TLS setup and additional cluster configuration. Use an external registry for production.

For conceptual background on registry requirements, see the Docker registry setup.
## Object storage

Use external S3-compatible storage for production. The defaults use built-in MinIO:

```yaml
minio:
  deploy: false
  external:
    url: "https://s3.amazonaws.com"

secrets:
  objectStore:
    accessKeyId: "admin"
    secretAccessKey: "very-safe-password"
```
## Authentication

Authentication options are identical to the Docker deployment. Set them via `extraEnvVars` in `values.yaml`. The same pattern covers GitHub OAuth, Resend (magic link email), and access restriction.

### GitHub OAuth

```yaml
webapp:
  extraEnvVars:
    - name: AUTH_GITHUB_CLIENT_ID
      value: "your-github-client-id"
    - name: AUTH_GITHUB_CLIENT_SECRET
      value: "your-github-client-secret"
```
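As a sketch of the same pattern for Resend magic-link email and restricted access: the variable names below (`EMAIL_TRANSPORT`, `RESEND_API_KEY`, `FROM_EMAIL`, `REPLY_TO_EMAIL`, `WHITELISTED_EMAILS`) are assumed from the Docker self-hosting environment variable reference, so verify them there before use:

```yaml
webapp:
  extraEnvVars:
    # Resend (magic link email) -- variable names assumed from the
    # Docker environment variable reference
    - name: EMAIL_TRANSPORT
      value: "resend"
    - name: RESEND_API_KEY
      value: "re_your_api_key"
    - name: FROM_EMAIL
      value: "trigger@example.com"
    - name: REPLY_TO_EMAIL
      value: "trigger@example.com"
    # Restrict access: regex matched against login email addresses
    - name: WHITELISTED_EMAILS
      value: "user1@example\\.com|user2@example\\.com"
```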
## Version locking

Lock chart and image versions for reproducible deployments:

```shell
# Pin to a specific chart version
helm upgrade -n trigger --install trigger \
  oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "4.0.5" \
  -f values-custom.yaml

# Check what app version the chart deploys
helm show chart oci://ghcr.io/triggerdotdev/charts/trigger \
  --version "4.0.5" | grep appVersion
```

Or pin image tags directly in `values.yaml`:

```yaml
webapp:
  image:
    tag: "v4.0.0"

supervisor:
  image:
    tag: "v4.0.0"
```

> **Note:** The chart version's `appVersion` determines default image tags. Newer image tags may be incompatible with older chart versions. Always match the CLI version to the deployed app version.
## Telemetry

To disable anonymous usage telemetry:

```yaml
telemetry:
  enabled: false
```
## CLI usage

See the Docker CLI usage section; the commands are identical regardless of deployment method.

## CI / GitHub Actions

In CI environments, use environment variables instead of login profiles:

```shell
export TRIGGER_API_URL=https://trigger.example.com
export TRIGGER_ACCESS_TOKEN=tr_pat_...
```

For automated CI/CD deployments, run `npx trigger.dev@latest deploy` with a `TRIGGER_ACCESS_TOKEN` set as a GitHub Actions secret. See the Deployment guide for a complete CI/CD workflow.
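A minimal workflow sketch of that setup. Assumptions: the workflow file name, branch, and Node version are placeholders, your tasks project lives at the repository root, and `TRIGGER_ACCESS_TOKEN` is stored as a repository secret:

```yaml
# .github/workflows/trigger-deploy.yml (hypothetical file name)
name: Deploy Trigger.dev tasks
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Point the CLI at the self-hosted instance and authenticate
      # with a personal access token stored as a repository secret.
      - run: npx trigger.dev@latest deploy
        env:
          TRIGGER_API_URL: https://trigger.example.com
          TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
```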
## Troubleshooting

### Check pod and service logs

```shell
# Webapp logs
kubectl logs deployment/trigger-webapp -n trigger -f

# Supervisor logs
kubectl logs deployment/trigger-supervisor -n trigger -f

# All Trigger pods
kubectl logs -l app.kubernetes.io/instance=trigger -n trigger -f

# Pod status
kubectl get pods -n trigger
kubectl describe pod <pod-name> -n trigger
```

### Uninstall

```shell
# Uninstall the Helm release
helm uninstall trigger -n trigger

# Delete persistent volumes (WARNING: deletes all data)
kubectl delete pvc -l app.kubernetes.io/instance=trigger -n trigger

# Delete the namespace
kubectl delete namespace trigger
```

### Common issues

- **Magic links not working:** Check webapp logs for email delivery errors and verify email transport configuration.
- **Deploy fails:** Verify registry access and authentication on the machine running deploy.
- **Pods stuck pending:** Run `kubectl describe pod <pod-name> -n trigger` and check the Events section.
- **Worker token issues:** Check webapp and supervisor logs for token-related errors.
- **`ERROR: schema graphile_worker does not exist`:** See the Docker troubleshooting section for details on resolving PostgreSQL SSL certificate issues.