These instructions use Docker Compose to spin up a Trigger.dev instance. Read the self-hosting overview before continuing.
This guide alone is unlikely to result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.

What’s new in v4

  • Much simpler setup. The provider and coordinator are now combined into a single supervisor. No startup scripts — just docker compose up.
  • Automatic container cleanup. The supervisor cleans up containers that are no longer needed.
  • Multiple worker machines. You can now scale workers horizontally as needed.
  • Resource limits enforced by default. Tasks are limited to the CPU and RAM of their machine preset, preventing noisy neighbours.
  • No direct Docker socket access. The compose file ships with Docker Socket Proxy by default.
  • Network isolation. All containers use only the network access they need.
  • Built-in registry and object storage. Deploy and run tasks without third-party services.
  • Improved CLI commands. No extra flags needed to deploy, and a new switch command for managing profiles.

Requirements

Prerequisites

Webapp machine

Hosts the dashboard, Postgres, Redis, and related services.
  • 3+ vCPU
  • 6+ GB RAM

Worker machine

Hosts the supervisor and all task runners.
  • 4+ vCPU
  • 8+ GB RAM
Worker resource requirements scale with your concurrency. For reference:
| Concurrency | Machine preset | vCPU required | RAM required |
|---|---|---|---|
| 10 | small-1x (0.5 vCPU, 0.5 GB) | 5 | 5 GB |
| 20 | small-1x | 10 | 10 GB |
| 100 | small-1x | 50 | 50 GB |
| 100 | small-2x (1 vCPU, 1 GB) | 100 | 100 GB |
You can start with one worker and add more as needed.
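The sizing in the table is linear: required vCPU and RAM are simply concurrency multiplied by the machine preset's resources. A quick back-of-the-envelope check in shell, using the small-1x numbers from the table:

```shell
# Rough worker sizing: resources scale linearly with concurrency.
# small-1x preset = 0.5 vCPU / 0.5 GB RAM, expressed in milli-vCPU and MB
# so the arithmetic stays integer-only.
concurrency=20
preset_vcpu_milli=500   # 0.5 vCPU
preset_ram_mb=512       # 0.5 GB

echo "vCPU required: $(( concurrency * preset_vcpu_milli / 1000 ))"
echo "RAM required:  $(( concurrency * preset_ram_mb / 1024 )) GB"
# Prints 10 vCPU and 10 GB, matching the concurrency-20 row above.
```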

Setup

Webapp

1. Clone the repository

git clone --depth=1 https://github.com/triggerdotdev/trigger.dev
cd trigger.dev/hosting/docker
2. Create a .env file

cp .env.example .env
3. Start the webapp

cd webapp
docker compose up -d
4. Configure environment variables

Edit the webapp environment variables in your .env file, then apply the changes:
docker compose up -d
5. Access the dashboard

The webapp is available at http://localhost:8030. Check the container logs for the magic link on first login:
docker compose logs -f webapp
6. (Optional) Initialize a project

npx trigger.dev@latest init -p <project-ref> -a http://localhost:8030

Worker

1. Clone the repository

git clone --depth=1 https://github.com/triggerdotdev/trigger.dev
cd trigger.dev/hosting/docker
2. Create a .env file

cp .env.example .env
3. Start the worker

cd worker
docker compose up -d
4. Configure the supervisor

Set the supervisor environment variables including the worker token, then apply:
docker compose up -d
Repeat the worker setup on each additional machine to scale horizontally.

Combined (single machine)

To run the webapp and worker on the same machine:
# Run from /hosting/docker
docker compose -f webapp/docker-compose.yml -f worker/docker-compose.yml up -d

Worker token

When running the combined stack, worker bootstrap is handled automatically. When running the webapp and worker separately, you must set the worker token manually. On first run, the webapp generates a token and prints it to the logs:
==========================
Trigger.dev Bootstrap - Worker Token

WARNING: This will only be shown once. Save it now!

Worker group:
bootstrap

Token:
tr_wgt_fgfAEjsTmvl4lowBLTbP7Xo563UlnVa206mr9uW6

If using docker compose, set:
TRIGGER_WORKER_TOKEN=tr_wgt_fgfAEjsTmvl4lowBLTbP7Xo563UlnVa206mr9uW6
==========================
Uncomment and set TRIGGER_WORKER_TOKEN in your .env file, then restart the worker:
# Run from /hosting/docker/worker
docker compose down
docker compose up -d
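If the token has scrolled out of view, you can recover it from the webapp logs as long as they have not been rotated since first boot. A sketch, matching on the `tr_wgt_` prefix shown in the log excerpt above (the `../worker/.env` path assumes the default repository layout):

```shell
# Run from /hosting/docker/webapp.
# Extract the worker token from the logs and append it to the worker .env.
token=$(docker compose logs webapp 2>&1 | grep -oE 'tr_wgt_[A-Za-z0-9]+' | head -n1)
[ -n "$token" ] && echo "TRIGGER_WORKER_TOKEN=$token" >> ../worker/.env
```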

Creating additional worker groups

Use the admin API to create worker groups beyond the default bootstrap group. First make a user admin:
  • New users: set ADMIN_EMAILS (regex) before the user signs up.
  • Existing users: set admin = true in the user table.
Then create a worker group:
api_url=http://localhost:8030
wg_name=my-worker
admin_pat=tr_pat_...

curl -X POST \
  "$api_url/admin/api/v1/workers" \
  -H "Authorization: Bearer $admin_pat" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"$wg_name\"}"
The response includes a token field for newly created groups.
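To capture that token directly in a script, you can pipe the response through `jq`. This sketch assumes the `token` field sits at the top level of the response JSON; adjust the `jq` path if your version nests it differently:

```shell
# Create the worker group and store its token in one step (requires jq).
token=$(curl -s -X POST \
  "$api_url/admin/api/v1/workers" \
  -H "Authorization: Bearer $admin_pat" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"$wg_name\"}" | jq -r '.token')

echo "TRIGGER_WORKER_TOKEN=$token"
```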

Registry setup

The built-in registry stores and serves deployment images.

Default settings

| Setting | Default value |
|---|---|
| Registry URL | localhost:5000 |
| Username | registry-user |
| Password | very-secure-indeed |
Change the registry password before deploying to production. See the registry docs for configuring native basic auth. This requires modifying ./hosting/docker/registry/auth.htpasswd.
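One way to regenerate that file is with the `htpasswd` tool from the Apache httpd image, which can emit the bcrypt entries the registry expects. A sketch, using the file path mentioned above (overwrites the default credentials; pick your own username and password):

```shell
# Run from the repository root. Generates a bcrypt htpasswd entry
# and replaces the default registry credentials.
docker run --rm httpd:2.4 htpasswd -Bbn registry-user 'a-much-stronger-password' \
  > ./hosting/docker/registry/auth.htpasswd
```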

Logging in

Each machine running the deploy command must be logged in to the registry:
docker login -u <username> <registry>
You only need to do this once per machine.
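For scripted provisioning, `docker login` also accepts the password on stdin, which keeps it out of shell history. A sketch using the default registry address from the table above:

```shell
# Non-interactive registry login, e.g. from a provisioning script.
# Assumes the registry password is available in the environment.
echo "$REGISTRY_PASSWORD" | docker login -u registry-user --password-stdin localhost:5000
```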

Object storage

Used for large payloads and task outputs.

Default settings

| Setting | Default value |
|---|---|
| Endpoint | http://localhost:9000 |
| Dashboard | http://localhost:9001 |
| Username | admin |
| Password | very-safe-password |
The packets bucket is created automatically. If it isn’t, create it manually via the MinIO dashboard at http://localhost:9001.
Change credentials and set up a dedicated user before deploying to production.
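If you prefer the CLI over the dashboard, the bucket can also be created with the MinIO client (`mc`). A sketch using the default endpoint and credentials from the table above; `--network host` assumes MinIO is reachable on localhost:

```shell
# Create the packets bucket from the CLI instead of the dashboard.
docker run --rm --network host minio/mc sh -c \
  "mc alias set local http://localhost:9000 admin very-safe-password && \
   mc mb --ignore-existing local/packets"
```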

Authentication

Magic links are the default login method. If EMAIL_TRANSPORT is not set, magic links are logged by the webapp container rather than sent by email — useful for local testing.
EMAIL_TRANSPORT=resend
FROM_EMAIL=
REPLY_TO_EMAIL=
RESEND_API_KEY=<your_resend_api_key>

GitHub OAuth

Create a GitHub OAuth app with callback URL https://<your_webapp_domain>/auth/github/callback, then set:
AUTH_GITHUB_CLIENT_ID=<your_client_id>
AUTH_GITHUB_CLIENT_SECRET=<your_client_secret>

Restricting access

Use WHITELISTED_EMAILS to restrict sign-ups to specific addresses (regex pattern). This applies to all auth methods including GitHub OAuth:
# Only these addresses can sign up
WHITELISTED_EMAILS="^(authorized@yahoo\.com|authorized@gmail\.com)$"
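Since the value is a regex, a typo can silently lock everyone out. It is worth sanity-checking the pattern with `grep -E` before deploying it:

```shell
# Sanity-check the whitelist regex before deploying it.
pattern='^(authorized@yahoo\.com|authorized@gmail\.com)$'

echo 'authorized@gmail.com' | grep -Eq "$pattern" && echo allowed   # matches
echo 'intruder@gmail.com'   | grep -Eq "$pattern" || echo blocked   # does not match
```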

Version locking

Lock the Docker image version to match your CLI version:
TRIGGER_IMAGE_TAG=v4.0.0
Set this in your .env file. By default, images use the latest tag.

CLI usage

Login

Specify your self-hosted URL with the -a flag to avoid being redirected to Trigger.dev Cloud:
npx trigger.dev@latest login -a http://trigger.example.com

Profiles

Use profiles to manage multiple Trigger.dev instances:
# Login with a named profile
npx trigger.dev@latest login -a http://trigger.example.com --profile self-hosted

# Use a specific profile
npx trigger.dev@latest dev --profile self-hosted

# List all profiles
npx trigger.dev@latest list-profiles

# Switch the active profile
npx trigger.dev@latest switch self-hosted

# Remove a profile
npx trigger.dev@latest logout --profile self-hosted

Check current login

npx trigger.dev@latest whoami

CI / GitHub Actions

In CI environments, login profiles are not available. Use environment variables instead:
export TRIGGER_API_URL=https://trigger.example.com
export TRIGGER_ACCESS_TOKEN=tr_pat_...
For automated CI/CD deployments, use npx trigger.dev@latest deploy with a TRIGGER_ACCESS_TOKEN set as a GitHub Actions secret. See the Deployment guide for a complete CI/CD workflow.
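A minimal workflow might look like the following sketch. The branch name, Node version, and secret name are assumptions; adapt them to your repository:

```yaml
# .github/workflows/trigger-deploy.yml (sketch)
name: Deploy to Trigger.dev
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx trigger.dev@latest deploy
        env:
          TRIGGER_API_URL: https://trigger.example.com
          TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
```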

Telemetry

By default, the webapp sends anonymous usage telemetry to Trigger.dev. To opt out, set TRIGGER_TELEMETRY_DISABLED on the webapp container:
services:
  webapp:
    environment:
      TRIGGER_TELEMETRY_DISABLED: 1

Troubleshooting

The machine running deploy must have registry access. See the registry setup section.
Graphile Worker migrations failed to run. Check the webapp logs for SSL certificate errors such as self-signed certificate in certificate chain; these are typically caused by PostgreSQL SSL issues with an external database. Fix: mount your CA certificate and set NODE_EXTRA_CA_CERTS on the webapp and supervisor containers, then restart.
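In compose terms, that fix might look like this sketch (the certificate path on the host is an assumption; mirror the same volume and variable on the supervisor service):

```yaml
# Sketch: mount a CA certificate and point Node.js at it.
services:
  webapp:
    volumes:
      - ./certs/ca.crt:/etc/ssl/certs/custom-ca.crt:ro
    environment:
      NODE_EXTRA_CA_CERTS: /etc/ssl/certs/custom-ca.crt
```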
The goose migration tracker is out of sync. Exec into the webapp container, set the GOOSE env vars from the startup logs, then run:
goose reset && goose up
goose reset is destructive and will drop the entire schema. Back up your data and confirm you are in a non-production environment before running this.
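A simple way to take that backup is `pg_dump` through the compose Postgres service. The service name, user, and database below assume the default compose setup; adjust them if yours differ:

```shell
# Full dump of the webapp database before running goose reset.
docker compose exec -T postgres pg_dump -U postgres postgres > backup.sql
```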
