Each Trigger.dev task run executes inside an isolated container. The machine option controls how much compute that container is allocated. Higher-spec machines increase cost but improve performance for CPU-bound or memory-intensive tasks.

Setting a machine for a task

Pass the machine option when defining a task:
/trigger/heavy-task.ts
import { task } from "@trigger.dev/sdk";

export const heavyTask = task({
  id: "heavy-task",
  machine: "large-1x",
  run: async (payload, { ctx }) => {
    // This task runs with 4 vCPU and 8 GB RAM
  },
});

Setting the default machine

You can set a project-wide default machine in trigger.config.ts. All tasks without an explicit machine option will use this default.
trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk";

export default defineConfig({
  project: "<project ref>",
  defaultMachine: "small-2x",
});
If you don’t set a default, tasks use small-1x (0.5 vCPU, 0.5 GB RAM).

Machine presets

Preset               vCPU  Memory   Disk
micro                0.25  0.25 GB  10 GB
small-1x (default)   0.5   0.5 GB   10 GB
small-2x             1     1 GB     10 GB
medium-1x            1     2 GB     10 GB
medium-2x            2     4 GB     10 GB
large-1x             4     8 GB     10 GB
large-2x             8     16 GB    10 GB
View pricing for each preset on the Trigger.dev pricing page.

Overriding the machine at trigger time

You can override the machine when triggering a task. This is useful when the required resources depend on the payload, such as a large file upload or a customer with an unusually large dataset:
import { tasks } from "@trigger.dev/sdk";
import type { heavyTask } from "./trigger/heavy-task";

await tasks.trigger<typeof heavyTask>(
  "heavy-task",
  { message: "hello world" },
  { machine: "large-2x" }
);
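One way to implement this pattern is a small helper that maps payload size to a preset before triggering. This is an illustrative sketch; the `chooseMachine` helper and its size thresholds are examples, not part of the SDK:

```typescript
// Illustrative helper: map an input size to a machine preset.
// The thresholds are arbitrary examples; tune them for your workload.
type MachinePreset =
  | "micro" | "small-1x" | "small-2x"
  | "medium-1x" | "medium-2x" | "large-1x" | "large-2x";

export function chooseMachine(fileSizeBytes: number): MachinePreset {
  const MB = 1024 * 1024;
  if (fileSizeBytes < 50 * MB) return "small-1x";   // 0.5 GB RAM
  if (fileSizeBytes < 500 * MB) return "medium-1x"; // 2 GB RAM
  return "large-2x";                                // 16 GB RAM
}

// Usage (sketch): pass the result as the trigger-time machine option:
// await tasks.trigger<typeof heavyTask>("heavy-task", payload, {
//   machine: chooseMachine(payload.fileSizeBytes),
// });
```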

Out of memory (OOM) errors

When a task run is killed due to exceeding the machine’s memory limit, you’ll see an error like:
TASK_PROCESS_OOM_KILLED. Your run was terminated due to exceeding the machine's
memory limit. Try increasing the machine preset in your task options or replay
using a larger machine.
Trigger.dev automatically detects the following OOM conditions:
  • The Node.js V8 heap limit is exceeded
  • The entire container process exceeds the machine’s memory limit
  • A child process (e.g. ffmpeg) exceeds the memory limit and exits with a non-zero code
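The first of these conditions (V8 heap exhaustion) can also be anticipated from inside the run using Node's built-in v8 module. A minimal sketch, where the headroom threshold is an arbitrary example rather than an SDK value:

```typescript
import { getHeapStatistics } from "node:v8";

// Fraction of the V8 heap limit currently in use.
export function heapUsageFraction(): number {
  const { used_heap_size, heap_size_limit } = getHeapStatistics();
  return used_heap_size / heap_size_limit;
}

// Sketch: check for headroom before a known-large allocation.
// The 0.9 default threshold is an illustrative choice.
export function hasHeapHeadroom(threshold = 0.9): boolean {
  return heapUsageFraction() < threshold;
}
```

A run could call `hasHeapHeadroom()` before a large allocation and fail early (for example by throwing `OutOfMemoryError`, described below) instead of being killed mid-allocation.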

Retrying with a larger machine

For tasks that only occasionally hit memory limits, you can configure an automatic retry on OOM with a larger machine:
/trigger/heavy-task.ts
import { task } from "@trigger.dev/sdk";

export const heavyTask = task({
  id: "heavy-task",
  machine: "medium-1x",
  retry: {
    outOfMemory: {
      machine: "large-1x",
    },
  },
  run: async (payload, { ctx }) => {
    // ...
  },
});
OOM retry only triggers on out-of-memory errors. It does not permanently change the machine for new runs — if you consistently see OOM errors, increase the machine property directly.
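OOM retry sits alongside the standard retry settings. A sketch assuming the usual retry fields (such as maxAttempts) can be combined with outOfMemory in the same retry object:

```typescript
import { task } from "@trigger.dev/sdk";

export const heavyTask = task({
  id: "heavy-task",
  machine: "medium-1x",
  retry: {
    maxAttempts: 3, // ordinary failures retry on the same machine
    outOfMemory: {
      machine: "large-1x", // OOM failures retry on a larger machine
    },
  },
  run: async (payload, { ctx }) => {
    // ...
  },
});
```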

Throwing an explicit OOM error

If your code detects it is about to run out of memory (for example, a native library signals an allocation failure), you can throw OutOfMemoryError to trigger the same OOM handling:
/trigger/heavy-task.ts
import { task, OutOfMemoryError } from "@trigger.dev/sdk";

export const heavyTask = task({
  id: "heavy-task",
  machine: "medium-1x",
  retry: {
    outOfMemory: {
      machine: "large-1x",
    },
  },
  run: async (payload, { ctx }) => {
    const result = nativeLib.process(payload.data);

    if (result.outOfMemory) {
      throw new OutOfMemoryError();
    }

    return result;
  },
});

Monitoring memory usage

To diagnose OOM errors, add the ResourceMonitor helper to your project. It logs memory, CPU, and disk usage at a regular interval:
/trigger/tasks.ts
import { tasks } from "@trigger.dev/sdk";
import { ResourceMonitor } from "../resourceMonitor.js";

// Apply to all tasks via middleware
tasks.middleware("resource-monitor", async ({ ctx, next }) => {
  const resourceMonitor = new ResourceMonitor({ ctx });

  if (process.env.RESOURCE_MONITOR_ENABLED === "1") {
    resourceMonitor.startMonitoring(10_000); // Log every 10 seconds
  }

  try {
    await next();
  } finally {
    // Stop logging even if the task throws
    resourceMonitor.stopMonitoring();
  }
});
To also monitor child process memory (e.g. an ffmpeg subprocess):
const resourceMonitor = new ResourceMonitor({
  ctx,
  processName: "ffmpeg",
});
The ResourceMonitor is a helper class you add to your own project. Copy the implementation from the Trigger.dev examples repository or write a simplified version using Node.js process.memoryUsage().
/src/resourceMonitor.ts
import os from "node:os";
import { getHeapStatistics } from "node:v8";
import { type Context, logger } from "@trigger.dev/sdk";

export class ResourceMonitor {
  private logInterval: NodeJS.Timeout | null = null;
  private ctx: Context;

  // processName is accepted for API parity with the child-process example
  // above, but this simplified version only monitors the main process.
  constructor({ ctx }: { ctx: Context; processName?: string }) {
    this.ctx = ctx;
  }

  startMonitoring(intervalMs = 10_000) {
    this.logInterval = setInterval(() => {
      const heap = getHeapStatistics();
      const mem = process.memoryUsage();
      const totalMem = os.totalmem();
      const freeMem = os.freemem();

      logger.info("Resource monitor", {
        heap: {
          used: `${(heap.used_heap_size / 1024 / 1024).toFixed(1)} MB`,
          total: `${(heap.heap_size_limit / 1024 / 1024).toFixed(1)} MB`,
          percent: `${((heap.used_heap_size / heap.heap_size_limit) * 100).toFixed(1)}%`,
        },
        rss: `${(mem.rss / 1024 / 1024).toFixed(1)} MB`,
        systemMemory: {
          free: `${(freeMem / 1024 / 1024).toFixed(1)} MB`,
          total: `${(totalMem / 1024 / 1024).toFixed(1)} MB`,
        },
      });
    }, intervalMs);
  }

  stopMonitoring() {
    if (this.logInterval) {
      clearInterval(this.logInterval);
      this.logInterval = null;
    }
  }
}
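The simplified class above does not act on processName. On Linux, a child process's memory can be read from procfs; the `rssKb` helper below is an illustrative sketch (not part of the SDK) that only works where /proc is available:

```typescript
import { readFileSync } from "node:fs";

// Linux-only sketch: read a process's resident set size from /proc/<pid>/status.
// Returns the RSS in kB, or null if the process or procfs is unavailable.
export function rssKb(pid: number): number | null {
  try {
    const status = readFileSync(`/proc/${pid}/status`, "utf8");
    const match = status.match(/^VmRSS:\s+(\d+)\s+kB/m);
    return match ? Number(match[1]) : null;
  } catch {
    return null;
  }
}
```

A ResourceMonitor variant could call this with the spawned child's pid and include the value in its periodic log line.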
