Trigger.dev is purpose-built for long-running AI agent workflows. Tasks can wait for minutes, hours, or days without consuming compute, which makes the platform ideal for agentic loops, human-in-the-loop flows, and parallel fan-out patterns. The patterns below are inspired by Anthropic’s research on building effective agents.

Agent fundamentals

Prompt chaining

Chain prompts together to generate and translate marketing copy automatically

Routing

Send questions to different AI models based on complexity analysis

Parallelization

Simultaneously check for inappropriate content while responding to customer inquiries

Orchestrator-workers

Coordinate multiple AI workers to verify news article accuracy

Evaluator-optimizer

Translate text and automatically improve quality through feedback loops

Core patterns

Prompt chaining

Break a complex task into a sequence of smaller LLM calls where each step feeds its output into the next. This trades latency for accuracy and lets you apply validation gates between steps.
trigger/generate-translate.ts
import { task } from "@trigger.dev/sdk";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const generateAndTranslate = task({
  id: "generate-and-translate",
  run: async (payload: { brief: string; targetLanguage: string }) => {
    // Step 1 — generate English copy
    const generated = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "You are a marketing copywriter." },
        { role: "user", content: `Write a short product description for: ${payload.brief}` },
      ],
    });

    const englishCopy = generated.choices[0].message.content ?? "";

    // Step 2 — translate the copy
    const translated = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: `Translate the following text to ${payload.targetLanguage}.` },
        { role: "user", content: englishCopy },
      ],
    });

    return {
      english: englishCopy,
      translated: translated.choices[0].message.content,
    };
  },
});
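Routing, listed under agent fundamentals above, follows the same shape as prompt chaining: a cheap classification step decides which model handles a request before the main call runs. The classifier can itself be an LLM call; this sketch uses a plain heuristic instead, and the pickModel helper, its thresholds, and its keyword list are illustrative rather than part of any SDK.

```typescript
// Hypothetical routing heuristic: send short, simple questions to a cheaper
// model and long or technical ones to a stronger model.
export function pickModel(question: string): "gpt-4o-mini" | "gpt-4o" {
  const wordCount = question.trim().split(/\s+/).length;
  const looksTechnical = /\b(code|error|stack trace|regex|sql)\b/i.test(question);
  return wordCount > 50 || looksTechnical ? "gpt-4o" : "gpt-4o-mini";
}
```

Inside a task, the returned name would be passed as the model option of the completion call, exactly as in the prompt-chaining example above.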

Parallel fan-out

Run multiple child tasks in parallel and collect all results before continuing. This is ideal for independent checks or enrichment steps that don’t depend on each other. The example below combines two triggerAndWait calls with Promise.all; for batches of the same task, batch.triggerAndWait achieves the same in a single call.
trigger/parallel-checks.ts
import { task } from "@trigger.dev/sdk";
import { moderateContent, generateReply } from "./subtasks";

export const respondAndCheck = task({
  id: "respond-and-check",
  run: async (payload: { customerId: string; message: string }) => {
    // Run moderation and reply generation in parallel
    const [moderationResult, replyResult] = await Promise.all([
      moderateContent.triggerAndWait({ content: payload.message }),
      generateReply.triggerAndWait({ message: payload.message }),
    ]);

    if (moderationResult.ok && moderationResult.output.flagged) {
      return { action: "blocked", reason: moderationResult.output.reason };
    }

    return {
      action: "sent",
      reply: replyResult.ok ? replyResult.output.reply : null,
    };
  },
});
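The gating logic at the end of respondAndCheck can be factored into a plain function, which makes it unit-testable without triggering any tasks. This is a sketch assuming the result shapes used above (ok, output.flagged, output.reason, output.reply); the decideAction name and TaskResult type are illustrative.

```typescript
// Minimal stand-in for a child-task result: either succeeded with output, or failed.
type TaskResult<T> = { ok: true; output: T } | { ok: false };

// Pure decision function mirroring the task's gating logic: block flagged
// content, otherwise send the generated reply (or null if generation failed).
export function decideAction(
  moderation: TaskResult<{ flagged: boolean; reason?: string }>,
  reply: TaskResult<{ reply: string }>
): { action: "blocked"; reason?: string } | { action: "sent"; reply: string | null } {
  if (moderation.ok && moderation.output.flagged) {
    return { action: "blocked", reason: moderation.output.reason };
  }
  return { action: "sent", reply: reply.ok ? reply.output.reply : null };
}
```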

Agentic loop (orchestrator-workers)

An orchestrator task breaks work into sub-tasks and dispatches them to worker tasks. The orchestrator waits for each worker before deciding next steps — enabling dynamic, multi-step workflows that adapt based on intermediate results.
trigger/orchestrator.ts
import { task } from "@trigger.dev/sdk";
import { factCheckClaim } from "./fact-checker";

export const verifyArticle = task({
  id: "verify-article",
  run: async (payload: { articleText: string; claims: string[] }) => {
    const results: Array<{ claim: string; verdict: string }> = [];

    for (const claim of payload.claims) {
      // Each claim is checked by a worker task
      const result = await factCheckClaim.triggerAndWait({ claim });

      if (result.ok) {
        results.push({ claim, verdict: result.output.verdict });
      }
    }

    const falseClaimsCount = results.filter((r) => r.verdict === "false").length;

    return {
      verified: falseClaimsCount === 0,
      results,
    };
  },
});
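The final aggregation in verifyArticle is pure data manipulation and can live in a small helper, which also makes it easy to extend (for example, failing the article when claims come back unverifiable). A sketch; the summarizeVerdicts name and the verdict strings are assumptions, not part of the SDK.

```typescript
type ClaimResult = { claim: string; verdict: string };

// Count verdicts and decide whether the article passes: any "false" verdict
// fails verification, matching the orchestrator above.
export function summarizeVerdicts(results: ClaimResult[]) {
  const counts: Record<string, number> = {};
  for (const r of results) {
    counts[r.verdict] = (counts[r.verdict] ?? 0) + 1;
  }
  return {
    verified: (counts["false"] ?? 0) === 0,
    counts,
    results,
  };
}
```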

Human-in-the-loop

Use waitpoint tokens to pause a task and resume it when a human provides approval or additional input. While paused, the task sleeps without consuming any resources; an optional timeout bounds how long it waits.
trigger/review-workflow.ts
import { task, wait } from "@trigger.dev/sdk";

export const reviewSummary = task({
  id: "review-summary",
  run: async (payload: { summary: string; workflowTag: string }) => {
    // Create a token that a human must complete. Surface reviewToken.id to
    // your app (email, Slack, a review UI) so the reviewer can complete it.
    const reviewToken = await wait.createToken({
      tags: [payload.workflowTag],
      timeout: "24h",
      idempotencyKey: `review-${payload.workflowTag}`,
    });

    // Pause until the token is completed (or times out)
    const reviewResult = await wait.forToken<{ approved: boolean; feedback?: string }>(reviewToken);

    if (!reviewResult.ok) {
      throw new Error("Review timed out");
    }

    return reviewResult.output;
  },
});
To complete the token from a server action or API route:
app/actions.ts
"use server";
import { wait } from "@trigger.dev/sdk";

export async function approveReview(tokenId: string, approvedBy: string) {
  await wait.completeToken<{ approved: boolean; approvedBy: string; approvedAt: Date }>(
    { id: tokenId },
    { approved: true, approvedBy, approvedAt: new Date() }
  );
}

Evaluator-optimizer loop

Generate output with one model, evaluate the quality with a second, and feed the feedback back in until the result meets a quality threshold.
trigger/translate-and-refine.ts
import { task } from "@trigger.dev/sdk";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const translateAndRefine = task({
  id: "translate-and-refine",
  run: async (payload: { text: string; targetLanguage: string; maxIterations?: number }) => {
    const maxIterations = payload.maxIterations ?? 3;
    let translation = "";
    let feedback = "";

    for (let i = 0; i < maxIterations; i++) {
      // Translate (or re-translate with feedback)
      const translatePrompt =
        i === 0
          ? `Translate to ${payload.targetLanguage}: ${payload.text}`
          : `Improve this translation based on feedback.\nTranslation: ${translation}\nFeedback: ${feedback}`;

      const translated = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: translatePrompt }],
      });
      translation = translated.choices[0].message.content ?? "";

      // Evaluate
      const evaluation = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [
          {
            role: "user",
            content: `Rate this translation from 1-10 and provide brief feedback.\nOriginal: ${payload.text}\nTranslation: ${translation}\nRespond as JSON: {"score": number, "feedback": string, "acceptable": boolean}`,
          },
        ],
        response_format: { type: "json_object" },
      });

      // The evaluator should return JSON, but guard against malformed output
      // rather than crashing the loop
      let evalResult: { score?: number; feedback?: string; acceptable?: boolean } = {};
      try {
        evalResult = JSON.parse(evaluation.choices[0].message.content ?? "{}");
      } catch {
        // Treat unparseable output as "not acceptable" and iterate again
      }

      if (evalResult.acceptable) {
        return { translation, iterations: i + 1, finalScore: evalResult.score };
      }

      feedback = evalResult.feedback ?? "";
    }

    return { translation, iterations: maxIterations };
  },
});

Example projects using AI agents

Claude changelog generator

Automatically generate professional changelogs from git commits using the Claude Agent SDK.

Human-in-the-loop workflow

Create audio summaries of newspaper articles with a ReactFlow UI and waitpoint token approval step.

Vercel AI SDK deep research agent

Autonomous multi-layered web research that generates comprehensive PDF reports.
