Why Trigger.dev for AI agents
AI workloads present challenges that typical HTTP request handlers are poorly suited to:

- Long run times — LLM chains, multi-step agents, and document processing pipelines can take minutes or hours. Trigger.dev tasks have no inherent timeout.
- Durability — If a worker restarts mid-run, the task resumes where it left off rather than starting over.
- Concurrency control — Use queues and concurrency limits to avoid overwhelming downstream APIs.
- Retries — Transient failures (rate limits, network errors) are retried automatically with configurable backoff.
- Human-in-the-loop — Pause a run indefinitely with wait.forToken() and resume it when a human approves or rejects.
- Realtime streaming — Stream LLM output tokens back to your frontend as they are generated using streams.pipe().
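A minimal task definition exercising the retry and concurrency features above might look like the following sketch. The task id, retry settings, concurrency limit, and payload shape are illustrative, and the import path assumes a recent SDK version:

```typescript
import { task } from "@trigger.dev/sdk";

// A minimal task definition. The id, retry settings, and concurrency
// limit below are illustrative; tune them to your workload.
export const summarizeDocument = task({
  id: "summarize-document",
  // Transient failures (rate limits, network errors) are retried
  // automatically with exponential backoff.
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 1_000,
    maxTimeoutInMs: 30_000,
    factor: 2,
  },
  // At most 10 runs of this task execute at once, protecting
  // downstream APIs from bursts.
  queue: { concurrencyLimit: 10 },
  run: async (payload: { documentUrl: string }) => {
    // ...call your LLM here; there is no HTTP timeout to outlive.
    return { summary: `Processed ${payload.documentUrl}` };
  },
});
```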
AI patterns
LLM chain
Call one or more LLMs in sequence, passing the output of each step to the next. Trigger.dev handles retries and durability across each step.
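The pattern can be sketched with a stubbed model call; `callLLM` is a hypothetical stand-in for your real model client:

```typescript
// Sketch of a two-step LLM chain. `callLLM` is a hypothetical
// stand-in for a real model client (OpenAI, Anthropic, etc.).
async function callLLM(prompt: string): Promise<string> {
  return `response to: ${prompt}`; // stub
}

// Each step consumes the previous step's output. Run inside a
// Trigger.dev task, each step is covered by the task's retry logic.
export async function summarizeThenTranslate(text: string): Promise<string> {
  const summary = await callLLM(`Summarize: ${text}`);
  const translated = await callLLM(`Translate to French: ${summary}`);
  return translated;
}
```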
Streaming LLM output
Pipe LLM output tokens to the Trigger.dev Realtime API so your frontend can display them as they arrive, then use the useRealtimeRunWithStreams hook on the client to receive the tokens.
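A sketch of the task side, using the Vercel AI SDK. The model and the "llm" stream key are illustrative, and the exact streams.pipe() signature should be checked against the Realtime docs for your SDK version:

```typescript
import { task, streams } from "@trigger.dev/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Sketch: stream LLM tokens to Realtime as they are generated.
export const streamCompletion = task({
  id: "stream-completion",
  run: async (payload: { prompt: string }) => {
    const result = streamText({
      model: openai("gpt-4o-mini"),
      prompt: payload.prompt,
    });

    // Pipe the token stream to Realtime under the "llm" key so a
    // frontend subscribed with useRealtimeRunWithStreams can render it.
    await streams.pipe("llm", result.textStream);

    return { text: await result.text };
  },
});
```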
Agent loop
Build an agent that calls tools in a loop until it reaches a final answer. Long-running loops are safe in Trigger.dev tasks because there is no HTTP timeout.
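The loop itself can be sketched with a stubbed model and tool set; everything here is a hypothetical stand-in for real LLM and tool calls:

```typescript
// Sketch of a tool-calling agent loop with stubbed model and tools.
type Step = { tool: string; input: string } | { answer: string };

// Stub "model": requests one tool call, then answers.
function decideNextStep(history: string[]): Step {
  return history.length === 0
    ? { tool: "search", input: "trigger.dev" }
    : { answer: `done after ${history.length} tool call(s)` };
}

const tools: Record<string, (input: string) => string> = {
  search: (input) => `results for ${input}`,
};

// Inside a Trigger.dev task this loop can run for as long as it
// needs; there is no HTTP timeout to outlive.
export function runAgent(maxSteps = 10): string {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = decideNextStep(history);
    if ("answer" in step) return step.answer;
    history.push(tools[step.tool](step.input));
  }
  return "gave up";
}
```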
Human-in-the-loop
Pause an agent mid-run and wait for a human to approve or reject before continuing. The task suspends and releases its compute slot while waiting — you are not charged for idle time.
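A sketch of the approval flow with wait.forToken(). The token payload shape, timeout, and task id are illustrative; check the wait token docs for the exact API surface:

```typescript
import { task, wait } from "@trigger.dev/sdk";

// Sketch: pause a run until a human approves or rejects.
export const publishWithApproval = task({
  id: "publish-with-approval",
  run: async (payload: { draftId: string }) => {
    // Create a token and send its id to your approval UI (email,
    // Slack, dashboard, etc.) out of band.
    const token = await wait.createToken({ timeout: "7d" });

    // Suspends the run (no compute charged) until something completes
    // the token with an { approved } payload, or the timeout fires.
    const result = await wait.forToken<{ approved: boolean }>(token);

    if (result.ok && result.output.approved) {
      return { published: payload.draftId };
    }
    return { published: null };
  },
});
```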
Parallel sub-tasks
Fan out work across multiple child tasks and collect their results. Each child task runs independently with its own retry logic.
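A fan-out/fan-in sketch using batchTriggerAndWait(). Task ids and payload shapes are illustrative:

```typescript
import { task } from "@trigger.dev/sdk";

// Child task: each run retries independently of its siblings.
export const summarizeChunk = task({
  id: "summarize-chunk",
  retry: { maxAttempts: 3 },
  run: async (payload: { chunk: string }) => {
    return { summary: payload.chunk.slice(0, 100) };
  },
});

// Parent task: fan out one child run per chunk, then collect results.
export const summarizeDocumentFanout = task({
  id: "summarize-document-fanout",
  run: async (payload: { chunks: string[] }) => {
    const results = await summarizeChunk.batchTriggerAndWait(
      payload.chunks.map((chunk) => ({ payload: { chunk } }))
    );

    // Keep successful outputs; failed children surface as !ok.
    return results.runs.flatMap((run) => (run.ok ? [run.output.summary] : []));
  },
});
```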
Supported frameworks
Vercel AI SDK
Use streamText, generateText, and generateObject inside Trigger.dev tasks. Pipe the resulting stream to Realtime with streams.pipe().
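For example, generateObject works unchanged inside a task; the model and schema below are illustrative:

```typescript
import { task } from "@trigger.dev/sdk";
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Sketch: structured extraction with the Vercel AI SDK inside a task.
export const extractInvoice = task({
  id: "extract-invoice",
  run: async (payload: { text: string }) => {
    const { object } = await generateObject({
      model: openai("gpt-4o-mini"),
      schema: z.object({
        vendor: z.string(),
        totalCents: z.number(),
      }),
      prompt: `Extract the invoice fields from: ${payload.text}`,
    });
    // A transient model failure here is retried by the task's
    // retry logic rather than failing the whole workflow.
    return object;
  },
});
```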
OpenAI SDK
Works directly with openai.chat.completions.create({ stream: true }). The returned async iterable can be passed straight to streams.pipe().
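A sketch of that wiring; the model and "llm" stream key are illustrative, and the streams.pipe() signature is assumed from the Realtime docs:

```typescript
import { task, streams } from "@trigger.dev/sdk";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the env

// Sketch: pipe an OpenAI completion stream straight to Realtime.
export const chatStream = task({
  id: "openai-chat-stream",
  run: async (payload: { prompt: string }) => {
    const completion = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: payload.prompt }],
      stream: true,
    });

    // The completion is an async iterable of chunks, which can be
    // handed to streams.pipe() as-is.
    await streams.pipe("llm", completion);
  },
});
```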
Anthropic SDK
Use anthropic.messages.stream() inside a task and pipe the resulting stream to Trigger.dev Realtime for frontend consumption.
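A sketch of the Anthropic flow. The model, max_tokens, and stream key are illustrative, and the event-to-text adapter assumes the standard content_block_delta events from the Anthropic streaming docs:

```typescript
import { task, streams } from "@trigger.dev/sdk";
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the env

// Sketch: stream an Anthropic response to Realtime.
export const claudeStream = task({
  id: "anthropic-stream",
  run: async (payload: { prompt: string }) => {
    const stream = anthropic.messages.stream({
      model: "claude-sonnet-4-5",
      max_tokens: 1024,
      messages: [{ role: "user", content: payload.prompt }],
    });

    // Adapt the event stream into an async iterable of text chunks
    // so it can be piped to Realtime.
    async function* textChunks() {
      for await (const event of stream) {
        if (
          event.type === "content_block_delta" &&
          event.delta.type === "text_delta"
        ) {
          yield event.delta.text;
        }
      }
    }
    await streams.pipe("llm", textChunks());

    const final = await stream.finalMessage();
    return { content: final.content };
  },
});
```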
LangChain
Compose chains and agents with LangChain.js. Wrap each chain invocation in a Trigger.dev task for durability and automatic retries.
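A sketch of a LangChain.js chain wrapped in a task; the model and prompt are illustrative:

```typescript
import { task } from "@trigger.dev/sdk";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Sketch: a LangChain LCEL chain inside a Trigger.dev task, so the
// whole invocation gets durability and retries.
export const langchainSummarize = task({
  id: "langchain-summarize",
  retry: { maxAttempts: 3 },
  run: async (payload: { text: string }) => {
    const chain = PromptTemplate.fromTemplate("Summarize: {text}")
      .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
      .pipe(new StringOutputParser());

    return { summary: await chain.invoke({ text: payload.text }) };
  },
});
```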
Next steps
Realtime streaming
Stream LLM output tokens to your frontend in real-time.
React hooks
Subscribe to runs and streams directly from React components.
Wait for token
Pause a task and resume it when a human approves or an external service responds.
MCP Server
Let your AI coding assistant interact directly with your Trigger.dev projects.