Query lets you write custom analytical queries against your Trigger.dev data. You can execute queries through the dashboard, the SDK, or the REST API.

Available tables

  • runs — all task run data including status, timing, costs, and task output.
  • metrics — metrics data for your runs including CPU, memory, and custom metrics.
Run metadata (key-value pairs set in your task via metadata.set()) is not available in the Query tables. Use runs.retrieve() or the dashboard to inspect metadata.

runs table columns

| Column | Type | Description |
| --- | --- | --- |
| run_id | string | Unique run identifier |
| task_identifier | string | Task slug |
| status | string | Completed, Failed, Crashed, Canceled, etc. |
| triggered_at | datetime | When the run was triggered |
| executed_at | datetime | When the run started executing |
| completed_at | datetime | When the run completed |
| usage_duration | number | Execution duration in milliseconds |
| compute_cost | number | Compute cost in dollars |
| attempt_count | number | Number of attempts |
| queue | string | Queue name |
| machine | string | Machine preset used |
| tags | array | Array of tag strings |
| output | json | Task return value (dot-notation access) |
| error | json | Error details for failed runs |
| environment_type | string | PRODUCTION, STAGING, DEVELOPMENT, PREVIEW |

metrics table columns

| Column | Type | Description |
| --- | --- | --- |
| metric_name | string | Metric identifier (e.g., process.cpu.utilization) |
| metric_type | string | gauge, sum, or histogram |
| value | number | The observed value |
| bucket_start | datetime | 10-second aggregation bucket start time |
| run_id | string | Associated run ID |
| task_identifier | string | Task slug |
| attempt_number | number | Attempt number |
| machine_id | string | Machine that produced the metric |
| machine_name | string | Machine preset (e.g., small-1x) |
| worker_version | string | Worker version |
| environment_type | string | PRODUCTION, STAGING, DEVELOPMENT, PREVIEW |
| attributes | json | Raw JSON attributes for custom data |
See Logging, tracing & metrics for the full list of automatically collected metrics and how to create custom metrics.

prettyFormat()

Use prettyFormat() to format metric values for display:
SELECT
  timeBucket(),
  prettyFormat(avg(value), 'bytes') AS avg_memory
FROM metrics
WHERE metric_name = 'process.memory.usage'
GROUP BY timeBucket
ORDER BY timeBucket
LIMIT 1000
Available format types: bytes, percent, duration, durationSeconds, quantity, costInDollars.
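For example, 'duration' and 'costInDollars' can make a per-task summary readable. This is a sketch using the documented runs columns, and it assumes 'duration' takes millisecond values (matching usage_duration):

```sql
SELECT
  task_identifier,
  prettyFormat(avg(usage_duration), 'duration') AS avg_duration,
  prettyFormat(sum(compute_cost), 'costInDollars') AS total_cost
FROM runs
GROUP BY task_identifier
LIMIT 20
```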

Using the Query dashboard

Navigate to the Query page in the dashboard to write and run queries. Features include:
  • AI-powered query generation — describe what you want in natural language
  • Syntax highlighting — SQL syntax highlighting for readability
  • Query history — access your previous queries
  • Interactive help — built-in documentation for TRQL syntax and functions
  • Export options — download results as JSON or CSV

Querying from the SDK

Use query.execute() to run TRQL queries programmatically:
import { query } from "@trigger.dev/sdk";

// Basic query — defaults to environment scope and JSON format
const result = await query.execute("SELECT run_id, status FROM runs LIMIT 10");
console.log(result.results); // Array<Record<string, any>>

Type-safe queries

Use the QueryTable type for inferred result types:
import { query, type QueryTable } from "@trigger.dev/sdk";

const typedResult = await query.execute<QueryTable<"runs", "run_id" | "status" | "triggered_at">>(
  "SELECT run_id, status, triggered_at FROM runs LIMIT 10"
);

typedResult.results.forEach((row) => {
  console.log(row.run_id, row.status); // fully typed
});

Query options

import { query } from "@trigger.dev/sdk";

const result = await query.execute("SELECT COUNT(*) as count FROM runs", {
  // Scope: "environment" (default), "project", or "organization"
  scope: "project",

  // Time period shorthand (e.g., "1h", "7d", "30d")
  period: "7d",

  // Or use explicit date range:
  // from: new Date("2024-01-01"),
  // to: new Date("2024-01-31"),

  // Response format: "json" (default) or "csv"
  format: "json",
});

CSV export

const csvResult = await query.execute("SELECT run_id, status, triggered_at FROM runs", {
  format: "csv",
  period: "7d",
});

const lines = csvResult.results.split("\n");
console.log(lines[0]); // CSV header row
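When format is "csv", results is a single string. If you want row objects instead, a minimal parser might look like this (illustrative, not part of the SDK; it assumes values contain no embedded commas or quotes):

```typescript
// Minimal CSV-to-objects helper for Query CSV results.
// Assumption: no quoted fields or embedded commas — use a real CSV
// library for anything more complex.
function parseCsv(csv: string): Record<string, string>[] {
  const [header, ...rows] = csv.trim().split("\n");
  const columns = header.split(",");
  return rows.map((row) => {
    const values = row.split(",");
    return Object.fromEntries(
      columns.map((col, i) => [col, values[i] ?? ""] as [string, string])
    );
  });
}

const rows = parseCsv("run_id,status\nrun_123,Completed\nrun_456,Failed");
console.log(rows[0].status); // "Completed"
```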

Querying from the REST API

Execute queries via HTTP POST to /api/v1/query:
curl -X POST https://api.trigger.dev/api/v1/query \
  -H "Authorization: Bearer YOUR_SECRET_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "SELECT run_id, status FROM runs LIMIT 10",
    "scope": "environment",
    "period": "7d",
    "format": "json"
  }'
See the API reference for full details.
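The same request can be made from code. buildQueryRequest below is an illustrative helper (not part of the SDK) that assembles the POST options shown in the curl example:

```typescript
// Assemble fetch() options for POST /api/v1/query.
// Illustrative helper — the body shape mirrors the curl example above.
type QueryRequest = {
  query: string;
  scope?: "environment" | "project" | "organization";
  period?: string;
  format?: "json" | "csv";
};

function buildQueryRequest(body: QueryRequest, secretKey: string) {
  return {
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${secretKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}

// Usage (Node 18+ or browsers):
// const res = await fetch("https://api.trigger.dev/api/v1/query",
//   buildQueryRequest({ query: "SELECT count() FROM runs", period: "7d" }, secretKey));
```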

TRQL syntax guide

Basic SELECT

SELECT run_id, task_identifier, status
FROM runs
LIMIT 10
Alias columns with AS:
SELECT task_identifier AS task, count() AS total
FROM runs
GROUP BY task
For performance reasons, SELECT * returns only a subset of core columns. Specify the columns you need explicitly.

Filtering with WHERE

-- Comparison operators
WHERE status = 'Failed'
WHERE status != 'Completed'
WHERE attempt_count > 3
WHERE attempt_count <= 5

-- Multiple values
WHERE status IN ('Failed', 'Crashed')

-- Pattern matching
WHERE task_identifier LIKE 'email%'
WHERE task_identifier ILIKE '%send%'

-- Date ranges
WHERE triggered_at BETWEEN '2024-01-01' AND '2024-01-31'

-- NULL checks
WHERE completed_at IS NOT NULL

-- Array column checks (e.g., tags)
WHERE has(tags, 'user_12345')
WHERE hasAny(tags, array('user_12345', 'user_67890'))
WHERE hasAll(tags, array('user_12345', 'user_67890'))
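These predicates combine with AND, OR, and parentheses as in standard SQL. For example (the queue name 'emails' is illustrative):

```sql
SELECT run_id, status, attempt_count
FROM runs
WHERE status IN ('Failed', 'Crashed')
  AND attempt_count > 1
  AND (has(tags, 'user_12345') OR queue = 'emails')
LIMIT 100
```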

Grouping and aggregation

SELECT
  task_identifier,
  count() AS total_runs,
  avg(usage_duration) AS avg_duration_ms,
  median(usage_duration) AS median_duration_ms,
  quantile(0.95)(usage_duration) AS p95_duration_ms
FROM runs
GROUP BY task_identifier

Sorting and limiting

SELECT run_id, compute_cost, triggered_at
FROM runs
ORDER BY compute_cost DESC, triggered_at ASC
LIMIT 50

Available functions

Aggregate functions

| Function | Description |
| --- | --- |
| count() | Count rows |
| countIf(col, cond) | Count rows matching condition |
| countDistinct(col) | Count unique values |
| sum(col) | Sum of values |
| avg(col) | Average of values |
| min(col) / max(col) | Min/max value |
| median(col) | Median (50th percentile) |
| quantile(p)(col) | Value at percentile p (0–1) |
| stddevPop(col) | Population standard deviation |

Date/time functions

-- Auto-bucket by query time range
SELECT timeBucket(), count() AS runs
FROM runs
GROUP BY timeBucket()

-- Date extraction
SELECT toYear(triggered_at), toMonth(triggered_at), toHour(triggered_at)
FROM runs

-- Truncate to start of period
SELECT toStartOfDay(triggered_at) AS day, count() AS runs_per_day
FROM runs
GROUP BY day

-- Date arithmetic
SELECT dateDiff('minute', executed_at, completed_at) AS duration_minutes
FROM runs
WHERE completed_at IS NOT NULL

String functions

SELECT
  lower(status) AS status_lower,
  concat(task_identifier, '-', status) AS combined,
  substring(run_id, 1, 8) AS short_id
FROM runs

Conditional functions

SELECT
  run_id,
  if(status = 'Failed', 1, 0) AS is_failed,
  multiIf(
    status = 'Completed', 'ok',
    status = 'Failed', 'bad',
    'other'
  ) AS status_category,
  coalesce(completed_at, triggered_at) AS end_time
FROM runs

JSON functions

The output, error, and metrics.attributes columns are already JSON. Use dot notation to access nested fields — no JSONExtract* needed:
SELECT
  run_id,
  output.message AS output_message,
  output.count AS count
FROM runs
WHERE task_identifier = 'my-task'
  AND output.externalId = 'something'
ORDER BY triggered_at DESC
LIMIT 100

Query scopes

| Scope | Description |
| --- | --- |
| environment (default) | Runs in the current environment only |
| project | Runs across all environments in the project |
| organization | Runs across all projects in the organization |
const result = await query.execute(
  "SELECT environment_type, count() FROM runs GROUP BY environment_type",
  { scope: "project" }
);

Time ranges

Avoid filtering by triggered_at directly in your query. The dashboard, SDK, and API all apply a time filter automatically — use period or from/to instead so your queries can be reused across different periods.
// Period shorthand
await query.execute("SELECT count() FROM runs", { period: "7d" });

// Explicit date range
await query.execute("SELECT count() FROM runs", {
  from: new Date("2024-01-01"),
  to: new Date("2024-01-31"),
});

// Unix timestamps
await query.execute("SELECT count() FROM runs", {
  from: Date.now() - 7 * 24 * 60 * 60 * 1000,
  to: Date.now(),
});
Supported period shorthands: 1h, 6h, 12h, 1d, 4d, 7d, 30d, 90d, etc.
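If you compute the timestamps yourself, a small helper can mirror the period shorthand (illustrative only; the SDK already parses period when you pass it directly):

```typescript
// Convert a period shorthand like "7d" or "6h" into { from, to }
// Unix-millisecond timestamps. Illustrative — pass `period` directly
// to query.execute() when you can.
const UNIT_MS: Record<string, number> = {
  h: 60 * 60 * 1000,
  d: 24 * 60 * 60 * 1000,
};

function periodToRange(period: string, now: number = Date.now()) {
  const match = /^(\d+)([hd])$/.exec(period);
  if (!match) throw new Error(`Unsupported period: ${period}`);
  return { from: now - Number(match[1]) * UNIT_MS[match[2]], to: now };
}

const { from, to } = periodToRange("7d", 1_700_000_000_000);
console.log(to - from); // 604800000 (7 days in milliseconds)
```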

Example queries

-- Recent failed runs with error details
SELECT
  task_identifier,
  run_id,
  error,
  triggered_at
FROM runs
WHERE status = 'Failed'
ORDER BY triggered_at DESC
Set the time filter to 1d in the dashboard or period: "1d" in the SDK.

-- Daily success rate per task
SELECT
  toDate(triggered_at) AS day,
  task_identifier,
  countIf(status = 'Completed') AS completed,
  countIf(status = 'Failed') AS failed,
  round(completed / (completed + failed) * 100, 2) AS success_rate_pct
FROM runs
WHERE status IN ('Completed', 'Failed')
GROUP BY day, task_identifier
ORDER BY day DESC, task_identifier

-- Most expensive runs by compute cost
SELECT
  run_id,
  task_identifier,
  compute_cost,
  usage_duration,
  triggered_at
FROM runs
WHERE compute_cost > 0
ORDER BY compute_cost DESC
LIMIT 10

-- Average run duration over time, per task
SELECT
  timeBucket() AS time,
  task_identifier,
  avg(usage_duration) AS avg_duration_ms,
  count() AS run_count
FROM runs
WHERE usage_duration IS NOT NULL
GROUP BY time, task_identifier
ORDER BY time ASC

-- Average CPU utilization over time
SELECT
  timeBucket(),
  avg(value) AS avg_cpu
FROM metrics
WHERE metric_name = 'process.cpu.utilization'
GROUP BY timeBucket
ORDER BY timeBucket
LIMIT 1000

-- Tasks with the highest average memory usage
SELECT
  task_identifier,
  avg(value) AS avg_memory
FROM metrics
WHERE metric_name = 'process.memory.usage'
GROUP BY task_identifier
ORDER BY avg_memory DESC
LIMIT 20

-- Run outcomes by queue and machine
SELECT
  queue,
  machine,
  count() AS run_count,
  countIf(status = 'Completed') AS completed,
  countIf(status = 'Failed') AS failed
FROM runs
GROUP BY queue, machine
ORDER BY queue, machine

Best practices

  1. Use the built-in time filter — set period or from/to rather than filtering triggered_at in the query body.
  2. Always add LIMIT — reduce returned rows when you don’t need everything.
  3. Prefer approximate aggregations for large datasets — use uniq() instead of uniqExact() for faster approximate counts.
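For instance, an approximate distinct count reads the same way as an exact one (a sketch):

```sql
SELECT uniq(task_identifier) AS approx_distinct_tasks
FROM runs
```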

Limits

| Limit | Description |
| --- | --- |
| Concurrency | Limited concurrent queries per organization |
| Row limit | Max 10,000 rows returned |
| Time period | Restricted historical range |
| Memory / time | Queries are subject to memory and execution time limits |
Check the Trigger.dev dashboard for current quota details.
