runs — all task run data including status, timing, costs, and task output.
metrics — metrics data for your runs including CPU, memory, and custom metrics.
Run metadata (key-value pairs set in your task via metadata.set()) is not available in the
Query tables. Use runs.retrieve() or the dashboard to inspect metadata.
Use prettyFormat() to format metric values for display:
```sql
SELECT
  timeBucket(),
  prettyFormat(avg(value), 'bytes') AS avg_memory
FROM metrics
WHERE metric_name = 'process.memory.usage'
GROUP BY timeBucket
ORDER BY timeBucket
LIMIT 1000
```
Available format types: bytes, percent, duration, durationSeconds, quantity, costInDollars.
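To build intuition for what the `bytes` formatter produces, here is a rough local sketch of that kind of human-readable byte formatting. This is my own illustration, not the actual `prettyFormat()` implementation, and the exact output of the real function may differ:

```typescript
// Hypothetical sketch of a 'bytes'-style formatter — not the real
// prettyFormat() implementation, just an illustration of the idea.
function formatBytes(value: number): string {
  const units = ["B", "KB", "MB", "GB", "TB"];
  let v = value;
  let i = 0;
  // Divide down by 1024 until the value fits the current unit
  while (v >= 1024 && i < units.length - 1) {
    v /= 1024;
    i++;
  }
  return `${v.toFixed(1)} ${units[i]}`;
}

console.log(formatBytes(3 * 1024 ** 3)); // "3.0 GB"
```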
Use query.execute() to run TRQL queries programmatically:
```ts
import { query } from "@trigger.dev/sdk";

// Basic query — defaults to environment scope and JSON format
const result = await query.execute("SELECT run_id, status FROM runs LIMIT 10");

console.log(result.results); // Array<Record<string, any>>
```
```ts
import { query } from "@trigger.dev/sdk";

const result = await query.execute("SELECT COUNT(*) as count FROM runs", {
  // Scope: "environment" (default), "project", or "organization"
  scope: "project",
  // Time period shorthand (e.g., "1h", "7d", "30d")
  period: "7d",
  // Or use explicit date range:
  // from: new Date("2024-01-01"),
  // to: new Date("2024-01-31"),
  // Response format: "json" (default) or "csv"
  format: "json",
});
```
```sql
SELECT
  task_identifier,
  count() AS total_runs,
  avg(usage_duration) AS avg_duration_ms,
  median(usage_duration) AS median_duration_ms,
  quantile(0.95)(usage_duration) AS p95_duration_ms
FROM runs
GROUP BY task_identifier
```
```sql
-- Auto-bucket by query time range
SELECT timeBucket(), count() AS runs
FROM runs
GROUP BY timeBucket()

-- Date extraction
SELECT toYear(triggered_at), toMonth(triggered_at), toHour(triggered_at)
FROM runs

-- Truncate to start of period
SELECT toStartOfDay(triggered_at) AS day, count() AS runs_per_day
FROM runs
GROUP BY day

-- Date arithmetic
SELECT dateDiff('minute', executed_at, completed_at) AS duration_minutes
FROM runs
WHERE completed_at IS NOT NULL
```
The output, error, and metrics.attributes columns are already JSON. Use dot notation to access nested fields — no JSONExtract* needed:
```sql
SELECT
  run_id,
  output.message AS output_message,
  output.count AS count
FROM runs
WHERE task_identifier = 'my-task'
  AND output.externalId = 'something'
ORDER BY triggered_at DESC
LIMIT 100
```
Avoid filtering by triggered_at directly in your query. The dashboard, SDK, and API all
apply a time filter automatically, so use period or from/to instead; that way the same
query can be reused across different time ranges.
```ts
// Period shorthand
await query.execute("SELECT count() FROM runs", { period: "7d" });

// Explicit date range
await query.execute("SELECT count() FROM runs", {
  from: new Date("2024-01-01"),
  to: new Date("2024-01-31"),
});

// Unix timestamps
await query.execute("SELECT count() FROM runs", {
  from: Date.now() - 7 * 24 * 60 * 60 * 1000,
  to: Date.now(),
});
```
Supported period shorthands: 1h, 6h, 12h, 1d, 4d, 7d, 30d, 90d, etc.
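The shorthands are just a compact way of expressing a relative from/to range. As an illustration, a helper like the hypothetical one below (not part of the SDK) shows how a shorthand such as "7d" maps to an explicit `from` timestamp:

```typescript
// Hypothetical helper — not part of @trigger.dev/sdk. Converts a
// period shorthand like "7d" or "6h" into a `from` timestamp
// (ms since epoch) relative to `now`.
function periodToFrom(period: string, now: number = Date.now()): number {
  const match = /^(\d+)([hd])$/.exec(period);
  if (!match) throw new Error(`Unsupported period: ${period}`);
  const amount = Number(match[1]);
  // Milliseconds per hour or per day
  const msPerUnit = match[2] === "h" ? 3_600_000 : 86_400_000;
  return now - amount * msPerUnit;
}

// Seven days back from an instant seven days after the epoch is the epoch itself
console.log(periodToFrom("7d", 7 * 86_400_000)); // 0
```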
```sql
SELECT task_identifier, run_id, error, triggered_at
FROM runs
WHERE status = 'Failed'
ORDER BY triggered_at DESC
```
Set the time filter to 1d in the dashboard or period: "1d" in the SDK.
Task success rate by day
```sql
SELECT
  toDate(triggered_at) AS day,
  task_identifier,
  countIf(status = 'Completed') AS completed,
  countIf(status = 'Failed') AS failed,
  round(completed / (completed + failed) * 100, 2) AS success_rate_pct
FROM runs
WHERE status IN ('Completed', 'Failed')
GROUP BY day, task_identifier
ORDER BY day DESC, task_identifier
```
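If you prefer to post-process results in TypeScript instead of computing the rate in SQL, the same arithmetic looks like this. The row shape here is a hypothetical sketch of what the grouped query returns:

```typescript
// Hypothetical row shape for the grouped success-rate results.
type DayRow = {
  day: string;
  task_identifier: string;
  completed: number;
  failed: number;
};

// Mirrors round(completed / (completed + failed) * 100, 2)
function successRatePct(row: DayRow): number {
  const pct = (row.completed / (row.completed + row.failed)) * 100;
  return Math.round(pct * 100) / 100;
}

const sample: DayRow = {
  day: "2024-01-01",
  task_identifier: "my-task",
  completed: 197,
  failed: 3,
};
console.log(successRatePct(sample)); // 98.5
```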
```sql
SELECT
  timeBucket() AS time,
  task_identifier,
  avg(usage_duration) AS avg_duration_ms,
  count() AS run_count
FROM runs
WHERE usage_duration IS NOT NULL
GROUP BY time, task_identifier
ORDER BY time ASC
```
CPU utilization over time
```sql
SELECT timeBucket(), avg(value) AS avg_cpu
FROM metrics
WHERE metric_name = 'process.cpu.utilization'
GROUP BY timeBucket
ORDER BY timeBucket
LIMIT 1000
```
Memory usage by task (past 7 days)
```sql
SELECT task_identifier, avg(value) AS avg_memory
FROM metrics
WHERE metric_name = 'process.memory.usage'
GROUP BY task_identifier
ORDER BY avg_memory DESC
LIMIT 20
```
Runs by queue and machine
```sql
SELECT
  queue,
  machine,
  count() AS run_count,
  countIf(status = 'Completed') AS completed,
  countIf(status = 'Failed') AS failed
FROM runs
GROUP BY queue, machine
ORDER BY queue, machine
```