Monitor tasks, workflows, and workers in real time using the Hatchet dashboard.
The Hatchet dashboard gives you a real-time view of everything happening in your system: task runs, workflow runs, active workers, and queue metrics. You can filter, inspect, cancel, and replay runs directly from the browser — or programmatically via the hatchet.runs and hatchet.metrics clients.
For multi-task workflows (DAGs), the run detail page renders the full DAG as a timeline. Each task appears as a node in the graph, colored by status. You can see which tasks ran in parallel, how long each took, and which tasks were skipped due to conditions. Click a task node to jump directly to its log output, input, and output payload.
You can query and manage runs from your own code using the hatchet.runs client. This is useful for building scripts, dashboards, or automation on top of the same data you see in the UI.
```python
from datetime import datetime, timedelta, timezone

from hatchet_sdk import Hatchet
from hatchet_sdk.clients.rest.models.v1_task_status import V1TaskStatus

hatchet = Hatchet()

# List failed runs from the past hour
runs = hatchet.runs.list(
    since=datetime.now(timezone.utc) - timedelta(hours=1),
    statuses=[V1TaskStatus.FAILED],
)

for run in runs.rows:
    print(run.metadata.id, run.status)
```
For date ranges longer than 7 days, use list_with_pagination to avoid performance issues:
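A minimal sketch of paginated listing. This assumes `list_with_pagination` accepts the same filter parameters as `list` and returns a flat list of runs rather than a paged response object; check the SDK reference for the exact return type.

```python
from datetime import datetime, timedelta, timezone

from hatchet_sdk import Hatchet
from hatchet_sdk.clients.rest.models.v1_task_status import V1TaskStatus

hatchet = Hatchet()

# Page through failed runs from the past 30 days; pagination is
# handled internally, so long ranges don't strain a single query
runs = hatchet.runs.list_with_pagination(
    since=datetime.now(timezone.utc) - timedelta(days=30),
    statuses=[V1TaskStatus.FAILED],
)

for run in runs:
    print(run.metadata.id, run.status)
```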
```python
# Get workflow run details (includes task-level breakdown)
details = hatchet.runs.get(workflow_run_id)

# Get a specific task run
task = hatchet.runs.get_task_run(task_run_id)

# Get just the status of a workflow run
status = hatchet.runs.get_status(workflow_run_id)
```
```python
from datetime import datetime, timedelta, timezone

from hatchet_sdk.features.runs import BulkCancelReplayOpts, RunFilter
from hatchet_sdk.clients.rest.models.v1_task_status import V1TaskStatus

# Cancel a single run
hatchet.runs.cancel(run_id)

# Bulk cancel by IDs
hatchet.runs.bulk_cancel(
    opts=BulkCancelReplayOpts(ids=[run_id_1, run_id_2])
)

# Bulk cancel by filter (running tasks in the past hour)
hatchet.runs.bulk_cancel(
    opts=BulkCancelReplayOpts(
        filters=RunFilter(
            since=datetime.now(timezone.utc) - timedelta(hours=1),
            statuses=[V1TaskStatus.RUNNING],
        )
    )
)
```
```python
# Replay a single run
hatchet.runs.replay(run_id)

# Bulk replay failed runs from the past day
hatchet.runs.bulk_replay_by_filters_with_pagination(
    since=datetime.now(timezone.utc) - timedelta(days=1),
    statuses=[V1TaskStatus.FAILED],
)
```
The hatchet.metrics client exposes the same metrics shown in the dashboard.
```python
from datetime import datetime, timedelta, timezone

from hatchet_sdk import Hatchet

hatchet = Hatchet()

# Queue depth per queue
queues = hatchet.metrics.get_queue_metrics()
print(queues)

# Task counts by status (past 24 hours by default)
task_metrics = hatchet.metrics.get_task_metrics()
print(task_metrics.queued, task_metrics.running, task_metrics.failed)

# Scoped to a time range or specific workflows
task_metrics = hatchet.metrics.get_task_metrics(
    since=datetime.now(timezone.utc) - timedelta(hours=6),
    workflow_ids=["my-workflow-id"],
)

# Raw Prometheus metrics for scraping
prometheus_text = hatchet.metrics.scrape_tenant_prometheus_metrics()
```
All methods have async equivalents prefixed with aio_:
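For example, a sketch of the async form, assuming `aio_get_task_metrics` mirrors the parameters and return type of the sync `get_task_metrics`:

```python
import asyncio

from hatchet_sdk import Hatchet

hatchet = Hatchet()


async def main() -> None:
    # Same call as get_task_metrics, but awaitable, so it can run
    # inside an async service without blocking the event loop
    task_metrics = await hatchet.metrics.aio_get_task_metrics()
    print(task_metrics.queued, task_metrics.running, task_metrics.failed)


asyncio.run(main())
```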