A worker is a long-running process that connects to the Hatchet server and listens for tasks to execute. Workers pull work from the queue, run the registered task functions, and report results back. You can run as many workers as you need — across any number of machines — and Hatchet distributes tasks across them automatically.

Creating a worker

Call hatchet.worker() to create a worker. Pass your tasks or workflows in the workflows list, then call worker.start() to begin processing.
worker.py
from hatchet_sdk import Hatchet
from my_workflows import my_task

hatchet = Hatchet()

def main() -> None:
    worker = hatchet.worker(
        "my-worker",
        slots=100,
        workflows=[my_task],
    )
    worker.start()

if __name__ == "__main__":
    main()

Worker parameters

name
string
required
A unique name for this worker. Used in the Hatchet UI and logs to identify which worker handled a given task run.
slots
int
The maximum number of standard task runs the worker may execute concurrently. When all slots are occupied, the worker stops accepting new tasks until a slot is freed. Defaults to 100 in TypeScript and Go. In Python, the default is derived from the registered workflows; if no workflows require a specific slot count, a default of 100 is used.
durable_slots
int
The maximum number of durable task runs the worker may hold concurrently. Durable tasks have their own separate slot pool that does not compete with standard task slots. Defaults to 1000 in TypeScript and Go. In Python, the default is derived from whether any registered workflows contain durable tasks.
labels
dict[str, str | int]
A dictionary of key/value metadata attached to the worker at startup. Labels are used by worker affinity rules to route tasks to workers that have specific capabilities (for example, a particular ML model loaded into memory). See Worker affinity for details.
workflows
list
A list of workflows or standalone tasks to register with the worker. The worker will only accept task runs for the actions it has registered. In Python you can also call worker.register_workflow() or worker.register_workflows() after construction.
lifespan
AsyncGenerator function
An async generator function that runs alongside the worker. Code before the yield executes during worker startup; code after the yield runs when the worker shuts down. The yielded value is available to all tasks via ctx.lifespan. See Lifespan below.

Worker slots

Slots control how many tasks a worker can run at the same time. When a worker has no free slots, the Hatchet server will not dispatch new tasks to it — those tasks queue until a slot opens or another worker picks them up.
worker = hatchet.worker("my-worker", slots=50)
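As a rough rule of thumb for choosing a slot count (this is capacity-planning arithmetic, not part of the Hatchet API), Little's law says steady-state concurrency is roughly arrival rate times average task duration:

```python
import math


def required_slots(
    tasks_per_second: float, avg_task_seconds: float, headroom: float = 1.5
) -> int:
    """Estimate the slot count needed to keep up with a given arrival rate.

    By Little's law, steady-state concurrency ~= arrival rate * average task
    duration; the headroom factor leaves spare capacity for bursts.
    """
    return math.ceil(tasks_per_second * avg_task_seconds * headroom)


# e.g. 20 tasks/s averaging 2s each, with 50% headroom:
required_slots(20, 2.0)  # -> 60
```

If the estimate exceeds what one machine can comfortably run, add workers rather than slots: Hatchet distributes tasks across all connected workers.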

Durable slots

Durable tasks use a separate slot pool so that long-running durable tasks waiting on events or sleeping do not starve regular tasks of capacity. Configure durable_slots independently of slots.
worker = hatchet.worker(
    "my-worker",
    slots=100,
    durable_slots=500,
)
If you register durable tasks on a worker but do not set durable_slots, Hatchet will automatically allocate a default durable slot pool.

Registering workflows and tasks

You can register workflows and tasks when creating the worker or after construction.
worker.py
from hatchet_sdk import Hatchet
from my_workflows import workflow_a, workflow_b, task_c

hatchet = Hatchet()

# Register at construction time
worker = hatchet.worker(
    "my-worker",
    workflows=[workflow_a, workflow_b],
)

# Or register afterwards
worker.register_workflow(task_c)

Lifespan

The lifespan parameter lets you run setup and teardown logic that wraps the entire lifetime of the worker. This is useful for initializing shared resources — database connection pools, ML models, HTTP clients — and making them available to all tasks without re-initializing on every run. The lifespan function is an async generator. Yield the shared context object; tasks access it via ctx.lifespan.
worker.py
from collections.abc import AsyncGenerator
from typing import cast

from psycopg_pool import ConnectionPool
from pydantic import BaseModel, ConfigDict

from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()


class AppContext(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    pool: ConnectionPool


async def lifespan() -> AsyncGenerator[AppContext, None]:
    # Runs on startup
    with ConnectionPool("postgres://user:pass@localhost/mydb") as pool:
        yield AppContext(pool=pool)
    # Runs on shutdown


my_workflow = hatchet.workflow(name="MyWorkflow")


@my_workflow.task()
def my_task(input: EmptyModel, ctx: Context) -> dict:
    pool = cast(AppContext, ctx.lifespan).pool
    with pool.connection() as conn:
        row = conn.execute("SELECT 1").fetchone()
    return {"result": row[0]}


worker = hatchet.worker(
    "my-worker",
    slots=1,
    workflows=[my_workflow],
    lifespan=lifespan,
)

if __name__ == "__main__":
    worker.start()
If the lifespan setup raises an exception, the worker will not start. Ensure your setup code handles errors gracefully.
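One way to keep setup failures visible is to log and re-raise in the generator, and to put teardown in a finally block so it runs even if shutdown is abnormal. A minimal sketch with a stand-in resource class (the Resource type here is hypothetical, standing in for a real pool or client):

```python
import logging
from collections.abc import AsyncGenerator


class Resource:
    """Stand-in for a real shared resource (e.g. a connection pool)."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


async def lifespan() -> AsyncGenerator[Resource, None]:
    try:
        resource = Resource()  # setup that might fail
    except Exception:
        logging.exception("lifespan setup failed; worker will not start")
        raise
    try:
        yield resource  # available to tasks via ctx.lifespan
    finally:
        resource.close()  # teardown runs even on abnormal shutdown
```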

Starting the worker

worker.start() creates a new event loop and blocks until the worker is stopped by a signal or an error. Call it from your main entry point.
worker.start()
The worker handles SIGTERM and SIGINT gracefully by default — it stops accepting new tasks, waits for in-flight tasks to complete, runs lifespan teardown, then exits.

Worker health checks

Hatchet workers expose an optional HTTP health check endpoint. Enable it through your client configuration:
from hatchet_sdk.config import ClientConfig, HealthCheckConfig

config = ClientConfig(
    healthcheck=HealthCheckConfig(
        enabled=True,
        port=8001,
    )
)
The worker sets its internal status to HEALTHY once the action listener subprocess is confirmed running, and to UNHEALTHY if the subprocess exits unexpectedly. When UNHEALTHY, the worker initiates a graceful shutdown.
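With the endpoint enabled, an external monitor (a Kubernetes liveness probe, a load balancer check) can poll the configured port. A minimal polling sketch using only the standard library; note that the /health path is an assumption here — check your SDK version for the exact route:

```python
import urllib.error
import urllib.request


def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: treat the worker as unhealthy.
        return False


is_healthy("http://localhost:8001/health")
```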

Next steps

Task routing

Control which workers execute which tasks using sticky assignment and priority.

Worker affinity

Route tasks to workers with specific capabilities using labels.
