A worker is a long-running process that pulls tasks from the Hatchet queue and executes them. Workers register one or more workflows and hold a pool of execution slots.

Creating a worker

Create a worker with hatchet.worker(), register your workflows, and then start it.
from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

@hatchet.task()
def my_task(input: EmptyModel, ctx: Context) -> dict[str, str]:
    return {"result": "done"}

worker = hatchet.worker("my-worker", workflows=[my_task])
worker.start()

hatchet.worker() parameters

  • name (str, required): The name of the worker. The configured namespace is automatically prepended.
  • slots (int | None, default: None): Maximum number of standard (non-durable) tasks the worker can run concurrently. When None, the SDK resolves a default based on the registered workflows.
  • durable_slots (int | None, default: None): Maximum number of durable task slots. When None, the SDK resolves a default based on the registered workflows.
  • labels (dict[str, str | int] | None): Key-value labels attached to the worker, used for affinity-based task routing.
  • workflows (list[BaseWorkflow] | None): Shorthand to register workflows at creation time, equivalent to calling worker.register_workflows(workflows) after creation.
  • lifespan (LifespanFn | None): An async generator function that runs setup logic before the worker starts and teardown logic after it stops. See Lifespan below.

Registering workflows

Register workflows after creating the worker:
worker = hatchet.worker("my-worker")
worker.register_workflow(my_task)           # single workflow
worker.register_workflows([wf_a, wf_b])     # multiple at once
All workflows passed to hatchet.worker(workflows=[...]) are registered automatically at creation time.

Starting the worker

Synchronous start (blocking)

worker.start()
worker.start() blocks the calling thread for the lifetime of the worker. The worker creates its own event loop internally.
Do not call worker.start() from inside a running event loop. If your application already has a running loop (e.g. inside a FastAPI lifespan), use a background thread or a process instead.
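Since worker.start() blocks, one way to run it alongside an existing event loop is on a dedicated thread. The sketch below is generic and stdlib-only: blocking_start stands in for worker.start (any long-running blocking callable works the same way), and is an assumption for illustration, not part of the SDK.

```python
import threading


def start_in_background(blocking_start) -> threading.Thread:
    """Run a blocking start function (e.g. worker.start) on a daemon thread."""
    thread = threading.Thread(target=blocking_start, daemon=True)
    thread.start()
    return thread


# With a worker, this would be: start_in_background(worker.start)
```

Because the thread is a daemon, it will not prevent the process from exiting; use a non-daemon thread plus an explicit shutdown if you need graceful teardown.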

Lifespan

A lifespan function is an async generator that performs setup before the first task runs and teardown after the worker shuts down. The value yielded becomes available as ctx.lifespan inside every task.
from collections.abc import AsyncGenerator
from dataclasses import dataclass

from hatchet_sdk import Context, EmptyModel, Hatchet

hatchet = Hatchet()

@dataclass
class AppContext:
    db_pool: object  # your database pool or other resource

async def lifespan() -> AsyncGenerator[AppContext, None]:
    # --- setup: runs before the worker starts accepting tasks ---
    pool = await create_db_pool()  # your own pool factory
    ctx = AppContext(db_pool=pool)
    yield ctx
    # --- teardown: runs after the worker shuts down ---
    await pool.close()

@hatchet.task()
def my_task(input: EmptyModel, ctx: Context) -> None:
    app: AppContext = ctx.lifespan
    # use app.db_pool

def main() -> None:
    worker = hatchet.worker(
        "my-worker",
        workflows=[my_task],
        lifespan=lifespan,
    )
    worker.start()
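Mechanically, a lifespan is just an async generator that yields exactly once: everything before the yield is setup, the yielded value is what tasks see, and everything after the yield is teardown. The stdlib-only sketch below (no Hatchet involved) shows how such a generator could be driven; the "fake-pool" value and the events list are illustrative stand-ins.

```python
import asyncio
from collections.abc import AsyncGenerator

events: list[str] = []

async def lifespan() -> AsyncGenerator[dict, None]:
    events.append("setup")          # runs before the first task
    yield {"db_pool": "fake-pool"}  # the yielded value is what tasks see
    events.append("teardown")       # runs after shutdown

async def run() -> None:
    gen = lifespan()
    state = await gen.__anext__()   # advance to the yield: setup is done
    events.append(f"task sees {state['db_pool']}")
    # Resuming past the yield runs the teardown code, then the
    # generator raises StopAsyncIteration to signal it is exhausted.
    try:
        await gen.__anext__()
    except StopAsyncIteration:
        pass

asyncio.run(run())
```

This is why the lifespan must yield exactly once: a second yield would leave teardown code unreachable on the normal shutdown path.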

Worker status

The Worker.status property returns a WorkerStatus enum value:
  • WorkerStatus.INITIALIZED: Worker created but not yet started.
  • WorkerStatus.STARTING: Worker is starting up and registering with the server.
  • WorkerStatus.HEALTHY: Worker is running and the action-listener subprocess is alive.
  • WorkerStatus.UNHEALTHY: The action-listener subprocess has exited unexpectedly.
print(worker.status)  # WorkerStatus.HEALTHY
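If you need to block until a worker becomes healthy (for example in a readiness probe), a small polling helper works. The helper below is a generic sketch, not part of the SDK; the commented usage line assumes a worker and WorkerStatus are in scope.

```python
import time
from typing import Callable


def wait_until(predicate: Callable[[], bool],
               timeout: float = 10.0,
               interval: float = 0.1) -> bool:
    """Poll predicate until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check after the deadline


# With a worker, this might look like:
# wait_until(lambda: worker.status == WorkerStatus.HEALTHY, timeout=30.0)
```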

Slot types

Hatchet distinguishes two categories of worker slots:
  • Standard slots (slots) — used by regular @hatchet.task() and @workflow.task() tasks.
  • Durable slots (durable_slots) — used by @hatchet.durable_task() and @workflow.durable_task() tasks.
Durable tasks can be evicted from their slot while waiting for a condition (sleep, event) and restored later, allowing a single durable slot to handle many concurrent long-running waits.
If your worker runs only standard tasks you do not need to set durable_slots. If your worker runs only durable tasks, set slots=0 and configure durable_slots appropriately.
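Conceptually, a slot pool behaves like a counting semaphore: a task occupies a slot while it runs and frees it on completion, capping concurrency at the configured limit. The stdlib sketch below is an analogy for that behavior, not Hatchet's actual implementation.

```python
import asyncio


async def run_with_slots(tasks, slots: int) -> int:
    """Run coroutines with at most `slots` executing concurrently;
    return the peak concurrency actually observed."""
    sem = asyncio.Semaphore(slots)
    running = 0
    peak = 0

    async def guarded(coro):
        nonlocal running, peak
        async with sem:          # acquire a slot (waits when all are busy)
            running += 1
            peak = max(peak, running)
            await coro
            running -= 1         # release the slot on completion

    await asyncio.gather(*(guarded(c) for c in tasks))
    return peak


# Eight tasks, three slots: never more than three run at once.
peak = asyncio.run(run_with_slots([asyncio.sleep(0.05) for _ in range(8)], slots=3))
```

Durable slots differ from this simple model precisely because a waiting durable task can vacate its slot mid-wait instead of holding it for the full duration.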
