What is a Task List?
A task list is a dynamically allocated queue that routes tasks to workers. Task lists are the primary mechanism for:
Routing workflow and activity tasks to workers
Load balancing across multiple worker instances
Prioritization of different types of work
Isolation of processing pools
Task lists in Cadence are virtual queues - they don’t require pre-configuration and are created automatically when first used. Think of them as lightweight routing labels rather than traditional message queues.
Why Task Lists Matter
Task lists provide several critical capabilities:
Dynamic Routing : Route work to specific worker pools
Horizontal Scaling : Add workers polling the same task list
Specialization : Different task lists for different hardware or capabilities
Priority Management : High-priority vs low-priority task lists
Environment Isolation : Separate task lists for dev, staging, prod
Rate Limiting : Control throughput per task list
Task List Structure
TaskList Type
type TaskList struct {
    Name string        `json:"name,omitempty"`
    Kind *TaskListKind `json:"kind,omitempty"`
}
Name : Identifier for the task list (e.g., “order-processing”)
Kind : Type of task list (NORMAL, STICKY, or EPHEMERAL)
TaskListKind
type TaskListKind int32

const (
    TaskListKindNormal    TaskListKind = iota // Standard task list
    TaskListKindSticky                        // Sticky, for workflow state caching
    TaskListKindEphemeral                     // Short-lived task list
)
TaskListType
type TaskListType int32

const (
    TaskListTypeDecision TaskListType = iota // Decision tasks (workflows)
    TaskListTypeActivity                     // Activity tasks
)
Each task list actually has two queues:
Decision task queue (for workflow tasks)
Activity task queue (for activity tasks)
How Task Lists Work Internally
Task Matching Flow
When a task is scheduled, the history service hands it to the matching service. If a worker is already long-polling the task list, the task is matched to that poller directly (a sync match); otherwise it is persisted to the task list's backlog and delivered when a poller arrives (an async match).
Matching Service
The matching service manages task list operations:
type AddActivityTaskRequest struct {
    DomainUUID                    string
    Execution                     *WorkflowExecution
    SourceDomainUUID              string
    TaskList                      *TaskList
    ScheduleID                    int64
    ScheduleToStartTimeoutSeconds *int32
    Source                        *TaskSource
    ForwardedFrom                 string
}

type AddDecisionTaskRequest struct {
    DomainUUID                    string
    Execution                     *WorkflowExecution
    TaskList                      *TaskList
    ScheduleID                    int64
    ScheduleToStartTimeoutSeconds *int32
}
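To make the shape of these requests concrete, here is a small self-contained sketch that fills in an AddDecisionTaskRequest with hypothetical values (the structs are trimmed copies of the definitions above; in the real service the history component builds these internally):

```go
package main

import "fmt"

// Trimmed copies of the matching-service types shown above.
type WorkflowExecution struct {
    WorkflowID string
    RunID      string
}

type TaskList struct {
    Name string
}

type AddDecisionTaskRequest struct {
    DomainUUID                    string
    Execution                     *WorkflowExecution
    TaskList                      *TaskList
    ScheduleID                    int64
    ScheduleToStartTimeoutSeconds *int32
}

// newDecisionTaskRequest builds a request with hypothetical values, the
// way the history service would when a decision task is scheduled.
func newDecisionTaskRequest() AddDecisionTaskRequest {
    timeout := int32(10)
    return AddDecisionTaskRequest{
        DomainUUID:                    "00000000-0000-0000-0000-000000000001", // hypothetical
        Execution:                     &WorkflowExecution{WorkflowID: "order-12345", RunID: "run-1"},
        TaskList:                      &TaskList{Name: "order-processing"},
        ScheduleID:                    5, // event ID of the DecisionTaskScheduled event
        ScheduleToStartTimeoutSeconds: &timeout,
    }
}

func main() {
    req := newDecisionTaskRequest()
    fmt.Printf("decision task for %s routed to %s\n", req.Execution.WorkflowID, req.TaskList.Name)
}
```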
Task List Partitioning
Task lists can be partitioned for scalability:
type TaskListPartitionMetadata struct {
    Key           string // Partition identifier
    OwnerHostName string // Host owning this partition
}

type TaskListPartitionConfig struct {
    Version            int64
    NumReadPartitions  int32
    NumWritePartitions int32
}
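How a producer might map a task onto one of the write partitions can be sketched as hashing a routing key modulo the partition count. The /__cadence_sys/ name prefix and the hashing scheme below are illustrative assumptions, not the service's actual algorithm:

```go
package main

import (
    "fmt"
    "hash/fnv"
)

// pickWritePartition sketches spreading tasks across write partitions:
// hash a routing key and take it modulo the partition count. Partition 0
// is addressed by the root task list name; the others get a prefixed
// name. This scheme is illustrative, not the service's real algorithm.
func pickWritePartition(taskList, routingKey string, numWritePartitions int32) string {
    h := fnv.New32a()
    h.Write([]byte(routingKey))
    p := h.Sum32() % uint32(numWritePartitions)
    if p == 0 {
        return taskList
    }
    return fmt.Sprintf("/__cadence_sys/%s/%d", taskList, p)
}

func main() {
    fmt.Println(pickWritePartition("order-processing", "order-12345", 4))
}
```

The same routing key always lands on the same partition, which keeps delivery roughly ordered per key while spreading load across partitions.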
Code Examples
The examples below cover starting a workflow on a specific task list, running a worker that polls a task list, and routing activities to specialized task lists in Go, followed by a Java equivalent.
package main

import (
    "context"
    "fmt"
    "time"

    "go.uber.org/cadence/client"
)

// Example: start a workflow on the "order-processing" task list
func main() {
    c, err := client.NewClient(client.Options{
        HostPort: "localhost:7933",
        Domain:   "my-domain",
    })
    if err != nil {
        panic(err)
    }

    // Start workflow on a specific task list
    workflowOptions := client.StartWorkflowOptions{
        ID:                           "order-12345",
        TaskList:                     "order-processing", // Task list name
        ExecutionStartToCloseTimeout: time.Hour,
        TaskStartToCloseTimeout:      time.Minute,
    }
    we, err := c.StartWorkflow(context.Background(), workflowOptions, OrderWorkflow, "order-12345")
    if err != nil {
        panic(err)
    }
    fmt.Printf("Started workflow: %s\n", we.ID)
}
// Example: worker polling the "order-processing" task list
// (assumes the imports above plus "go.uber.org/cadence/worker")
func main() {
    c, err := client.NewClient(client.Options{
        Domain: "my-domain",
    })
    if err != nil {
        panic(err)
    }

    // Create a worker for a specific task list
    w := worker.New(c, "order-processing", worker.Options{
        MaxConcurrentActivityExecutionSize:     100,
        MaxConcurrentDecisionTaskExecutionSize: 50,
    })

    // Register workflows and activities
    w.RegisterWorkflow(OrderWorkflow)
    w.RegisterActivity(ProcessPayment)

    // Start polling
    if err := w.Start(); err != nil {
        panic(err)
    }
    select {} // Keep running
}
// Example: routing activities to specific task lists
// (assumes "go.uber.org/cadence/workflow" and "time" imports)
func MyWorkflow(ctx workflow.Context, input string) error {
    // Default task list for most activities
    ao := workflow.ActivityOptions{
        TaskList:               "default-activities",
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute * 5,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    // Execute activity on the default task list
    err := workflow.ExecuteActivity(ctx, StandardActivity, input).Get(ctx, nil)
    if err != nil {
        return err
    }

    // GPU-intensive activity on a specialized task list
    gpuOptions := workflow.ActivityOptions{
        TaskList:               "gpu-workers",
        ScheduleToStartTimeout: time.Minute * 5,
        StartToCloseTimeout:    time.Hour,
    }
    gpuCtx := workflow.WithActivityOptions(ctx, gpuOptions)
    return workflow.ExecuteActivity(gpuCtx, TrainMLModel, input).Get(gpuCtx, nil)
}
import com.uber.cadence.WorkflowExecution;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;
import com.uber.cadence.worker.Worker;
import com.uber.cadence.worker.WorkerFactory;

import java.time.Duration;

public class TaskListExample {
    public static void main(String[] args) {
        WorkflowClient client = WorkflowClient.newInstance(
            "localhost", 7933, "my-domain"
        );

        // Start workflow on a task list
        WorkflowOptions options = new WorkflowOptions.Builder()
            .setTaskList("order-processing")
            .setExecutionStartToCloseTimeout(Duration.ofHours(1))
            .build();
        OrderWorkflow workflow = client.newWorkflowStub(
            OrderWorkflow.class, options
        );
        WorkflowExecution execution = WorkflowClient.start(workflow::processOrder, "order-12345");

        // Create a worker for the task list
        WorkerFactory factory = WorkerFactory.newInstance(client);
        Worker worker = factory.newWorker("order-processing");
        worker.registerWorkflowImplementationTypes(OrderWorkflowImpl.class);
        worker.registerActivitiesImplementations(new PaymentActivitiesImpl());
        factory.start();
    }
}
Task List Patterns
1. Dedicated Task Lists by Type
// CPU-intensive tasks
cpuWorker := worker.New(c, "cpu-tasks", worker.Options{
    MaxConcurrentActivityExecutionSize: 100,
})
cpuWorker.RegisterActivity(DataProcessing)
cpuWorker.RegisterActivity(ReportGeneration)

// GPU tasks
gpuWorker := worker.New(c, "gpu-tasks", worker.Options{
    MaxConcurrentActivityExecutionSize: 4, // Limited by GPU count
})
gpuWorker.RegisterActivity(MLTraining)
gpuWorker.RegisterActivity(VideoProcessing)

// I/O-intensive tasks
ioWorker := worker.New(c, "io-tasks", worker.Options{
    MaxConcurrentActivityExecutionSize: 1000,
})
ioWorker.RegisterActivity(FileUpload)
ioWorker.RegisterActivity(DatabaseQuery)
2. Priority Task Lists
func routeByPriority(ctx workflow.Context, order Order) error {
    var taskList string
    switch order.Priority {
    case "high":
        taskList = "high-priority-orders"
    case "medium":
        taskList = "medium-priority-orders"
    default:
        taskList = "low-priority-orders"
    }

    ao := workflow.ActivityOptions{
        TaskList:               taskList,
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute * 5,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)
    return workflow.ExecuteActivity(ctx, ProcessOrder, order).Get(ctx, nil)
}
3. Environment-Specific Task Lists
// Development workers
devWorker := worker.New(c, "dev-tasks", worker.Options{})
devWorker.RegisterWorkflow(ExperimentalWorkflow)
devWorker.Start()

// Staging workers
stagingWorker := worker.New(c, "staging-tasks", worker.Options{})
stagingWorker.RegisterWorkflow(TestWorkflow)
stagingWorker.Start()

// Production workers
prodWorker := worker.New(c, "prod-tasks", worker.Options{})
prodWorker.RegisterWorkflow(ProductionWorkflow)
prodWorker.Start()
4. Dynamic Task List Selection
func DynamicWorkflow(ctx workflow.Context, input Input) error {
    // Select task list based on input
    taskList := selectTaskList(input)

    ao := workflow.ActivityOptions{
        TaskList:               taskList,
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute * 10,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)
    return workflow.ExecuteActivity(ctx, ProcessData, input).Get(ctx, nil)
}

func selectTaskList(input Input) string {
    if input.Size > 1000000 {
        return "large-data-processing"
    } else if input.RequiresGPU {
        return "gpu-workers"
    }
    return "default-workers"
}
Sticky Task Lists
Sticky task lists optimize workflow execution by caching state on workers:
workerOptions := worker.Options{
    // Enable sticky execution
    StickyScheduleToStartTimeout: time.Second * 5,
}
w := worker.New(c, "my-tasks", workerOptions)
How Sticky Execution Works:
Worker completes a decision task
Workflow state is cached on the worker
Next decision task is routed to same worker (if available)
Worker reuses cached state (no replay needed)
If timeout occurs, task goes to normal task list
Benefits:
Reduced latency (no replay)
Lower load on history service
Better resource utilization
Sticky Task List Naming : Sticky task lists are named automatically:
__sticky__<original-task-list>_<worker-id>
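That naming format can be illustrated with a tiny helper; the real client generates sticky names internally, so this exists purely to show the format stated above:

```go
package main

import "fmt"

// stickyTaskListName builds a sticky task list name from the base task
// list and a worker identity, following the format described above.
// Real clients generate this name internally; this is only illustrative.
func stickyTaskListName(base, workerID string) string {
    return fmt.Sprintf("__sticky__%s_%s", base, workerID)
}

func main() {
    fmt.Println(stickyTaskListName("order-processing", "worker-a1"))
}
```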
Task List Metadata and Status
type TaskListMetadata struct {
    MaxTasksPerSecond *float64 // Rate limit
}

type TaskListStatus struct {
    BacklogCountHint int64        // Approximate queue depth
    ReadLevel        int64        // Read position
    AckLevel         int64        // Acknowledged position
    RatePerSecond    float64      // Current throughput
    TaskIDBlock      *TaskIDBlock // Task ID allocation
}
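One way to read these fields: tasks between AckLevel and ReadLevel have been handed to pollers but not yet acknowledged. A simplified sketch of that arithmetic (BacklogCountHint itself is computed server-side and also covers unread tasks):

```go
package main

import "fmt"

// Trimmed copy of the status struct shown above.
type TaskListStatus struct {
    BacklogCountHint int64
    ReadLevel        int64
    AckLevel         int64
}

// outstandingTasks counts tasks that have been read by pollers but not
// yet acknowledged - a simplified view of the ReadLevel/AckLevel gap.
func outstandingTasks(s TaskListStatus) int64 {
    return s.ReadLevel - s.AckLevel
}

func main() {
    s := TaskListStatus{BacklogCountHint: 200, ReadLevel: 1150, AckLevel: 1100}
    fmt.Printf("outstanding: %d, backlog hint: %d\n", outstandingTasks(s), s.BacklogCountHint)
}
```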
Best Practices
1. Naming Conventions
Good examples: order-processing, order-processing-high-priority, gpu-ml-training, payment-validation, notification-sender. Prefer descriptive, kebab-case names that identify the work being routed.
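A lightweight way to enforce such a convention is a validation helper in worker bootstrap code; the kebab-case rule below is a hypothetical team convention, not a Cadence requirement:

```go
package main

import (
    "fmt"
    "regexp"
)

// taskListNamePattern encodes a hypothetical team convention (not a
// Cadence requirement): lowercase words separated by single hyphens.
var taskListNamePattern = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+)*$`)

// isGoodTaskListName reports whether a name follows the convention.
func isGoodTaskListName(name string) bool {
    return taskListNamePattern.MatchString(name)
}

func main() {
    for _, n := range []string{"order-processing", "gpu-ml-training", "Orders!"} {
        fmt.Printf("%s: %v\n", n, isGoodTaskListName(n))
    }
}
```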
2. Task List Sizing
// Small task list - few workers
smallWorker := worker.New(c, "specialized-tasks", worker.Options{
    MaxConcurrentActivityExecutionSize: 10,
})

// Large task list - many workers
largeWorker := worker.New(c, "common-tasks", worker.Options{
    MaxConcurrentActivityExecutionSize: 1000,
})
Task List Capacity : Task lists scale automatically, but individual workers have limits. Add more workers for higher throughput.
3. Avoid Task List Overload
// Monitor task list backlog
resp, err := client.DescribeTaskList(ctx, &shared.DescribeTaskListRequest{
    Domain:       domain,
    TaskList:     &shared.TaskList{Name: taskListName},
    TaskListType: shared.TaskListTypeActivity.Ptr(),
})
if err != nil {
    panic(err)
}
if resp.GetBacklogCountHint() > 10000 {
    // Alert: task list backlog is high
    // Action: add more workers
}
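Turning that alert into a scaling decision can be a back-of-the-envelope calculation; the per-worker drain rate and target drain time below are assumptions you would measure for your own workload:

```go
package main

import (
    "fmt"
    "math"
)

// workersNeeded estimates how many workers are required to drain a
// backlog within targetSeconds, given the tasks/sec one worker handles.
// Both rate and target are workload-specific assumptions to be measured.
func workersNeeded(backlog int64, perWorkerRate, targetSeconds float64) int {
    if backlog <= 0 {
        return 0
    }
    return int(math.Ceil(float64(backlog) / (perWorkerRate * targetSeconds)))
}

func main() {
    // e.g. 10,000 backlogged tasks, 50 tasks/sec per worker, drain in 60s
    fmt.Println(workersNeeded(10000, 50, 60)) // prints 4
}
```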
4. Use Child Workflows for Cross-Task-List
func ParentWorkflow(ctx workflow.Context) error {
    // Parent runs on task list A
    childOptions := workflow.ChildWorkflowOptions{
        TaskList:                     "task-list-b", // Child on a different task list
        WorkflowID:                   "child-workflow-123",
        ExecutionStartToCloseTimeout: time.Hour,
    }
    ctx = workflow.WithChildOptions(ctx, childOptions)
    return workflow.ExecuteChildWorkflow(ctx, ChildWorkflow).Get(ctx, nil)
}
5. Graceful Task List Migration
// Old workers on the old task list
oldWorker := worker.New(c, "old-task-list", worker.Options{})
oldWorker.RegisterWorkflow(MyWorkflow)
oldWorker.Start()

// New workers on the new task list
newWorker := worker.New(c, "new-task-list", worker.Options{})
newWorker.RegisterWorkflow(MyWorkflow)
newWorker.Start()

// Route new workflows to the new task list
workflowOptions := client.StartWorkflowOptions{
    TaskList: "new-task-list",
}

// Keep old workers running until existing workflows complete,
// then decommission the old task list.
Task List Operations
Describe Task List
resp, err := client.DescribeTaskList(ctx, &shared.DescribeTaskListRequest{
    Domain: "my-domain",
    TaskList: &shared.TaskList{
        Name: "my-task-list",
        Kind: shared.TaskListKindNormal.Ptr(),
    },
    TaskListType:          shared.TaskListTypeActivity.Ptr(),
    IncludeTaskListStatus: true,
})
if err != nil {
    panic(err)
}
fmt.Printf("Backlog: %d\n", resp.GetBacklogCountHint())
fmt.Printf("Rate: %.2f tasks/sec\n", resp.TaskListStatus.GetRatePerSecond())
List Task List Partitions
resp, err := matchingClient.ListTaskListPartitions(ctx, &shared.ListTaskListPartitionsRequest{
    Domain:   "my-domain",
    TaskList: &shared.TaskList{Name: "my-task-list"},
})
if err != nil {
    panic(err)
}
for _, partition := range resp.ActivityTaskListPartitions {
    fmt.Printf("Partition: %s, Owner: %s\n",
        partition.GetKey(),
        partition.GetOwnerHostName())
}
Task List Throughput
Decision tasks : Typically 10-1000 per second per task list
Activity tasks : Can scale to 10,000+ per second
Factors : Worker count, task complexity, network latency
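These figures are connected by Little's law: sustained throughput is roughly in-flight tasks divided by average task duration. A quick sketch with illustrative numbers:

```go
package main

import (
    "fmt"
    "math"
)

// requiredConcurrency applies Little's law: to sustain targetRate
// tasks/sec when each task takes avgSeconds, you need roughly
// targetRate * avgSeconds tasks in flight at once.
func requiredConcurrency(targetRate, avgSeconds float64) int {
    return int(math.Ceil(targetRate * avgSeconds))
}

func main() {
    // e.g. 1,000 activity tasks/sec at 200ms each -> 200 concurrent slots
    fmt.Println(requiredConcurrency(1000, 0.2)) // prints 200
}
```

The result is a starting point for MaxConcurrentActivityExecutionSize across the worker fleet, not a hard rule; real tasks have variable latency.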
Polling Efficiency
workerOptions := worker.Options{
    // Number of concurrent pollers
    MaxConcurrentActivityTaskPollerSize: 5,
    MaxConcurrentDecisionTaskPollerSize: 2,

    // Poll rate limiting
    MaxActivitiesPerSecond: 1000,
}
Avoid Over-Polling: Too many pollers waste resources without improving throughput. Start with 2-5 pollers per task list.
Common Issues
Task Stuck in Queue
Problem : Tasks not being processed
Solutions :
Check workers are running: cadence tasklist describe --tasklist my-tasks
Verify task list name matches
Check worker has registered workflow/activity
Look for worker errors in logs
High Latency
Problem : Long ScheduleToStart times
Solutions :
Add more workers
Increase concurrent execution size
Add more pollers
Check worker resource usage
Task List Backlog
Problem : Growing queue of pending tasks
Solutions :
Scale out workers
Optimize activity execution time
Use rate limiting on workflow starts
Consider task list partitioning
Related Concepts
Workers - Poll task lists and execute tasks
Workflows - Schedule tasks onto task lists
Activities - Executed by workers pulling from task lists
Domains - Task lists are scoped to a domain
Further Reading