Overview

Tasks are the fundamental computing units of a workflow. They define the actions a workflow can perform and can transform the data that flows through them. Tasks can also read from and write to the workflow context, enabling complex and dynamic workflow behaviors.
All tasks share common properties for input/output transformation, error handling, timeouts, and flow control, making them consistent and predictable across different task types.

Task Types

The Serverless Workflow DSL defines several default task types that runtimes must implement:

Call Task

Used to call services and/or functions:
callExample:
  call: http
  with:
    method: get
    endpoint:
      uri: https://api.example.com/data
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| call | string | yes | The function or service to call (e.g., http, grpc, openapi, asyncapi, a2a, or a custom function) |
| with | object | no | Parameters to pass to the function or service |

Do Task

Used to define one or more subtasks to perform in sequence:
doExample:
  do:
    - step1:
        call: function1
    - step2:
        call: function2
    - step3:
        call: function3
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| do | array | yes | A list of tasks to execute sequentially |

Emit Task

Used to emit CloudEvents:
emitExample:
  emit:
    event:
      type: order.created
      source: https://api.example.com/orders
      data:
        orderId: ${ .orderId }
        customerId: ${ .customerId }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| emit.event | object | yes | The CloudEvent to emit, conforming to the CloudEvents specification |

For Task

Used to iterate over a collection of items, conditionally performing a task for each:
forExample:
  for:
    each: item
    in: ${ .items }
    at: index
  do:
    processItem:
      call: processFunction
      with:
        item: ${ .item }
        index: ${ .index }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| for.each | string | yes | The name of the variable to store each item |
| for.in | string | yes | Runtime expression that evaluates to the collection to iterate over |
| for.at | string | no | Name of the variable to store the current iteration index |
| do | array | yes | Tasks to execute for each item |

Fork Task

Used to define two or more subtasks to perform in parallel:
forkExample:
  fork:
    branches:
      - fetchUser:
          call: http
          with:
            method: get
            endpoint:
              uri: https://api.example.com/users/${ .userId }
      - fetchOrders:
          call: http
          with:
            method: get
            endpoint:
              uri: https://api.example.com/orders/${ .userId }
      - fetchPreferences:
          call: http
          with:
            method: get
            endpoint:
              uri: https://api.example.com/preferences/${ .userId }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| fork.branches | array | yes | List of task branches to execute in parallel |
| fork.compete | boolean | no | If true, the first branch to complete successfully wins and the other branches are cancelled |
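The compete behavior can be sketched in Python (a runtime-agnostic illustration of the semantics, not part of the DSL; the branch functions below are hypothetical stand-ins for the HTTP calls in the example):

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def fetch_user():
    time.sleep(0.3)          # slower branch
    return {"branch": "fetchUser"}

def fetch_orders():
    return {"branch": "fetchOrders"}  # faster branch

def fork(branches, compete=False):
    """Run (name, fn) branches in parallel; with compete=True, the first
    branch to complete successfully wins and the rest are cancelled."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn): name for name, fn in branches}
        if compete:
            done, pending = wait(futures, return_when=FIRST_COMPLETED)
            for f in pending:
                f.cancel()   # request cancellation of losing branches
            return next(iter(done)).result()
        # Without compete, all branch results are collected in order.
        return [f.result() for f in futures]

print(fork([("fetchUser", fetch_user), ("fetchOrders", fetch_orders)], compete=True))
```

A real runtime would also abort branches that are already running; `Future.cancel` only prevents branches that have not started yet.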

Listen Task

Used to listen for one or more events:
listenExample:
  listen:
    to:
      any:
        - with:
            type: order.created
            source: https://api.example.com/orders
        - with:
            type: order.updated
            source: https://api.example.com/orders
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| listen.to | object | yes | Defines the events to listen for, using any or all semantics |

Raise Task

Used to raise an error and potentially fault the workflow:
raiseExample:
  raise:
    error:
      type: https://example.com/errors/validation
      status: 400
      title: Validation Error
      detail: The provided data is invalid
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| raise.error | object | yes | The error to raise, following the RFC 7807 Problem Details format |

Run Task

Used to run a container, script, shell command, or another workflow:

Container Process

runContainer:
  run:
    container:
      image: my-image:latest
      ports:
        http: 8080
      environment:
        LOG_LEVEL: debug

Script Process

runScript:
  run:
    script:
      language: javascript
      code: |
        const result = input.value * 2;
        return { doubled: result };
      arguments:
        value: ${ .inputValue }

Shell Process

runShell:
  run:
    shell:
      command: curl -X POST https://api.example.com/webhook
      arguments:
        data: ${ .payload }

Workflow Process

runWorkflow:
  run:
    workflow:
      namespace: utils
      name: data-processor
      version: '1.0.0'
      input: ${ .data }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| run.container | object | no | Configuration for running a container |
| run.script | object | no | Configuration for executing a script |
| run.shell | object | no | Configuration for running a shell command |
| run.workflow | object | no | Configuration for running another workflow |
Exactly one of these process configurations must be provided.

Set Task

Used to dynamically set the workflow’s data during execution:
setExample:
  set:
    timestamp: ${ now }
    userId: ${ .currentUser.id }
    status: active
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| set | object | yes | Key-value pairs to set in the workflow's data |

Switch Task

Used to dynamically select and execute one of multiple alternative paths:
switchExample:
  switch:
    - when: ${ .priority == "high" }
      then: handleHighPriority
    - when: ${ .priority == "medium" }
      then: handleMediumPriority
    - when: ${ .priority == "low" }
      then: handleLowPriority
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| switch | array | yes | Array of conditional branches with when conditions and then targets |
Branches are evaluated in order; the first branch whose when condition evaluates to true is executed.

Try Task

Used to attempt executing a task and handle errors gracefully:
tryExample:
  try:
    call: http
    with:
      method: get
      endpoint:
        uri: https://api.example.com/data
  catch:
    errors:
      with:
        status: 503
    retry:
      delay:
        seconds: 3
      backoff:
        exponential: {}
      limit:
        attempt:
          count: 5
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| try | object | yes | The task to attempt |
| catch | object | yes | Error handling configuration |
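The retry policy in the example (3-second delay, exponential backoff, at most 5 attempts) produces a growing delay schedule. A small Python sketch, assuming a doubling growth factor (a common choice; the DSL leaves the exact multiplier to the backoff definition):

```python
def retry_delays(base_seconds, max_attempts, multiplier=2):
    """Delays before each retry under exponential backoff.

    The first attempt runs immediately, so max_attempts attempts
    allow max_attempts - 1 retries, each waiting
    base_seconds * multiplier**n seconds.
    """
    return [base_seconds * multiplier**n for n in range(max_attempts - 1)]

print(retry_delays(3, 5))  # [3, 6, 12, 24]
```

With the example's settings, the task could therefore stay in retry for up to 45 seconds of cumulative delay before faulting.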

Wait Task

Used to pause or wait for a specified duration:
waitExample:
  wait:
    seconds: 30
waitUntil:
  wait: ${ .scheduledTime }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| wait | string or object | yes | Either a duration object (seconds, minutes, hours, days) or an ISO 8601 timestamp |
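The duration-object form maps directly onto a time span. A minimal sketch, assuming the field names listed above (to_timedelta is a hypothetical helper, not part of the DSL):

```python
from datetime import timedelta

def to_timedelta(duration):
    """Convert a DSL duration object (days/hours/minutes/seconds) to a timedelta."""
    return timedelta(
        days=duration.get("days", 0),
        hours=duration.get("hours", 0),
        minutes=duration.get("minutes", 0),
        seconds=duration.get("seconds", 0),
    )

print(to_timedelta({"seconds": 30}))                                 # 0:00:30
print(to_timedelta({"minutes": 5, "seconds": 30}).total_seconds())   # 330.0
```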

Common Task Properties

All tasks share a set of common properties that provide consistent behavior:

Input Transformation

taskName:
  input:
    schema:
      document:
        type: object
        properties:
          userId:
            type: string
        required:
          - userId
    from: ${ { userId: .user.id } }
  call: getUserData
  with:
    id: ${ .userId }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| input.schema | object | no | JSON Schema used to validate task input before execution |
| input.from | string | no | Runtime expression to transform the raw task input. Defaults to ${ . } |
The input.from expression evaluates the raw task input and produces the transformed input that will be set as the $input runtime argument.

Output Transformation

taskName:
  call: processData
  with:
    data: ${ .inputData }
  output:
    as: ${ { result: .status, processedAt: now } }
    schema:
      document:
        type: object
        properties:
          result:
            type: string
          processedAt:
            type: string
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| output.as | string | no | Runtime expression to transform the raw task output. Defaults to ${ . } |
| output.schema | object | no | JSON Schema used to validate task output after transformation |

Context Export

taskName:
  call: fetchUserData
  export:
    as: ${ . + { userData: $output } }
    schema:
      document:
        type: object
        properties:
          userData:
            type: object
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| export.as | string | no | Runtime expression to update the workflow context. Defaults to returning the existing context |
| export.schema | object | no | JSON Schema used to validate the exported context |
The export.as expression evaluates the transformed task output and produces the new workflow context, which replaces the current $context.

Conditional Execution

taskName:
  if: ${ .status == "active" }
  call: processActiveUser
  with:
    userId: ${ .userId }
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| if | string | no | Runtime expression that determines whether the task should execute. If it evaluates to false, the task is skipped |

Timeout Configuration

taskName:
  call: longRunningOperation
  timeout:
    after:
      minutes: 5
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| timeout.after | object | no | Duration object specifying when the task should time out |
When a timeout occurs, runtimes must abruptly interrupt the task execution and raise an error with type https://serverlessworkflow.io/spec/1.0.0/errors/timeout and status 408.

Flow Directives

do:
  - task1:
      call: function1
      then: task3  # Skip task2 and go directly to task3
  - task2:
      call: function2
  - task3:
      call: function3
      then: end    # End workflow after this task
  - task4:
      call: function4
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| then | string | no | Specifies the next task to execute, or end to gracefully end the workflow |
Flow directives may only redirect to tasks declared within their own scope. They cannot target tasks at a different depth.
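The directive semantics in the example above can be sketched as a tiny interpreter (an illustration only; real runtimes handle errors, nesting, and other directives):

```python
def execution_order(tasks):
    """Follow 'then' directives through a flat list of (name, spec) tasks.

    Execution proceeds sequentially unless a task names its successor;
    'end' stops the workflow gracefully.
    """
    index = {name: i for i, (name, _) in enumerate(tasks)}
    order, i = [], 0
    while i < len(tasks):
        name, spec = tasks[i]
        order.append(name)
        nxt = spec.get("then")
        if nxt == "end":
            break
        i = index[nxt] if nxt else i + 1
    return order

tasks = [
    ("task1", {"then": "task3"}),  # skips task2
    ("task2", {}),
    ("task3", {"then": "end"}),    # ends the workflow
    ("task4", {}),
]
print(execution_order(tasks))  # ['task1', 'task3']
```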

Task Lifecycle

Tasks go through several phases during their execution:

Status Phases

| Phase | Description |
| --- | --- |
| pending | The task has been created and is pending execution |
| running | The task is currently executing |
| waiting | The task is waiting for an event or time interval |
| suspended | The task has been manually paused |
| cancelled | The task was cancelled before completion |
| faulted | The task encountered an error |
| completed | The task completed successfully |

Execution Flow

Task Execution Steps

1. Condition evaluation: if an if condition is defined, evaluate it and skip the task when it is false.
2. Input validation: validate the raw task input against input.schema, if defined.
3. Input transformation: transform the raw input using the input.from expression.
4. Task execution: execute the task with the transformed input.
5. Output transformation: transform the raw output using the output.as expression.
6. Output validation: validate the transformed output against output.schema, if defined.
7. Context export: update the workflow context using the export.as expression.
8. Context validation: validate the exported context against export.schema, if defined.
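The execution steps can be sketched as a small Python pipeline. This is an illustration of the data flow only: the lambdas stand in for runtime expressions, and schema validation (steps 2, 6, and 8) is noted in comments rather than implemented:

```python
def run_task(task, raw_input, context):
    """Sketch of the task execution steps; returns (output, new_context)."""
    # 1. Condition evaluation: skip the task if 'if' evaluates to false.
    if "if" in task and not task["if"](raw_input):
        return raw_input, context
    # 2. Input validation against input.schema (omitted in this sketch).
    # 3. Input transformation via input.from (defaults to identity).
    task_input = task.get("input_from", lambda d: d)(raw_input)
    # 4. Task execution with the transformed input.
    raw_output = task["execute"](task_input)
    # 5. Output transformation via output.as (defaults to identity).
    output = task.get("output_as", lambda d: d)(raw_output)
    # 6. Output validation against output.schema (omitted).
    # 7. Context export via export.as (defaults to the existing context).
    context = task.get("export_as", lambda out, ctx: ctx)(output, context)
    # 8. Context validation against export.schema (omitted).
    return output, context

task = {
    "input_from": lambda d: {"userId": d["user"]["id"]},
    "execute": lambda i: {"status": "ok", "id": i["userId"]},
    "output_as": lambda o: {"result": o["status"]},
    "export_as": lambda out, ctx: {**ctx, "last": out},
}
out, ctx = run_task(task, {"user": {"id": "42"}}, {})
print(out, ctx)  # {'result': 'ok'} {'last': {'result': 'ok'}}
```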

Task Naming and References

Tasks are defined as key-value pairs where the key is the task name:
do:
  - myTaskName:      # This is the task name
      call: http
      with:
        method: get
        endpoint:
          uri: https://api.example.com/data
Task names must be unique within their scope and should be descriptive of the task’s purpose.
Tasks can be referenced using their path in the workflow:
do:
  - parentTask:
      do:
        - childTask:      # Reference: /do/0/parentTask/do/0/childTask
            call: function1
        - anotherChild:   # Reference: /do/0/parentTask/do/1/anotherChild
            call: function2
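The reference paths shown in the comments above follow a simple recursive pattern. A sketch that derives them for a nested task list (task_references is a hypothetical helper, not part of the DSL):

```python
def task_references(task_list, prefix="/do"):
    """Generate reference paths like /do/0/parentTask/do/0/childTask
    for a nested list of (name, spec) tasks."""
    refs = []
    for i, (name, spec) in enumerate(task_list):
        path = f"{prefix}/{i}/{name}"
        refs.append(path)
        if "do" in spec:  # recurse into nested subtasks
            refs.extend(task_references(spec["do"], prefix=f"{path}/do"))
    return refs

workflow = [("parentTask", {"do": [("childTask", {}), ("anotherChild", {})]})]
print(task_references(workflow))
# ['/do/0/parentTask', '/do/0/parentTask/do/0/childTask', '/do/0/parentTask/do/1/anotherChild']
```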

Best Practices

Use Descriptive Names

# Good
fetchUserProfile:
  call: http
  with:
    method: get
    endpoint:
      uri: https://api.example.com/users/${ .userId }

# Bad
task1:
  call: http
  with:
    method: get
    endpoint:
      uri: https://api.example.com/users/${ .userId }

Validate Inputs and Outputs

processData:
  input:
    schema:
      document:
        type: object
        properties:
          data:
            type: array
        required:
          - data
  call: processFunction
  with:
    items: ${ .data }
  output:
    schema:
      document:
        type: object
        properties:
          result:
            type: string

Handle Errors Gracefully

fetchData:
  try:
    call: http
    with:
      method: get
      endpoint:
        uri: https://api.example.com/data
  catch:
    errors:
      with:
        status: 503
    retry:
      delay:
        seconds: 2
      backoff:
        exponential: {}
      limit:
        attempt:
          count: 3

Set Appropriate Timeouts

longOperation:
  call: processLargeDataset
  with:
    dataset: ${ .data }
  timeout:
    after:
      minutes: 10

Transform Data Efficiently

fetchUser:
  call: http
  with:
    method: get
    endpoint:
      uri: https://api.example.com/users/${ .userId }
  output:
    as: ${ { id: .id, name: .name, email: .email } }  # Extract only needed fields

Common Patterns

Sequential Processing with Data Transformation

do:
  - fetchRawData:
      call: http
      with:
        method: get
        endpoint:
          uri: https://api.example.com/raw
      output:
        as: ${ .items }
  
  - transformData:
      for:
        each: item
        in: ${ .fetchRawData.output }
      do:
        processItem:
          call: transformFunction
          with:
            data: ${ .item }
  
  - storeResults:
      call: http
      with:
        method: post
        endpoint:
          uri: https://api.example.com/results
        body: ${ .transformData.output }

Parallel API Calls with Aggregation

do:
  - fetchAllData:
      fork:
        branches:
          - fetchUsers:
              call: http
              with:
                method: get
                endpoint:
                  uri: https://api.example.com/users
          - fetchOrders:
              call: http
              with:
                method: get
                endpoint:
                  uri: https://api.example.com/orders
          - fetchProducts:
              call: http
              with:
                method: get
                endpoint:
                  uri: https://api.example.com/products
  
  - aggregateResults:
      set:
        users: ${ .fetchAllData.fetchUsers.output }
        orders: ${ .fetchAllData.fetchOrders.output }
        products: ${ .fetchAllData.fetchProducts.output }

Conditional Task Execution

do:
  - checkStatus:
      call: getStatus
  
  - processIfActive:
      if: ${ .checkStatus.output.status == "active" }
      call: processActiveRecord
      with:
        data: ${ .checkStatus.output.data }
  
  - processIfPending:
      if: ${ .checkStatus.output.status == "pending" }
      call: processPendingRecord
      with:
        data: ${ .checkStatus.output.data }

Error Handling with Fallback

do:
  - tryPrimaryService:
      try:
        call: http
        with:
          method: get
          endpoint:
            uri: https://primary.example.com/api
      catch:
        errors:
          with:
            status: 503
        as: primaryError
  
  - useFallback:
      if: ${ .tryPrimaryService == null }
      call: http
      with:
        method: get
        endpoint:
          uri: https://fallback.example.com/api
