Pipeline Architecture
This pipeline implements a message transformation workflow.

Complete Example
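The complete YAML is not reproduced here. As a hedged sketch of the wiring described in the sections below — the task names starter, upper, lower, and reporter come from the text; the pipeline name, taskRef names, and workspace name are assumptions:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: message-transform        # assumed name
spec:
  params:
    - name: message
      default: "Hello Tekton"
  workspaces:
    - name: ws
  tasks:
    - name: starter
      taskRef:
        name: persist-param      # assumed taskRef
      params:
        - name: message
          value: $(params.message)
      workspaces:
        - name: ws
          workspace: ws
          subPath: init          # starter writes under ws/init
    - name: upper
      taskRef:
        name: to-upper           # assumed taskRef
      runAfter: ["starter"]
      workspaces:
        - name: ws
          workspace: ws
    - name: lower
      taskRef:
        name: to-lower           # assumed taskRef
      runAfter: ["starter"]
      workspaces:
        - name: ws
          workspace: ws
    - name: reporter
      taskRef:
        name: report             # assumed taskRef
      # no workspace: reporter receives data via a result reference,
      # which also orders it after upper
      params:
        - name: message
          value: $(tasks.upper.results.message)
```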
Task Definitions
Persist Parameter Task
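A minimal sketch of this task, assuming a busybox step and the task name persist-param (the result and workspace variables are standard Tekton substitutions):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: persist-param            # assumed name
spec:
  params:
    - name: message
      type: string
  workspaces:
    - name: ws
  results:
    - name: message
  steps:
    - name: write
      image: busybox
      script: |
        #!/bin/sh
        # With the pipeline mounting this workspace at subPath: init,
        # this file lands at ws/init/message in the shared volume.
        printf '%s' "$(params.message)" | tee $(workspaces.ws.path)/message \
          > $(results.message.path)
```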
Writes the input message to both a file and a result.

Transform Tasks
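As a hedged sketch of the upper variant (lower is identical apart from the `tr` direction and the output file name; the task name and image are assumptions):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: to-upper                 # assumed name
spec:
  workspaces:
    - name: ws
  results:
    - name: message
  steps:
    - name: transform
      image: busybox
      script: |
        #!/bin/sh
        # Read the file written by starter, uppercase it, and emit it
        # to both the workspace file and the task result.
        tr 'a-z' 'A-Z' < $(workspaces.ws.path)/init/message \
          | tee $(workspaces.ws.path)/upper > $(results.message.path)
```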
Both upper and lower tasks follow the same pattern: read from the workspace, transform, and write to both the workspace and results.

How It Works
Initialize Message

The starter task receives the message parameter “Hello Tekton” and writes it to:

- File: ws/init/message
- Result: message

It uses subPath: init to write to a subdirectory of the shared workspace.

Parallel Transformation
Both upper and lower tasks:

- Wait for starter to complete (via runAfter)
- Run in parallel (no dependency between them)
- Read from ws/init/message
- Transform the text
- Write to both workspace files and results

The upper task produces:

- File: ws/upper containing “HELLO TEKTON”
- Result: “HELLO TEKTON”

The lower task produces:

- File: ws/lower containing “hello tekton”
- Result: “hello tekton”
Report Result

The reporter task:

- Waits for upper to complete
- Receives the uppercase message via result: $(tasks.upper.results.message)
- Prints it (simulating sending to an external service)
- Does NOT use a workspace, so it can be scheduled on any node
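A hedged sketch of the consuming side — the task takes the message as an ordinary parameter, and the pipeline binds that parameter to $(tasks.upper.results.message), which is what creates the implicit dependency (task name and image are assumptions):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: report                   # assumed name
spec:
  params:
    - name: message
      type: string
  # deliberately no workspaces: section, so the task is not bound
  # to the workspace-sharing node
  steps:
    - name: print
      image: busybox
      script: |
        #!/bin/sh
        echo "$(params.message)"
```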
Data Flow Patterns
This pipeline demonstrates two ways to pass data between tasks:

Via Workspace Files
- Large data (files, artifacts)
- Binary data
- Multiple files
Via Task Results
- Small text data (< 4KB)
- Commit SHAs, version numbers
- Status flags
- Metadata
Execution Timeline
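From the dependencies described above, the order of execution can be sketched as:

```
starter ──┬──▶ upper ──▶ reporter
          └──▶ lower
```

Note that reporter only waits for upper, so it may run while lower is still executing.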
Workspace Organization
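Based on the file paths described in this document, the shared volume ends up laid out roughly as:

```
ws/
├── init/
│   └── message   # "Hello Tekton"  (written by starter via subPath: init)
├── upper         # "HELLO TEKTON"  (written by upper)
└── lower         # "hello tekton"  (written by lower)
```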
The shared workspace is organized with subpaths.

Affinity Assistant
Since multiple tasks use the same workspace, Tekton’s Affinity Assistant ensures they run on the same node. The reporter task doesn’t use a workspace, so it can run on any node.
You can disable the Affinity Assistant if your cluster has ReadWriteMany volumes or other shared storage solutions.
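Disabling it is done cluster-wide via Tekton’s feature-flags ConfigMap. A hedged sketch (the flag name and namespace follow the Tekton Pipelines docs; newer releases replace this flag with a coschedule setting):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"
```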
Expected Output
starter task:

Real-World Applications
This pattern is useful for:

- Multi-architecture builds: Build for amd64 and arm64 in parallel
- Test suites: Run unit, integration, and e2e tests in parallel
- Multi-environment deploys: Deploy to dev and staging in parallel
- Data processing: Transform data in parallel pipelines
- Notifications: Send status to multiple services
Key Concepts
- Diamond Dependencies: Tasks converge and diverge in the dependency graph
- Parallel Execution: Independent tasks run simultaneously
- Data Persistence: Using workspaces for file-based data sharing
- Result Propagation: Using results for lightweight data passing
- Workspace Subpaths: Organizing data within a shared workspace
- Implicit vs Explicit Dependencies: Using both runAfter and result references
- Affinity Scheduling: Tasks sharing workspaces run on the same node
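The two dependency styles above can be contrasted in one hedged pipeline fragment (taskRef names are assumptions):

```yaml
tasks:
  - name: upper
    taskRef:
      name: to-upper             # assumed name
    runAfter: ["starter"]        # explicit: ordering only, no data passed
  - name: reporter
    taskRef:
      name: report               # assumed name
    params:
      - name: message
        # implicit: referencing upper's result both passes the data
        # and orders reporter after upper
        value: $(tasks.upper.results.message)
```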
Performance Considerations
- Parallelization: The pipeline completes faster because upper and lower run in parallel
- Node Affinity: Workspace-sharing tasks avoid data transfer between nodes
- Resource Requests: The reporter task can use spare capacity on any node
Next Steps
- Explore git clone and build workflows
- Learn about building Docker images
- See finally tasks for cleanup