Event Sourcing Implementation
Temporal’s durable execution guarantee is built on event sourcing: an architectural pattern in which state changes are stored as a sequence of immutable events. This page explains how Temporal implements event sourcing to achieve reliable workflow execution.
What is Event Sourcing?
Event sourcing is a pattern where:
- All state changes are captured as events
- Events are immutable and stored in append-only logs
- Current state can be reconstructed by replaying events from the beginning
- Historical state can be reconstructed by replaying events up to any point in time
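The properties above can be sketched with a toy append-only log (a conceptual illustration in Python, not Temporal's implementation; the event names and state fields are invented):

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    events: list = field(default_factory=list)  # append-only: entries are never mutated

    def append(self, event):
        # Assign monotonically increasing event IDs, starting at 1.
        self.events.append({**event, "event_id": len(self.events) + 1})

    def replay(self, up_to=None):
        """Reconstruct state by replaying events from the beginning."""
        state = {"status": "NONE", "balance": 0}
        for e in self.events:
            if up_to is not None and e["event_id"] > up_to:
                break
            if e["type"] == "Started":
                state["status"] = "RUNNING"
            elif e["type"] == "Deposited":
                state["balance"] += e["amount"]
            elif e["type"] == "Completed":
                state["status"] = "COMPLETED"
        return state

log = EventLog()
log.append({"type": "Started"})
log.append({"type": "Deposited", "amount": 50})
log.append({"type": "Completed"})

current = log.replay()            # current state: full replay
historical = log.replay(up_to=2)  # historical state: replay up to EventID 2
```

The same `replay` function serves both the "current state" and "time travel" cases; only the stopping point differs.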
Why Event Sourcing for Workflows?
Event sourcing is particularly well-suited for workflow execution:
Durability
Events are persisted before being applied, ensuring no state is lost even if the server crashes.
Auditability
Complete history of what happened and when, enabling debugging and compliance.
Replay-Based Execution
Workers can reconstruct workflow state by replaying events, enabling deterministic execution.
Time Travel
State at any point in the past can be reconstructed for debugging or workflow reset.
Conflict Resolution
In multi-cluster setups, event logs can be merged and conflicts resolved.
Temporal’s Event Sourcing Model
Workflow History Events
In Temporal, Workflow History Events are the foundation of event sourcing:
- Each workflow execution has its own linear sequence of events
- Events are assigned monotonically increasing EventIDs (starting at 1)
- Events are publicly exposed via APIs (unlike internal state)
- Events contain both metadata and optional payloads
Event Structure
Each event contains an EventID, an event type, an event timestamp, and type-specific attributes.
Event Types
Temporal defines dozens of event types representing state transitions:
Workflow Execution:
- WorkflowExecutionStarted
- WorkflowExecutionCompleted
- WorkflowExecutionFailed
- WorkflowExecutionTimedOut
- WorkflowExecutionCanceled
- WorkflowExecutionTerminated
Workflow Task:
- WorkflowTaskScheduled
- WorkflowTaskStarted
- WorkflowTaskCompleted
- WorkflowTaskFailed
- WorkflowTaskTimedOut
Activity Task:
- ActivityTaskScheduled
- ActivityTaskStarted
- ActivityTaskCompleted
- ActivityTaskFailed
- ActivityTaskTimedOut
- ActivityTaskCancelRequested
- ActivityTaskCanceled
Timer:
- TimerStarted
- TimerFired
- TimerCanceled
Child Workflow:
- StartChildWorkflowExecutionInitiated
- ChildWorkflowExecutionStarted
- ChildWorkflowExecutionCompleted
- ChildWorkflowExecutionFailed
Signals and Updates:
- WorkflowExecutionSignaled
- WorkflowExecutionUpdateAccepted
- WorkflowExecutionUpdateCompleted
See temporal/api/enums/v1/event_type.proto for the complete list.
Event Generation
Who Creates Events?
Events are created by the History Service in response to:
- User Application Requests: Start, Signal, Cancel, Terminate, Update
- Worker Completions: Workflow Task completed, Activity Task completed
- Timer Expirations: Workflow timers, activity timeouts, workflow timeouts
- System Operations: Workflow reset, replication
Event Generation Flow
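A minimal sketch of the flow, with invented function and field names (not Temporal's actual API): a request arrives, the corresponding events are appended, and follow-up work is implied by those events.

```python
# Conceptual sketch: the History Service maps an incoming request to the
# events it produces. Function and field names are illustrative only.
def handle_start_workflow(history, request):
    """Append the events a StartWorkflowExecution request produces."""
    history.append({"type": "WorkflowExecutionStarted", "input": request["input"]})
    # Starting a workflow immediately schedules the first Workflow Task.
    history.append({"type": "WorkflowTaskScheduled"})
    return history

history = handle_start_workflow([], {"input": "order-123"})
# history now holds the start event followed by the first task event
```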
Event Attributes
Each event type has specific attributes. For example, an ActivityTaskScheduled event carries the activity type, the target task queue, the input payload, timeout settings, and the retry policy.
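As an illustration, such an event might look like the following sketch (field names loosely mirror Temporal's protobuf attributes, but the values and exact shape here are invented):

```python
# Illustrative shape of an ActivityTaskScheduled event; this dict is a
# sketch, not Temporal's wire format.
event = {
    "event_id": 5,
    "event_type": "ActivityTaskScheduled",
    "attributes": {
        "activity_type": "ChargeCard",
        "task_queue": "payments",
        "input": {"order_id": "order-123", "amount_cents": 4999},
        "start_to_close_timeout_seconds": 30,
        "retry_policy": {"maximum_attempts": 3},
    },
}
```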
Event Persistence
Storage Structure
Events are stored in the history_node table (Cassandra) or equivalent:
- Events are grouped into batches (typically 1-100 events per batch)
- Each batch is a binary blob (protobuf serialized)
- Batches are identified by (namespaceID, workflowID, runID, nodeID)
- NodeID represents the batch number
Batching Strategy
Events are batched for efficiency:
- Multiple events per write: Reduces database round-trips
- Configurable batch size: Balance between latency and throughput
- Atomic batch writes: All events in a batch are written together
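The batching strategy can be sketched as follows (an in-memory stand-in for history_node writes; the batch size and data structures are assumptions):

```python
# Conceptual sketch of batched, atomic history writes.
class BatchedHistoryStore:
    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.batches = {}       # node_id -> list of events (one blob per batch)
        self.pending = []

    def append(self, event):
        self.pending.append(event)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        node_id = len(self.batches) + 1   # NodeID as the batch number
        # A single write persists the whole batch atomically.
        self.batches[node_id] = list(self.pending)
        self.pending.clear()

store = BatchedHistoryStore(batch_size=2)
for i in range(5):
    store.append({"event_id": i + 1})
store.flush()   # flush the final partial batch
```

A larger batch size means fewer round-trips but more buffering before events hit storage, which is the latency/throughput trade-off mentioned above.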
Event Versioning
For workflow reset and replication:
- Events are organized in a tree structure
- Each branch has a branchToken
- Enables branching history for reset scenarios
State Reconstruction
Mutable State as Cache
While events are the source of truth, Temporal maintains Mutable State as a cache:
- Summarizes current workflow state
- Avoids replaying entire history on every request
- Persisted alongside events
- Can be regenerated from events if corrupted
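A sketch of the cache relationship (toy Python with invented event and field names, not Temporal's actual Mutable State):

```python
# Conceptual sketch: Mutable State summarizes the event log and records
# the LastEventID it reflects; it can always be rebuilt from events.
def apply(state, event):
    state = dict(state)
    if event["type"] == "WorkflowExecutionStarted":
        state["status"] = "RUNNING"
    elif event["type"] == "WorkflowExecutionCompleted":
        state["status"] = "COMPLETED"
    state["last_event_id"] = event["event_id"]
    return state

def rebuild(events):
    """Regenerate Mutable State from scratch by replaying every event."""
    state = {"status": "NONE", "last_event_id": 0}
    for e in events:
        state = apply(state, e)
    return state

events = [
    {"event_id": 1, "type": "WorkflowExecutionStarted"},
    {"event_id": 2, "type": "WorkflowExecutionCompleted"},
]
cached = rebuild(events)   # identical to an incrementally maintained copy
```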
Consistency Model
Mutable State and Events are kept consistent: Mutable State stores the LastEventID it reflects. If Mutable State is lost or corrupted, it can be rebuilt by replaying events from the beginning.
Replay Process
When a worker receives a Workflow Task:
- Receive History Events: Worker gets events since last Workflow Task
- Replay Workflow Code: Worker executes workflow function
- Compare Commands: Worker’s commands must match history exactly
- Non-Determinism Detection: If commands diverge, workflow fails
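The command-comparison step can be sketched like this (a conceptual check, not SDK code; the command strings are invented):

```python
# Conceptual sketch of non-determinism detection during replay: commands
# produced by re-executing workflow code must match recorded history.
class NonDeterministicWorkflowError(Exception):
    pass

def check_replay(recorded_commands, replayed_commands):
    # Compare the common prefix pairwise (a full check would also
    # compare lengths and command attributes).
    for i, (old, new) in enumerate(zip(recorded_commands, replayed_commands)):
        if old != new:
            raise NonDeterministicWorkflowError(
                f"command {i} diverged: history={old!r} replay={new!r}")

# Matching replay: no error.
check_replay(["ScheduleActivityTask:Charge", "StartTimer:t1"],
             ["ScheduleActivityTask:Charge", "StartTimer:t1"])

# Diverging replay: the workflow task fails.
try:
    check_replay(["StartTimer:t1"], ["ScheduleActivityTask:Charge"])
    diverged = False
except NonDeterministicWorkflowError:
    diverged = True
```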
Deterministic Execution
Why Determinism Matters
Event sourcing requires deterministic replay:
- Workflow code must produce the same commands when replayed
- Same events in, same commands out
- Enables workers to resume workflows without state serialization
Determinism Rules
Workflow code must follow these rules:
Do:
- Use workflow APIs for time, randomness, UUIDs
- Schedule Activities for side effects
- Use Signals/Queries for external input
- Write pure, deterministic logic
Don't:
- Read the system clock, generate random values, or create UUIDs directly
- Perform I/O, network calls, or other side effects inline in workflow code
- Depend on global mutable state or iteration orders that can vary between runs
Enforcing Determinism
Temporal SDKs enforce determinism:
- Workflow sandbox: Restricts available APIs
- Deterministic time: workflow.Now() returns time from events
- Deterministic random: workflow.Random() uses a seeded RNG
- Deterministic UUID: workflow.UUID() generates deterministic UUIDs
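A sketch of how seeded determinism works in principle (plain Python, not an SDK API; `workflow_random` and `workflow_uuid` are invented names):

```python
# Conceptual sketch: derive "random" values from a stable seed plus a
# per-call counter, so every replay sees the exact same sequence.
import random
import uuid

def workflow_random(run_seed, call_index):
    """Same (seed, index) -> same value on every replay."""
    return random.Random(f"{run_seed}:{call_index}").random()

def workflow_uuid(run_seed, call_index):
    """Deterministic UUID derived from the run seed and call counter."""
    return uuid.uuid5(uuid.NAMESPACE_OID, f"{run_seed}:{call_index}")

a = workflow_random("run-1", 0)
b = workflow_random("run-1", 0)   # replay: identical value
```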
Event-Driven State Machine
Commands as Intent
Workers don’t directly modify state; they express intent via Commands:
- ScheduleActivityTask
- StartTimer
- StartChildWorkflowExecution
- CompleteWorkflowExecution
- CancelTimer
Command Processing
History Service processes commands:
- Validate: Check if command is valid given current state
- Generate Events: Create events representing the command’s effect
- Update State: Apply events to Mutable State
- Create Tasks: Schedule follow-up tasks (Transfer, Timer)
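These four steps can be sketched as one function (invented names and state shape, not the History Service's actual code):

```python
# Conceptual sketch of the command-processing pipeline.
def process_command(state, command):
    if state["status"] != "RUNNING":                          # 1. validate
        raise ValueError("workflow is not running")
    if command["type"] == "ScheduleActivityTask":
        event = {"type": "ActivityTaskScheduled",
                 "activity": command["activity"]}             # 2. generate events
        state = {**state,
                 "pending_activities": state["pending_activities"] + 1}  # 3. update state
        tasks = [{"queue": "transfer",
                  "activity": command["activity"]}]           # 4. create follow-up tasks
        return state, [event], tasks
    raise NotImplementedError(command["type"])

state, events, tasks = process_command(
    {"status": "RUNNING", "pending_activities": 0},
    {"type": "ScheduleActivityTask", "activity": "ChargeCard"})
```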
Querying Event History
GetWorkflowExecutionHistory API
Users can retrieve workflow history via the GetWorkflowExecutionHistory API.
Event Filtering
Filters available:
- All Events: Complete history
- Close Event: Only workflow completion event
Pagination
History can be large:
- Events are paginated (default page size: 1000)
- Use continuation tokens for next page
- Can be streamed efficiently
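Token-based pagination can be sketched like this (the token format and page handling here are assumptions, not Temporal's wire format):

```python
# Conceptual sketch: fetch history one page at a time, following a
# continuation token until it is exhausted.
def get_history_page(events, page_size=1000, next_token=None):
    start = next_token or 0
    page = events[start:start + page_size]
    token = start + page_size if start + page_size < len(events) else None
    return page, token

events = [{"event_id": i + 1} for i in range(2500)]
all_events, token = [], None
while True:
    page, token = get_history_page(events, page_size=1000, next_token=token)
    all_events.extend(page)
    if token is None:
        break
```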
Advanced Topics
Workflow Reset
Reset allows “rewinding” a workflow:
- Choose a reset point (EventID)
- Create a new branch from that point
- Original branch is marked as ancestor
- New execution continues from reset point
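Branching reset can be sketched as a history tree (a toy structure with invented identifiers):

```python
# Conceptual sketch: a reset creates a new branch that shares events up
# to the reset point and diverges afterwards.
class HistoryTree:
    def __init__(self, events):
        self.branches = {"branch-1": list(events)}
        self.ancestors = {}

    def reset(self, from_branch, reset_event_id):
        """Create a new branch from the chosen reset point."""
        new_branch = f"branch-{len(self.branches) + 1}"
        shared = [e for e in self.branches[from_branch]
                  if e["event_id"] <= reset_event_id]
        self.branches[new_branch] = shared
        self.ancestors[new_branch] = from_branch   # original marked as ancestor
        return new_branch

tree = HistoryTree([{"event_id": i} for i in range(1, 6)])
b2 = tree.reset("branch-1", reset_event_id=3)
# Execution continues on the new branch; the original is untouched.
tree.branches[b2].append({"event_id": 4, "type": "WorkflowTaskScheduled"})
```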
Multi-Cluster Replication
For global namespaces:
- Events are replicated across clusters
- Each cluster maintains its own event log
- Version vectors track causal relationships
- Conflict resolution merges divergent histories
History Archival
Old workflows can be archived:
- Events moved from primary database to archival storage (S3, GCS)
- Reduces load on primary database
- Archived history can be retrieved on demand
- Configurable retention period before archival
History Compression
For long-running workflows:
- History can be compressed (gzip, etc.)
- Reduces storage costs
- Transparent decompression on read
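A sketch of transparent compression (JSON stands in for the protobuf blobs actually stored):

```python
# Conceptual sketch: compress a serialized event batch on write and
# decompress transparently on read.
import gzip
import json

def write_batch(events):
    return gzip.compress(json.dumps(events).encode("utf-8"))

def read_batch(blob):
    """The reader never sees the compressed form."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

batch = [{"event_id": i, "type": "TimerFired"} for i in range(1, 4)]
blob = write_batch(batch)
restored = read_batch(blob)
```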
Performance Considerations
Event Size
Large payloads inflate history: keep Activity inputs and outputs small, and pass references (for example, object-store keys) instead of large blobs.
History Size
Long-running workflows accumulate events:
- Use Continue-As-New to “reset” a workflow’s history
- Starts a new run with same workflow ID
- Previous run’s history is preserved but not loaded
Read Optimization
Mutable State caching:
- Recently accessed workflows are cached in memory
- Avoids reading history from database
- Significantly improves latency
- Cache invalidation on state changes
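A sketch of the caching behavior (an in-memory dict stands in for the real cache; the eviction policy is out of scope here):

```python
# Conceptual sketch: serve Mutable State from memory when possible and
# invalidate the entry when state changes.
class MutableStateCache:
    def __init__(self):
        self._cache = {}
        self.db_reads = 0

    def get(self, run_id, load_from_db):
        if run_id not in self._cache:
            self.db_reads += 1            # cache miss: read from database
            self._cache[run_id] = load_from_db(run_id)
        return self._cache[run_id]

    def invalidate(self, run_id):
        self._cache.pop(run_id, None)     # drop entry on state change

cache = MutableStateCache()
load = lambda run_id: {"run_id": run_id, "status": "RUNNING"}
cache.get("run-1", load)
cache.get("run-1", load)      # served from memory; no second DB read
```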
Comparison with Other Patterns
vs. CRUD (Mutable State)
Traditional CRUD:
- Overwrites state in place
- No history of changes
- Difficult to debug past issues
Event sourcing:
- Appends immutable events
- Complete audit trail
- Can reconstruct any past state
vs. Event Streaming (Kafka)
Kafka-style event streaming:
- Generic event transport
- No built-in state management
- Consumers maintain their own state
Temporal:
- Specialized for workflow execution
- Integrated state management (Mutable State)
- Deterministic replay semantics
- Strongly typed event schemas
Further Reading
- History Service: How the History Service manages events
- Workflow Lifecycle: See event generation in action
- Architecture Overview: High-level system architecture
- SDK Documentation: Writing deterministic workflows