Consistency Guarantees
YugabyteDB offers different consistency guarantees depending on the operation type and configuration:

Single-Row Linearizability
Single-row operations appear atomic and in real-time order
Multi-Row ACID
Distributed transactions with Serializable, Snapshot, or Read Committed isolation
Timeline Consistency
Follower reads provide monotonic, causally-consistent views
Eventual Consistency
Async replication (xCluster) provides eventual consistency across universes
Single-Row Linearizability
YugabyteDB guarantees linearizability for single-row operations, one of the strongest consistency models.

Definition
Linearizability means:
- Atomicity: Every operation appears to take effect instantaneously at some point between invocation and response
- Real-time ordering: If operation A completes before operation B begins, then B observes the effects of A
- Sequential consistency: Operations appear in a total order consistent with real-time
Example
Linearizability ensures that once a write completes, all subsequent reads (that start after completion) see that write or a later value.
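As a sketch of how this guarantee plays out (the `accounts` table here is hypothetical):

```sql
-- Session 1: the write is acknowledged (completes) at some time T.
UPDATE accounts SET balance = 500 WHERE id = 1;

-- Session 2: any read that starts after T is guaranteed to see
-- balance = 500 (or a later value), never the previous value.
SELECT balance FROM accounts WHERE id = 1;
```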
How It’s Achieved
YugabyteDB achieves single-row linearizability through:
- Raft consensus: Writes replicated to a majority before acknowledgment
- Leader leases: Only current leader serves reads and writes
- Hybrid logical clocks: Provide causally-ordered timestamps
- Leader-only reads: Default reads go to tablet leader
Transaction Isolation Levels
YugabyteDB supports three SQL isolation levels in YSQL, each with different consistency and performance tradeoffs.

Serializable Isolation
- Strongest isolation level
- Transactions appear to execute in serial order
- No anomalies possible (no dirty reads, non-repeatable reads, phantoms, write skew, or read skew)
- Tracks both reads and writes
- Detects conflicts between concurrent transactions
- Aborts transactions with serialization errors when conflicts detected
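A minimal YSQL sketch of the classic write-skew case (table and column names are hypothetical); under Serializable isolation, one of the two concurrent sessions is aborted:

```sql
-- Two sessions run this concurrently against a hypothetical on_call table.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Each session checks that at least one doctor remains on duty...
SELECT count(*) FROM on_call WHERE on_duty = true;

-- ...then takes its own doctor off duty based on that stale check.
UPDATE on_call SET on_duty = false WHERE doctor = 'A';  -- Session 1
-- UPDATE on_call SET on_duty = false WHERE doctor = 'B';  -- Session 2

COMMIT;
-- One session commits; the other is aborted with SQLSTATE 40001
-- ("could not serialize access") and must retry.
```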
Snapshot Isolation (Repeatable Read)
- Reads see consistent snapshot at transaction start time
- No dirty reads or non-repeatable reads
- Prevents lost updates
- Does not prevent: Write skew or read-only anomalies
- Each transaction gets a snapshot timestamp
- Reads use MVCC to access data at snapshot time
- Writes detect conflicts with concurrent updates
- First-committer-wins conflict resolution
Snapshot isolation is the default in YSQL (YCQL transactions also use snapshot isolation) and provides a good balance between consistency and performance for most OLTP workloads.
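The snapshot behavior described above can be sketched in YSQL, where REPEATABLE READ maps to snapshot isolation (the `accounts` table is hypothetical):

```sql
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- All reads in this transaction see the snapshot taken at its start;
-- rows committed by other transactions after that point are invisible.
SELECT balance FROM accounts WHERE id = 1;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;

COMMIT;
-- Fails if a concurrent transaction already updated the same row
-- (first-committer-wins conflict resolution); the application retries.
```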
Read Committed Isolation
- Each statement sees latest committed data
- No dirty reads (uncommitted data)
- Allows: Non-repeatable reads and phantoms
- Each statement gets its own snapshot timestamp
- Different statements in same transaction may see different data
- Lower conflict probability, higher concurrency
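A hypothetical YSQL session illustrating the per-statement snapshots:

```sql
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Sees data committed before this statement started.
SELECT balance FROM accounts WHERE id = 1;

-- If another transaction commits an update to row 1 here...

-- ...this statement may see the new value: a non-repeatable read,
-- which is allowed at this isolation level.
SELECT balance FROM accounts WHERE id = 1;

COMMIT;
```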
Isolation Level Comparison
| Anomaly | Read Committed | Snapshot (Repeatable Read) | Serializable |
|---|---|---|---|
| Dirty Read | ❌ Prevented | ❌ Prevented | ❌ Prevented |
| Non-Repeatable Read | ✅ Possible | ❌ Prevented | ❌ Prevented |
| Phantom Read | ✅ Possible | ❌ Prevented | ❌ Prevented |
| Write Skew | ✅ Possible | ✅ Possible | ❌ Prevented |
| Read Skew | ✅ Possible | ✅ Possible | ❌ Prevented |
Follower Reads (Timeline Consistency)
Follower reads provide lower latency by reading from tablet followers, with timeline consistency guarantees.

What is Timeline Consistency?
- Monotonic reads: Once you read a value, subsequent reads never return older values
- Causally consistent: If update A causes update B, you never see B without A
- Bounded staleness: Data may be slightly stale (configurable bound)
- No out-of-order: Never see updates in wrong order
Enabling Follower Reads
Follower reads are a good fit for:
- Globally distributed applications
- Analytics queries that tolerate staleness
- Read scaling in read-heavy workloads
- Lower latency for geo-distributed reads
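In YSQL, follower reads are enabled with session settings; the parameter names below follow YugabyteDB's documentation, but check your version, and note that follower reads apply only to read-only operations:

```sql
-- Follower reads require read-only transactions.
SET default_transaction_read_only = true;
SET yb_read_from_followers = true;

-- Maximum acceptable staleness in milliseconds (the bounded-staleness knob).
SET yb_follower_read_staleness_ms = 30000;

-- This query may now be served by a nearby follower (events is hypothetical).
SELECT * FROM events WHERE region = 'eu';
```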
Consistency vs Performance Tradeoffs
Leader Reads (Linearizable)
Pros:
- Strongest consistency (linearizable)
- Always returns latest data
- No stale reads
Cons:
- Higher latency (must contact leader)
- Lower read throughput (leader bottleneck)
- Network hops in geo-distributed setups
Follower Reads (Timeline Consistent)
Pros:
- Lower latency (read from nearby follower)
- Higher read throughput (distribute across followers)
- Better geo-distribution support
Cons:
- Bounded staleness (not linearizable)
- May read slightly outdated data
- Requires configuring staleness bounds
Serializable Transactions
Pros:
- Prevents all anomalies
- Strongest correctness guarantees
- Simplifies application logic
Cons:
- More transaction aborts
- Higher latency due to conflict detection
- Lower throughput under contention
Read Committed Transactions
Pros:
- Fewest transaction conflicts
- Highest throughput
- PostgreSQL compatible
Cons:
- Application must handle anomalies
- Complex invariants harder to maintain
- May need application-level locking
CAP Theorem and YugabyteDB
YugabyteDB is a CP (Consistent and Partition-tolerant) system with very high availability.

During Normal Operation
- ✅ Consistency: Linearizable single-row operations
- ✅ Availability: All nodes serve reads and writes
- ✅ Partition Tolerance: N/A (no partition)
During Network Partition
- ✅ Consistency: Majority partition maintains consistency via Raft
- ⚠️ Availability: Minority partition cannot serve writes (reads depend on configuration)
- ✅ Partition Tolerance: System continues operating in majority partition
With RF=3, the system can tolerate 1 node failure. The 2 remaining nodes form a majority and continue serving all operations.
Leader Leases Prevent Split-Brain
- Leader lease duration: 2 seconds (configurable)
- Only one leader can have a valid lease at any time
- Old leader steps down when lease expires
- New leader waits out old lease before serving requests
- Small unavailability window during failover (~2-3 seconds)
Consistency in Multi-Region Deployments
Preferred Region
Pin tablet leaders to a specific region for consistent low-latency writes:
- All writes go to preferred region leaders
- Followers in other regions provide follower reads
- Synchronous replication ensures durability
- Failover to other regions if preferred region fails
Geo-Partitioning
Place specific rows in specific regions:
- Data locality: Rows physically close to users
- Reduced latency: Fewer cross-region hops
- Compliance: Data residency requirements
- Linearizability: Within each partition
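A hedged sketch of row-level geo-partitioning in YSQL, combining list partitioning with a region-pinned tablespace (table, tablespace, and placement values are illustrative):

```sql
-- Tablespace pinned to one region; placement values are illustrative.
CREATE TABLESPACE eu_ts WITH (replica_placement =
  '{"num_replicas": 3, "placement_blocks":
    [{"cloud": "aws", "region": "eu-west-1", "zone": "eu-west-1a",
      "min_num_replicas": 3}]}');

-- Partition rows by a region column so EU rows live on EU tablets.
CREATE TABLE users (
  id  uuid,
  geo text,
  PRIMARY KEY (id, geo)
) PARTITION BY LIST (geo);

CREATE TABLE users_eu PARTITION OF users
  FOR VALUES IN ('eu') TABLESPACE eu_ts;
```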
Tuning Consistency
For Lowest Latency
- Enable follower reads
- Use Read Committed isolation
- Set staleness tolerance appropriately
- Place followers near applications
For Strongest Consistency
- Use leader reads (default)
- Use Serializable isolation
- Disable follower reads
- Accept higher latency
For High Throughput
- Use Snapshot/Read Committed isolation
- Enable follower reads for read-heavy workloads
- Optimize transaction batch sizes
- Minimize transaction scope
For Geo-Distribution
- Use preferred regions for write locality
- Use geo-partitioning for data residency
- Enable follower reads in remote regions
- Consider async replication (xCluster) for DR
Monitoring Consistency
Key metrics to monitor include follower/replication lag, transaction conflict and abort rates, and the frequency of serialization errors (SQLSTATE 40001).

Best Practices
Choose Appropriate Isolation Level
- Default to Snapshot isolation for most workloads
- Use Serializable only when preventing all anomalies is critical
- Use Read Committed for maximum concurrency
- Test your workload at different levels
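For example, the isolation level can be set per transaction or as a session-wide default:

```sql
-- Per transaction:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... work ...
COMMIT;

-- Session-wide default:
SET default_transaction_isolation = 'repeatable read';
```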
Design for Conflicts
- Minimize transaction duration
- Avoid hot keys (frequently updated rows)
- Use optimistic locking with version columns
- Implement exponential backoff retry logic
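The version-column pattern above can be sketched as follows (`accounts` and `version` are hypothetical names):

```sql
-- The application previously read the row and saw version = 7.
UPDATE accounts
SET balance = 400, version = version + 1
WHERE id = 1 AND version = 7;
-- If 0 rows were updated, a concurrent writer got there first:
-- re-read the row and retry with the new version.
```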
Use Follower Reads Strategically
- Enable for analytics and reporting
- Set realistic staleness bounds
- Don’t use for operations requiring latest data
- Monitor follower lag metrics
Handle Errors Gracefully
- Catch and retry serialization errors (40001)
- Implement circuit breakers for cascading failures
- Log and alert on high conflict rates
- Consider application-level conflict resolution
Next Steps
Distributed Transactions
Deep dive into transaction implementation
Replication
Learn how Raft ensures consistency
Data Model
Understand MVCC and versioning
Architecture
Review the overall system design

