Overview
The `cockroach workload` command provides built-in workload generators for testing, benchmarking, and demonstrating CockroachDB. These workloads simulate realistic application patterns and are useful for:
- Performance testing and benchmarking
- Load testing and capacity planning
- Demonstrating CockroachDB features
- Developing and testing optimizations
Available Workloads
CockroachDB includes several built-in workloads based on industry-standard benchmarks and common patterns:

Bank
Simulates a banking application with account transfers and balance checks. Tests transactional correctness.
KV
Simple key-value read/write operations. Good for basic throughput testing.
TPCC
TPC-C benchmark simulating order entry workload. Industry-standard OLTP benchmark.
YCSB
Yahoo! Cloud Serving Benchmark. Configurable read/write/scan patterns.
Other Workloads
- movr - Multi-region ride-sharing application
- tpch - TPC-H decision support benchmark (analytics)
- bulkingest - Bulk data ingestion patterns
Basic Commands
Initialize a Workload
Create the schema and initial data with `cockroach workload init <workload> '<connection URL>'`.

Run a Workload
Execute the workload against the database with `cockroach workload run <workload> '<connection URL>'`.

Common Workload Examples
Bank Workload
Simulates a banking application with account transfers.

- Initialization creates a `bank` table with 1000 initial accounts
- Each account starts with a balance of $1000
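A minimal init/run sequence for the bank workload might look like the following; the connection URL assumes a local insecure cluster and is illustrative, not required:

```shell
# Create the bank schema and load the initial accounts.
cockroach workload init bank 'postgresql://root@localhost:26257?sslmode=disable'

# Transfer money between accounts for one minute.
cockroach workload run bank --duration=1m \
  'postgresql://root@localhost:26257?sslmode=disable'
```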
KV Workload
Simple key-value read and write operations.

- `--read-percent`: Percentage of operations that are reads (0-100). The remainder are writes.
- `--span-percent`: Percentage of operations that are spanning reads (scans).
- `--splits`: Number of splits to apply to the table.
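Putting those options together, a read-heavy kv run might look like the following sketch (the connection URL assumes a local insecure cluster):

```shell
# Create the kv table, pre-split into 10 ranges.
cockroach workload init kv --splits=10 \
  'postgresql://root@localhost:26257?sslmode=disable'

# 95% reads / 5% writes with 32 concurrent workers for five minutes.
cockroach workload run kv --read-percent=95 --concurrency=32 --duration=5m \
  'postgresql://root@localhost:26257?sslmode=disable'
```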
TPCC Workload
Industry-standard TPC-C order entry benchmark.

TPCC is CPU and memory intensive. Each warehouse requires approximately 200 MB of data, so start with fewer warehouses for testing.
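A small TPCC run sized for a test cluster might look like this; at roughly 200 MB per warehouse, 10 warehouses is about 2 GB of data (the connection URL assumes a local insecure cluster):

```shell
# Load 10 warehouses (~2 GB of data).
cockroach workload init tpcc --warehouses=10 \
  'postgresql://root@localhost:26257?sslmode=disable'

# Run the order-entry workload against those warehouses for 10 minutes.
cockroach workload run tpcc --warehouses=10 --duration=10m \
  'postgresql://root@localhost:26257?sslmode=disable'
```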
YCSB Workload
Configurable benchmark with different workload profiles.

- A: Update heavy (50% reads, 50% updates)
- B: Read mostly (95% reads, 5% updates)
- C: Read only (100% reads)
- D: Read latest (95% reads, 5% inserts)
- E: Scan heavy (95% scans, 5% inserts)
- F: Read-modify-write (50% reads, 50% read-modify-writes)
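A profile is selected with the `--workload` flag; for example, an update-heavy run of profile A might look like this sketch (the connection URL is illustrative):

```shell
# Load the YCSB table.
cockroach workload init ycsb 'postgresql://root@localhost:26257?sslmode=disable'

# Profile A: 50% reads / 50% updates, for five minutes.
cockroach workload run ycsb --workload=A --duration=5m \
  'postgresql://root@localhost:26257?sslmode=disable'
```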
Common Flags
- `--duration`: How long to run the workload (e.g., `1h`, `30m`, `90s`). 0 means run forever.
- `--concurrency`: Number of concurrent workers.
- `--max-rate`: Maximum rate of operations per second. 0 means unlimited.
- `--ramp`: Ramp-up period before collecting results.
- `--display-every`: How often to display stats.
- `--histograms`: Output file for latency histograms.
- `--tolerate-errors`: Continue running even if errors occur.
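Combined, a tuned run using the flags above might look like the following sketch (all values are illustrative, and the connection URL assumes a local insecure cluster):

```shell
# 30-minute kv run: 64 workers, capped at 5,000 ops/sec,
# 1-minute warm-up, stats every 5 seconds, tolerant of transient errors.
cockroach workload run kv \
  --duration=30m \
  --concurrency=64 \
  --max-rate=5000 \
  --ramp=1m \
  --display-every=5s \
  --tolerate-errors \
  'postgresql://root@localhost:26257?sslmode=disable'
```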
Multi-Region Testing
Workloads can test multi-region performance, for example by running workload clients in each region against region-local nodes.

Performance Testing Best Practices
- Use `--ramp` so warm-up time is excluded from measurements
- Run workload clients on machines separate from the cluster nodes
- Run long enough to reach steady state before comparing results
- Track tail latency (p95/p99), not just throughput
Output Metrics
Workload output shows:

- _elapsed: Time since workload started
- _errors: Cumulative error count
- ops/sec(inst): Instantaneous operations per second
- ops/sec(cum): Cumulative average operations per second
- p50(ms): 50th percentile latency
- p95(ms): 95th percentile latency
- p99(ms): 99th percentile latency
- pMax(ms): Maximum latency
Connection Strings
Workloads support standard PostgreSQL connection strings, for example `postgresql://root@localhost:26257/bank?sslmode=disable`.

Troubleshooting
Low throughput / high latency
Possible causes:
- Insufficient cluster resources
- Network bottlenecks
- Workload client CPU saturated
- Replication lag
Solutions:
- Increase concurrency: `--concurrency=100`
- Run workload from multiple machines
- Add more nodes to the cluster
- Check cluster CPU and disk utilization
Transaction retry errors
Normal for some workloads (especially bank). CockroachDB uses optimistic concurrency.

Solutions:
- Use `--tolerate-errors` to continue despite retries
- Reduce contention by lowering `--concurrency`
- For the bank workload, increase `--rows` to reduce conflicts
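Applied to the bank workload, those mitigations might look like the following sketch (values are illustrative; the connection URL assumes a local insecure cluster):

```shell
# Keep running through retry errors; more rows spreads writes
# across more accounts, reducing per-row contention.
cockroach workload run bank --tolerate-errors --rows=10000 \
  'postgresql://root@localhost:26257?sslmode=disable'
```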
Out of memory errors
Workload or cluster running out of memory.

Solutions:
- Reduce data size: `--rows=1000` instead of millions
- Reduce concurrency
- Increase cluster memory
- For TPCC, reduce `--warehouses`
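For example, a memory-constrained TPCC test might be sized down like this (at roughly 200 MB per warehouse, 5 warehouses is about 1 GB; values are illustrative):

```shell
# Smaller dataset and fewer workers to reduce memory pressure.
cockroach workload init tpcc --warehouses=5 \
  'postgresql://root@localhost:26257?sslmode=disable'
cockroach workload run tpcc --warehouses=5 --concurrency=8 --duration=5m \
  'postgresql://root@localhost:26257?sslmode=disable'
```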
Related Documentation
- Performance Benchmarking - Benchmarking methodology
- Performance Optimization - Tuning cluster performance
- Cluster Configuration - Cluster resource settings