Overview
The bench command:
- Creates a temporary stream in the specified basin
- Writes records at a target throughput for a specified duration
- Simultaneously reads records to measure live read performance
- Waits, then performs a catchup read to measure historical read performance
- Verifies data integrity using hash chains
- Reports detailed statistics
- Deletes the temporary stream
Basic usage
Run a benchmark on a basin with the default parameters:
- Record size: 8 KiB
- Target throughput: 1 MiB/s
- Duration: 60 seconds
- Catchup delay: 20 seconds
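A minimal invocation might look like the following. The `bench` subcommand and the basin URI form follow the patterns used elsewhere on this page, but verify the exact syntax against `s2 bench --help`:

```shell
# Run with the defaults listed above (8 KiB records, 1 MiB/s, 60s, 20s catchup delay)
s2 bench s2://my-basin
```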
Customize benchmark parameters
Record size
Set the metered record size (includes headers and overhead).
Target throughput
Set the target write throughput in MiB/s.
Duration
Set how long to run the write workload. Supported units: s (seconds), m (minutes), h (hours).
Catchup delay
Set the delay before starting the catchup read.
Storage class
Specify the storage class for the test stream.
Example benchmarks
High-throughput test
Low-latency test
Standard storage test
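These scenarios could be invoked roughly as follows. Except for `--target-mibps`, which appears later on this page, all flag names here are assumptions; check `s2 bench --help` for the real ones:

```shell
# High-throughput test: large records at a high target rate (flag names assumed)
s2 bench s2://my-basin --record-bytes 65536 --target-mibps 32 --duration 120s

# Low-latency test: small records on express storage (flag names assumed)
s2 bench s2://my-basin --record-bytes 1024 --target-mibps 1 --storage-class express

# Standard storage test (flag names assumed)
s2 bench s2://my-basin --storage-class standard --duration 60s
```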
Understanding benchmark output
During the benchmark, you’ll see real-time progress.
Metrics explained
Throughput
- MiB/s - Mebibytes per second (1 MiB = 1,048,576 bytes)
- rec/s - Records per second
Latency
Ack Latency - Time from submitting a record to receiving acknowledgment from S2.
- Lower is better for write operations
- Measures write path performance
End-to-End Latency - Time from a record being written to it arriving at the live reader.
- Lower is better for real-time applications
- Measures total system latency
Statistics
- min - Minimum latency observed
- median - 50th percentile (half of requests faster)
- p90 - 90th percentile (90% of requests faster)
- p99 - 99th percentile (99% of requests faster)
- max - Maximum latency observed
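The reported statistics can be sketched as follows. This uses the nearest-rank percentile method; the real tool may interpolate, so exact values can differ slightly:

```python
import math

def latency_stats(samples_ms):
    """Summarize latency samples the way the benchmark report does:
    min, median, p90, p99, and max."""
    s = sorted(samples_ms)

    def pct(p):
        # Nearest-rank: smallest value with at least p% of samples at or below it
        return s[min(len(s) - 1, math.ceil(p / 100 * len(s)) - 1)]

    return {"min": s[0], "median": pct(50), "p90": pct(90), "p99": pct(99), "max": s[-1]}

stats = latency_stats(list(range(1, 101)))  # samples of 1..100 ms
print(stats)  # {'min': 1, 'median': 50, 'p90': 90, 'p99': 99, 'max': 100}
```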
Catchup read
After the write workload completes, the benchmark waits (default 20s), then performs a catchup read, which:
- Tests how quickly historical data can be read
- Verifies all written records are readable
- Often shows higher throughput than live reads
Data integrity verification
The benchmark verifies data integrity using:
- Hash chains - Each record’s hash depends on the previous record
- Record counting - Ensures all written records are read back
- Body validation - Verifies record body size matches expected
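The three checks above can be sketched together. The specific chain format (SHA-256, a zero seed, the previous hash embedded as a body prefix) is an assumption for illustration, not the tool's actual wire format:

```python
import hashlib
import os

HASH_LEN = 32  # SHA-256 digest size

def make_records(n, body_size, seed=b"\x00" * HASH_LEN):
    """Writer side: each body starts with the hash of the previous record,
    so any loss, reorder, or corruption breaks the chain."""
    prev, records = seed, []
    for _ in range(n):
        body = prev + os.urandom(body_size - HASH_LEN)
        prev = hashlib.sha256(body).digest()
        records.append(body)
    return records

def verify_records(records, body_size, expected_count, seed=b"\x00" * HASH_LEN):
    """Reader side: recompute the chain (hash chains), check body sizes
    (body validation), and check the total (record counting)."""
    prev = seed
    for body in records:
        if len(body) != body_size or body[:HASH_LEN] != prev:
            return False
        prev = hashlib.sha256(body).digest()
    return len(records) == expected_count

recs = make_records(5, 64)
print(verify_records(recs, 64, 5))       # True
print(verify_records(recs[:4], 64, 5))   # False: a record is missing
```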
Rate limiting
The benchmark implements time-based rate limiting to achieve the target throughput:
- Calculates expected bytes submitted vs. time elapsed
- Throttles writes to match target MiB/s
- Accounts for network latency and batching
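The pacing loop above can be sketched as follows; this is a simplified model, not the tool's implementation. Because the budget is computed from wall-clock time rather than fixed per-record sleeps, delays from network latency or batching are automatically made up on later iterations:

```python
import time

def paced_writes(target_mibps, record_bytes, duration_s, submit):
    """Throttle writes so that bytes submitted tracks the target rate.

    Before each write, compare bytes already sent with the bytes the
    target rate allows for the elapsed time, and sleep off any surplus.
    """
    target_bps = target_mibps * 1024 * 1024
    start = time.monotonic()
    sent = 0
    while time.monotonic() - start < duration_s:
        allowed = (time.monotonic() - start) * target_bps
        surplus = sent + record_bytes - allowed
        if surplus > 0:
            time.sleep(surplus / target_bps)
        submit(record_bytes)
        sent += record_bytes
    return sent

# Pace a dummy writer at 1 MiB/s for half a second.
sent = paced_writes(1, 65536, 0.5, lambda n: None)
print(f"{sent / 0.5 / (1024 * 1024):.2f} MiB/s")  # close to the 1 MiB/s target
```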
Early termination
Press Ctrl+C to stop the benchmark early. The benchmark will:
- Stop writing new records
- Wait for pending acks
- Verify records written so far
- Delete the temporary stream
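The graceful-shutdown sequence above can be sketched with a signal handler that flips a flag instead of exiting; the function names here are hypothetical stand-ins for the benchmark's phases:

```python
import signal

stop = False

def handle_sigint(signum, frame):
    # Ctrl+C requests a graceful stop rather than an immediate exit.
    global stop
    stop = True

signal.signal(signal.SIGINT, handle_sigint)

def run_benchmark(write_one, drain_acks, verify, delete_stream, total_records):
    written = 0
    while written < total_records and not stop:
        write_one()           # stop writing new records once interrupted
        written += 1
    drain_acks()              # wait for pending acks
    verify(written)           # verify only the records written so far
    delete_stream()           # always clean up the temporary stream
    return written

# Simulate an interrupt arriving after the third write.
calls = []
def write_one():
    calls.append(1)
    if len(calls) == 3:
        handle_sigint(signal.SIGINT, None)

written = run_benchmark(write_one, lambda: None, lambda n: None, lambda: None, 100)
print(written)  # 3
```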
Temporary stream naming
The benchmark creates a stream with the pattern:
s2://my-basin/bench/550e8400-e29b-41d4-a716-446655440000
The stream is configured with:
- Retention: 1 hour
- Delete on empty: 60 seconds after becoming empty
- Timestamping: Client-required, uncapped
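The naming scheme can be sketched as a `bench/` prefix plus a random UUID, so concurrent benchmarks never collide. (The UUID version is an assumption; the example above is consistent with a random UUID.)

```python
import uuid

def bench_stream_uri(basin: str) -> str:
    """Build a temporary stream URI like the one shown above."""
    return f"s2://{basin}/bench/{uuid.uuid4()}"

print(bench_stream_uri("my-basin"))
# e.g. s2://my-basin/bench/550e8400-e29b-41d4-a716-446655440000
```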
Common benchmark scenarios
Baseline performance
Establish baseline metrics for your basin.
Compare storage classes
Run the same benchmark against the express and standard storage classes and compare throughput and latency.
Stress test
Test maximum sustainable throughput: gradually raise --target-mibps until you see increased latency or errors.
Latency characterization
Measure latency at different throughput levels.
Benchmark best practices
Run for sufficient duration
Run benchmarks for at least 60 seconds to:
- Allow the system to reach steady state
- Collect sufficient samples for accurate statistics
- Average out transient network issues
Multiple runs
Run benchmarks multiple times and average the results.
Consider time of day
Network conditions vary. Run benchmarks:
- At different times of day
- During peak and off-peak hours
- From different geographic locations
Match production workload
Configure benchmark parameters to match your production use case.
Troubleshooting
Throughput below target
If actual throughput is significantly below target:
- Check network bandwidth
- Enable compression: s2 config set compression zstd
- Try larger record sizes (reduces per-record overhead)
High latency
If latency is higher than expected:
- Try the express storage class
- Check network latency to S2 endpoints
- Run benchmark during off-peak hours
Verification errors
If you see data integrity errors:
- Re-run the benchmark
- Check for network issues
- Contact S2 support if errors persist
Out of memory
Very high throughput tests can exhaust client memory; if this happens, lower the target throughput or record size.
Interpreting results
Good performance indicators
- Write throughput matches target ±5%
- Live read throughput close to write throughput
- Catchup read throughput significantly higher than live read
- p99 latency < 100ms for express storage
- p99 latency < 500ms for standard storage
- Max latency < 2x p99 latency
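The indicators above can be expressed as a checklist. The shape of the results dictionary here is invented for illustration, and "close to write throughput" is interpreted as within 10%:

```python
def check_results(r, storage_class="express"):
    """Evaluate a benchmark result against the good-performance indicators."""
    p99_budget_ms = 100 if storage_class == "express" else 500
    return {
        "write hits target (±5%)": abs(r["write_mibps"] - r["target_mibps"]) <= 0.05 * r["target_mibps"],
        "live read keeps up": r["live_read_mibps"] >= 0.9 * r["write_mibps"],
        "catchup faster than live": r["catchup_mibps"] > r["live_read_mibps"],
        "p99 within budget": r["p99_ms"] < p99_budget_ms,
        "max latency < 2x p99": r["max_ms"] < 2 * r["p99_ms"],
    }

results = {"target_mibps": 4, "write_mibps": 3.9, "live_read_mibps": 3.8,
           "catchup_mibps": 20.0, "p99_ms": 45, "max_ms": 80}
print(all(check_results(results).values()))  # True
```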
Performance tuning
If results don’t meet expectations:
- Enable compression (if not already): s2 config set compression zstd
- Use express storage for latency-sensitive workloads
- Optimize record size - Larger records improve throughput efficiency
- Batch writes in your application (CLI does this automatically)