Available Benchmarks
The xmtp_mls crate includes several benchmark suites:
- group_limit: Benchmarks for adding and removing members at or near the maximum group size
- crypto: Benchmarks for cryptographic functions
- identity: Benchmarks for identity operations
- groups: Benchmarks for group operations
- messages: Benchmarks for message handling
- consent: Benchmarks for consent operations
- sync_conversations: Benchmarks for conversation synchronization
Running Benchmarks
Run All Benchmarks
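With the bench feature enabled, the whole suite runs through Criterion. A sketch (the -p package selector assumes a workspace checkout):

```shell
# Run every benchmark suite in the crate
cargo bench --features bench -p xmtp_mls
```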
The simplest way to run all benchmarks is a plain cargo bench with the bench feature enabled.

Run a Specific Benchmark
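Criterion treats trailing arguments as a name filter, so a single benchmark can be picked out; the benchmark name below is a made-up placeholder:

```shell
# Run only benchmarks whose names match the filter
cargo bench --features bench -p xmtp_mls -- add_100_members
```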
A single benchmark can be selected by passing its name as a filter.

Run a Benchmark Category
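If each category is its own bench target (assumed here from the suite names), cargo's --bench flag selects it:

```shell
# Run only the crypto suite
cargo bench --features bench -p xmtp_mls --bench crypto
```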
A whole category can be run by selecting its bench target.

All benchmark commands require the bench feature flag.

Benchmark Categories
Group Limit Benchmarks
Test group performance with varying member counts.

Crypto Benchmarks
Benchmark cryptographic operations.

Identity Benchmarks
Benchmark identity-related operations.

Groups Benchmarks
Benchmark general group operations.

Messages Benchmarks
Benchmark message sending and processing.

Consent Benchmarks
Benchmark consent management.

Sync Conversations Benchmarks
Benchmark conversation synchronization.

Running Against Dev gRPC
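Assuming the DEV_GRPC=1 environment variable described in this section, the invocation is a sketch like:

```shell
# Point the benchmarks at the local development gRPC server
DEV_GRPC=1 cargo bench --features bench -p xmtp_mls
```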
To run benchmarks against the development gRPC server, set DEV_GRPC=1. Make sure the development gRPC server is running first.

Profiling with Flamegraphs
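One way to do this is the cargo-flamegraph subcommand; the tool choice and the bench target name here are assumptions, and exact flags vary by version:

```shell
# One-time install of the flamegraph subcommand
cargo install flamegraph

# Profile a single bench target; Criterion binaries usually need the
# trailing --bench to run in benchmark mode
cargo flamegraph --bench group_limit --features bench -- --bench
```

The result is a flamegraph.svg in the working directory.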
Flamegraphs visualize where benchmark time is spent, making performance bottlenecks easy to spot.

Benchmark Features
Benchmarks are gated behind the bench feature, which includes:
- Test utilities
- Progress indicators (via indicatif)
- Tracing and logging
- Criterion as the benchmark framework
- File descriptor limit management (via fdlimit)
- Performance optimization tools
Cargo.toml:
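The wiring might look roughly like this; versions, feature names, and the exact dependency list are assumptions, not the crate's real manifest:

```toml
[features]
test-utils = []
bench = ["test-utils", "dep:criterion", "dep:indicatif", "dep:fdlimit"]

[dependencies]
criterion = { version = "0.5", optional = true }
indicatif = { version = "0.17", optional = true }
fdlimit = { version = "0.3", optional = true }

[[bench]]
name = "crypto"
harness = false
required-features = ["bench"]
```

harness = false hands the benchmark binary over to Criterion instead of the default libtest harness.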
Understanding Benchmark Results
Criterion produces detailed output including:
- Time: Mean execution time with confidence intervals
- Throughput: Operations per second (where applicable)
- Change: Performance change compared to previous runs
- Outliers: Statistical outliers in the measurements
Best Practices
Close unnecessary applications
Ensure consistent results by closing resource-intensive applications before running benchmarks.
Run multiple iterations
Criterion automatically runs multiple iterations, but you can increase iterations for more stable results.
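Criterion also exposes sampling knobs on the command line (flag availability depends on the Criterion version):

```shell
# Take more samples and measure longer for steadier statistics
cargo bench --features bench -- --sample-size 200 --measurement-time 10
```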
Continuous Integration
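A common Criterion pattern for CI is baseline comparison; the baseline name is arbitrary:

```shell
# On the main branch: record a baseline
cargo bench --features bench -- --save-baseline main

# On a pull request: compare against that baseline
cargo bench --features bench -- --baseline main
```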
Benchmarks can be run in CI to track performance over time. The results help identify performance regressions before they reach production.

Troubleshooting
File Descriptor Limits
If you encounter file descriptor limit errors, the fdlimit dependency (included with the bench feature) should handle this automatically. If issues persist, manually increase your system's file descriptor limit.
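On Unix-like systems the soft limit can be raised for the current shell with ulimit:

```shell
# Show the current soft and hard open-file limits
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell session (cannot exceed the hard limit)
ulimit -n 4096
```

Persistent changes usually belong in /etc/security/limits.conf or your service manager's configuration.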
Memory Issues
For large-scale benchmarks (like group_limit with many members), ensure you have sufficient memory available. Consider running benchmarks individually rather than all at once.

Inconsistent Results
If benchmark results are inconsistent:
- Close background applications
- Disable CPU frequency scaling (if possible)
- Run benchmarks multiple times and look for patterns
- Check for system resource contention
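On Linux, the frequency-scaling point above can often be addressed with the cpupower tool (package name varies by distribution; requires root):

```shell
# Pin all cores to the performance governor while benchmarking
sudo cpupower frequency-set -g performance
```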
