Overview
VecLabs includes a comprehensive benchmark suite built with Criterion.rs, a statistical benchmarking framework for Rust. Benchmarks measure performance across HNSW operations, distance functions, and a range of vector dimensions. The published benchmark results show p50 = 1.9ms, p95 = 2.8ms, and p99 = 4.3ms for queries on 100K vectors at 384 dimensions.
Running Benchmarks
Benchmarks are located in the workspace and can be run with Cargo.
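Assuming the standard Cargo benchmark layout, the suites can be run from the workspace root (the bench target name below matches the `hnsw_bench.rs` file described on this page):

```shell
# Run all benchmark suites in the workspace
cargo bench

# Run a single suite by its bench target name
cargo bench --bench hnsw_bench

# Pass a filter through to Criterion to run only matching benchmarks
cargo bench -- query
```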
Benchmark Suites
HNSW Operations
The `hnsw_bench.rs` benchmark measures core HNSW operations:
Insert Performance
Measures insertion time for building indices of 1K and 10K vectors
Query Performance
Measures query latency with top-K values of 1, 10, and 100
Multi-dimensional Query
Tests query performance across 128, 384, 768, and 1536 dimensions
Index Size Scaling
Validates query performance with 10K vector indices
Distance Functions
The `distance_bench.rs` benchmark measures raw distance calculation performance:
- Cosine similarity: Used for most embedding models
- Euclidean distance: L2 distance for spatial data
- Dot product: Fast inner product computation
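As an illustration of the kind of kernel these benchmarks measure, here is a minimal cosine similarity over `f32` slices. This is a sketch for orientation, not VecLabs' actual (likely SIMD-optimized) implementation:

```rust
/// Cosine similarity between two equal-length f32 vectors.
/// Returns a value in [-1.0, 1.0]; panics if the lengths differ.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "vectors must have the same dimension");
    let mut dot = 0.0f32;
    let mut norm_a = 0.0f32;
    let mut norm_b = 0.0f32;
    for (x, y) in a.iter().zip(b.iter()) {
        dot += x * y;
        norm_a += x * x;
        norm_b += y * y;
    }
    dot / (norm_a.sqrt() * norm_b.sqrt())
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [1.0, 2.0, 3.0];
    // Identical vectors have similarity 1
    println!("{}", cosine_similarity(&a, &b));
}
```

At 384 dimensions this is a few hundred multiply-adds per call, which is why sub-10µs per comparison is a realistic target.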
Benchmark Configuration
Benchmarks are configured in `Cargo.toml`. The `harness = false` setting tells Cargo to use Criterion's custom harness instead of the default test harness.
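A typical configuration looks like the following (bench names taken from the suites above; the Criterion version and exact layout in VecLabs may differ):

```toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "hnsw_bench"
harness = false

[[bench]]
name = "distance_bench"
harness = false
```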
Understanding Results
Criterion provides statistical analysis of benchmark results in two forms:
- Console Output
- HTML Reports

Each result includes:
- time: Median time with confidence interval
- change: Performance change vs. the previous run
- p-value: Statistical significance of the measured change
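For reference, Criterion's console output follows this general shape (the benchmark name and numbers here are purely illustrative, not measured VecLabs results):

```text
hnsw_query/top_10       time:   [1.85 ms 1.90 ms 1.96 ms]
                        change: [-2.1% +0.4% +2.9%] (p = 0.78 > 0.05)
                        No change in performance detected.
```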
Performance Targets
VecLabs aims for the following performance characteristics:

| Operation | Target | Measured (M2, 16GB) |
|---|---|---|
| Query p50 (100K vectors, 384d) | < 2ms | 1.9ms |
| Query p95 (100K vectors, 384d) | < 3ms | 2.8ms |
| Query p99 (100K vectors, 384d) | < 5ms | 4.3ms |
| Insert 10K vectors | < 10s | ~8.2s |
| Cosine similarity (384d) | < 10µs | ~6.3µs |
All performance targets are currently met or exceeded on Apple M2 hardware.
Comparing with Other Vector Databases
VecLabs outperforms popular vector databases on both latency and cost.
vs. Pinecone s1
- 4.2x faster at p50 (1.9ms vs ~8ms)
- 5.4x faster at p95 (2.8ms vs ~15ms)
- 8.8x cheaper ($70/mo for 1M vectors)
vs. Qdrant
- 2.1x faster at p50 (1.9ms vs ~4ms)
- 3.2x faster at p95 (2.8ms vs ~9ms)
- 3.1x cheaper ($25+/mo)
vs. Weaviate
- 6.3x faster at p50 (1.9ms vs ~12ms)
- 8.9x faster at p95 (2.8ms vs ~25ms)
- 3.1x cheaper ($25+/mo)
Unique Features
- Data ownership: Encrypted with your Solana wallet
- Audit trail: On-chain Merkle roots
- No GC latency: Pure Rust, no garbage collector
Full benchmark methodology is documented in the README. Comparison benchmarks are measured on equivalent hardware configurations.
Benchmark Parameters
The benchmark suite uses the following HNSW parameters:
- M = 16: Higher values improve recall but increase memory
- ef_construction = 200: Higher values improve graph quality but slow inserts
- Cosine distance: Most common metric for text embeddings
Running Benchmarks on Different Hardware
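With Criterion's built-in baseline support, a typical cross-machine workflow looks like this (the flags are standard Criterion; the baseline name `reference` is illustrative):

```shell
# On the reference machine, record a named baseline
cargo bench -- --save-baseline reference

# On your machine, compare new runs against that baseline
cargo bench -- --baseline reference
```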
Comparing your numbers against the targets table above gives a rough sense of how your hardware stacks up.
Interpreting Benchmark Plots
Criterion generates several types of plots:
PDF Plot (Probability Density Function)
Shows the distribution of benchmark times. A narrow peak indicates consistent performance; a wide or multi-modal distribution suggests variance.
Regression Plot
Shows mean execution time across iterations. Should be relatively flat; upward trends indicate warmup or memory pressure.
Iteration Times
Raw scatter plot of all measurements. Outliers may indicate GC pauses (not applicable to Rust) or OS scheduling.
Comparison Plot
Side-by-side violin plots comparing current run vs. previous baseline.
Benchmark Best Practices
Custom Benchmarks
To add your own benchmarks:
- Create a new file in `benchmarks/` or `crates/solvec-core/benches/`
- Follow the Criterion.rs API
- Register the benchmark in `Cargo.toml` with `harness = false`
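A minimal Criterion benchmark skeleton looks like the following (the function and benchmark names are placeholders for your own code):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder for the operation you want to measure.
fn my_operation(n: u64) -> u64 {
    (0..n).sum()
}

fn bench_my_operation(c: &mut Criterion) {
    c.bench_function("my_operation/1000", |b| {
        // black_box prevents the compiler from constant-folding the input away.
        b.iter(|| my_operation(black_box(1000)))
    });
}

criterion_group!(benches, bench_my_operation);
criterion_main!(benches);
```

Criterion handles warmup, sampling, and statistical analysis; the closure passed to `b.iter` should contain only the work being measured.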
CI Benchmarks
Continuous benchmark tracking is planned for future releases. This will automatically detect performance regressions in pull requests.
Next Steps
Building from Source
Set up your development environment
Running Tests
Validate functionality with the test suite