Core Design
RotorTree is a specialized database for append-only Merkle trees, built from first principles for high-throughput privacy protocols. It implements an n-ary lean incremental Merkle tree (Lean IMT) with structural sharing and optional persistent storage.

The design is inspired by lean-imt by cedoor & vivian @ PSE, but extends it to support n-ary trees (not just binary) and adds sophisticated storage tiering.
Key Components
RotorTree consists of three main layers:

1. Tree Algorithm Layer (LeanIMT)
The core in-memory tree implementation:
- Type: `LeanIMT<H: Hasher, const N: usize, const MAX_DEPTH: usize>`
- Branching Factor: Compile-time constant `N` (must be ≥ 2)
- Max Depth: Compile-time constant `MAX_DEPTH` (must be ≥ 1)
- Features:
  - Single-threaded by default (`&mut self` methods)
  - Optional `concurrent` feature (internal `RwLock` for `&self` methods)
  - Optional `parallel` feature (Rayon-parallelized batch insertions)
  - Generic hasher (Blake3 by default)
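The const-generic shape described above can be sketched as follows. This is stand-in code, not the crate's actual definition: the `Hasher` trait, the `insert` and `len` methods, and the internal representation are all illustrative.

```rust
use core::marker::PhantomData;

// Illustrative stand-in for the tree's hasher abstraction.
pub trait Hasher {
    /// Combine a node's children into the parent node value.
    fn hash(children: &[[u8; 32]]) -> [u8; 32];
}

pub struct LeanIMT<H: Hasher, const N: usize, const MAX_DEPTH: usize> {
    leaves: Vec<[u8; 32]>,
    _hasher: PhantomData<H>,
}

impl<H: Hasher, const N: usize, const MAX_DEPTH: usize> LeanIMT<H, N, MAX_DEPTH> {
    pub fn new() -> Self {
        // The bounds from the text, checked at construction time.
        assert!(N >= 2, "branching factor N must be >= 2");
        assert!(MAX_DEPTH >= 1, "MAX_DEPTH must be >= 1");
        Self { leaves: Vec::new(), _hasher: PhantomData }
    }

    // Append-only: leaves can be pushed, never updated or deleted.
    pub fn insert(&mut self, leaf: [u8; 32]) {
        self.leaves.push(leaf);
    }

    pub fn len(&self) -> usize {
        self.leaves.len()
    }
}
```

Because `N` and `MAX_DEPTH` are compile-time constants, each tree shape is a distinct monomorphized type, so the branching factor costs nothing at runtime.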
2. Storage Layer (RotorTree)
Optional persistence via Write-Ahead Log (WAL) and checkpointing:
- WAL: Buffered writes with configurable flush policies
- Checkpointing: Materializes tree state to data files, enables WAL truncation
- Recovery: Replays WAL on startup, optionally verifies root hash
- Durability Tokens: Allows tracking when writes are fsynced
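The buffered-WAL-plus-durability-token idea can be illustrated with a minimal self-contained sketch. RotorTree's actual frame format, flush policies, and API differ; the `Wal` and `DurabilityToken` types below exist only to show the semantics: appends are buffered, and a token taken at append time becomes durable once a later fsync covers it.

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufWriter, Write};

// Token identifying one appended record (illustrative).
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct DurabilityToken(u64);

struct Wal {
    writer: BufWriter<File>,
    appended: u64, // records appended (buffered, possibly not yet on disk)
    synced: u64,   // records known to be fsynced
}

impl Wal {
    fn open(path: &std::path::Path) -> std::io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true).open(path)?;
        Ok(Self { writer: BufWriter::new(file), appended: 0, synced: 0 })
    }

    // Buffered append; fast, but not durable until the next sync.
    fn append(&mut self, leaf: &[u8; 32]) -> std::io::Result<DurabilityToken> {
        self.writer.write_all(leaf)?;
        self.appended += 1;
        Ok(DurabilityToken(self.appended))
    }

    // Flush the buffer and fsync; everything appended so far becomes durable.
    fn sync(&mut self) -> std::io::Result<()> {
        self.writer.flush()?;
        self.writer.get_ref().sync_data()?;
        self.synced = self.appended;
        Ok(())
    }

    fn is_durable(&self, token: DurabilityToken) -> bool {
        token.0 <= self.synced
    }
}
```

A flush policy then reduces to choosing when `sync` runs (per append, per batch, or on a timer), trading latency against the window of buffered-but-not-durable writes.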
3. Proof Generation Layer
Lock-free snapshots for proof generation:

- Inclusion Proofs: `NaryProof<N, MAX_DEPTH>`
- Consistency Proofs: `ConsistencyProof<N, MAX_DEPTH>`
- Proof Updates: Update old proofs to new tree states
- Serialization: Optional `serde` and `wincode` support
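Conceptually, an n-ary inclusion proof carries, per level, the node's position among its n children plus the other n − 1 sibling hashes; verification recomputes the parent hash level by level and compares the result against the root. The sketch below shows that shape only: RotorTree's `NaryProof` encoding differs, and `DefaultHasher` is a toy stand-in for a cryptographic hash such as Blake3.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy combining hash for illustration (NOT cryptographically secure).
fn hash_children(children: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    children.hash(&mut h);
    h.finish()
}

// `path` holds, per level from leaf to root: (position within the parent,
// the other n - 1 sibling hashes at that level). Simplified: a lean IMT
// also handles partially filled nodes, which this sketch ignores.
fn verify_inclusion(n: usize, leaf: u64, path: &[(usize, Vec<u64>)], root: u64) -> bool {
    let mut node = leaf;
    for (pos, siblings) in path {
        assert_eq!(siblings.len(), n - 1, "each level needs n - 1 siblings");
        let mut children = siblings.clone();
        children.insert(*pos, node); // place the running node at its slot
        node = hash_children(&children);
    }
    node == root
}
```

Proof size grows as (n − 1) hashes per level, which is the core trade-off when picking `N`: wider trees are shallower but each proof level carries more siblings.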
Architecture Diagram
Design Principles
Append-Only Operations
RotorTree only supports appending new leaves. There is no in-place update or deletion. This simplifies the design and enables efficient structural sharing.
Generic Hashing
The tree is generic over the hasher type. Blake3 is the default, but you can use any hasher that implements the `Hasher` trait.

Assumptions:

- Leaves are pre-hashed (32-byte values)
- Partial support for RFC 6962 for internal nodes
- You should constrain node values to your finite field before insertion
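A custom hasher might look like the sketch below. The `Hasher` trait shape is assumed, not the crate's exact definition, and the XOR combiner is a toy; a real deployment would use a cryptographic hash (Blake3 by default) and constrain outputs to the protocol's finite field.

```rust
// Assumed trait shape: the hasher only combines pre-hashed 32-byte child
// nodes into a parent node.
pub trait Hasher {
    fn hash(children: &[[u8; 32]]) -> [u8; 32];
}

// Toy hasher: XOR of children, byte by byte. Illustrative only -- it is
// trivially collision-prone and must not be used in a real Merkle tree.
pub struct XorHasher;

impl Hasher for XorHasher {
    fn hash(children: &[[u8; 32]]) -> [u8; 32] {
        let mut out = [0u8; 32];
        for child in children {
            for (o, c) in out.iter_mut().zip(child) {
                *o ^= c;
            }
        }
        out
    }
}
```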
No Standard Library by Default
The core algorithm is `no_std` compatible. Only the storage feature requires `std`.
Minimal Dependencies
Approximately 65 dependencies (active + transitive, excluding dev deps). The core algorithm has zero dependencies.
Performance Characteristics
Throughput
- Single-threaded: Lowest throughput variance; useful when predictability under load is a constraint
- Parallel (with the `parallel` feature): Up to ~190M leaves/sec peak throughput
- Trade-off: Peak throughput vs. predictability
Capacity Examples
With persistent storage and tiering:

- N=4, MAX_DEPTH=16: ~4.3B nullifiers in 41 GiB
- N=8, MAX_DEPTH=10: ~1B nullifiers in 37 GiB
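The leaf counts above follow directly from the tree shape: an n-ary tree of fixed depth holds at most N^MAX_DEPTH leaves (the on-disk sizes then depend on the tiering configuration).

```rust
// Maximum leaf capacity of an n-ary tree of the given depth.
fn capacity(n: u128, max_depth: u32) -> u128 {
    n.pow(max_depth)
}

// capacity(4, 16) = 4^16 = 4_294_967_296  (~4.3B)
// capacity(8, 10) = 8^10 = 1_073_741_824  (~1B)
```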
In most cases, you just need the tree in memory without crash persistence (as long as there is a bootstrapping sync mechanism). Use the single-threaded variant for better performance with low insertion rates.
Tuning Parameters
Compile-Time Constants
Defined in `src/tree.rs`:
| Constant | Default | Description |
|---|---|---|
| `CHUNK_SIZE` | 128 | Hashes per chunk for structural sharing; 128 × 32 bytes = 4 KB per chunk |
| `CHUNKS_PER_SEGMENT` | 256 | Chunks per immutable segment; controls `Arc` slab size |
| `PAR_CHUNK_SIZE` | 64 | Parents per Rayon work unit (with the `parallel` feature) |
| `MAX_FRAME_PAYLOAD` | 128 MB | Maximum WAL/checkpoint frame payload size |
Runtime Environment Variables
- `ROTORTREE_PARALLEL_THRESHOLD` (default: 1024): Minimum parent count before Rayon parallelism kicks in (with the `parallel` feature)
Next Steps
- Lean IMT Design: learn about the n-ary lean incremental Merkle tree algorithm
- Tree Structure: understand structural sharing with chunks and segments
- Storage Modes: explore in-memory vs. persistent storage with tiering
- Proof Generation: generate and verify inclusion and consistency proofs
