Type Definition
LeanIMT with persistence via write-ahead logging (WAL) and optional checkpointing. Insertions are buffered in memory and periodically flushed to disk. Checkpoints materialize the tree state to data files, enabling WAL truncation and mmap-backed reads.
Generic Parameters
- Hash function implementing the `Hasher` trait
- Branching factor (compile-time constant, must be ≥ 2)
- Maximum tree depth (compile-time constant, must be ≥ 1)
Requires the `storage` feature to be enabled.
Configuration
RotorTreeConfig
- Directory path where WAL and data files are stored
- Controls when WAL entries are fsynced to disk
- Controls when checkpoints are triggered
- Controls which tree levels are kept in memory vs mmap’d
- `verify_checkpoint`: whether to recompute the Merkle root on recovery to detect corruption beyond CRC
FlushPolicy
- `Interval`: fsync on a periodic interval (default: 10 ms)
- `Manual`: caller controls flushing via `flush()`
CheckpointPolicy
- `Manual`: checkpoints only when explicitly requested via `checkpoint()`
- `OnClose`: checkpoints automatically when the tree is closed
- Checkpoints after every N WAL entries
- `MemoryThreshold`: checkpoints when uncheckpointed memory exceeds N bytes
TieringConfig
- `pin_above_level`: levels at or above this index are kept in memory; lower levels are mmap’d from checkpoint files
Memory-mapping lower tree levels reduces resident memory for large trees while maintaining fast access to hot data at upper levels.
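Putting the pieces together, constructing a configuration might look like the following. This is a sketch: only `pin_above_level`, `verify_checkpoint`, and the `FlushPolicy`/`CheckpointPolicy` variants named in this document are confirmed; every other field name and struct shape is an assumption.

```rust
use std::time::Duration;

// Hypothetical field names (path, flush_policy, checkpoint_policy, tiering);
// the variant names and pin_above_level/verify_checkpoint come from the docs above.
let config = RotorTreeConfig {
    path: "/var/lib/myapp/rotor".into(),                     // WAL + data file directory
    flush_policy: FlushPolicy::Interval(Duration::from_millis(10)),
    checkpoint_policy: CheckpointPolicy::MemoryThreshold(100_000_000),
    tiering: TieringConfig { pin_above_level: 5 },
    verify_checkpoint: true,                                 // recompute root on recovery
};
```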
Constructor
open
Opens a RotorTree at the given path. If data files from a previous checkpoint exist, loads state from them and replays only the WAL delta; otherwise, replays the full WAL.
The hash function to use
Configuration for persistence behavior
A new or recovered tree instance
- `RotorTreeError::Storage(StorageError::FileLocked)` if another process holds the lock
- `RotorTreeError::Storage(StorageError::Io(_))` for I/O failures
- `RotorTreeError::Tree(_)` for tree operation failures during recovery
Example
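A sketch of opening a tree and handling the error variants listed above. The hasher type `PoseidonHasher` and the exact generic-parameter syntax are assumptions; the parameter order (hasher, branching factor, max depth) follows the Generic Parameters section.

```rust
// Generic parameters per "Generic Parameters": hasher, branching factor, max depth.
match RotorTree::<PoseidonHasher, 2, 32>::open(hasher, config) {
    Ok(tree) => {
        // New or recovered instance; any WAL delta has already been replayed.
    }
    Err(RotorTreeError::Storage(StorageError::FileLocked)) => {
        // Another process holds the exclusive WAL lock.
    }
    Err(e) => panic!("open failed: {e}"),
}
```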
Methods
insert
The 32-byte hash to insert
A tuple of the new root hash and a durability token for waiting on flush
Example
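A sketch of a single insertion. The `wait()` method on the durability token is an assumed name; the tuple return shape comes from the description above.

```rust
let leaf = [0x42u8; 32];
let (new_root, token) = tree.insert(leaf)?;
// The insertion path returns immediately; block only where durability matters:
token.wait()?; // assumed API: resolves once the WAL entry is fsynced
```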
The durability token allows callers to wait for the insertion to be fsynced to disk without blocking the insertion path.
insert_many
Slice of 32-byte hashes to insert
A tuple of the new root hash and a durability token
Example
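A batch-insertion sketch using the API shape described above (signatures assumed):

```rust
let leaves = [[0x01u8; 32], [0x02u8; 32], [0x03u8; 32]];
// One call covers the whole batch, returning the final root and one token:
let (new_root, token) = tree.insert_many(&leaves)?;
```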
insert_durable
The 32-byte hash to insert
The new root hash after the insertion is fsynced
Example
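For comparison with `insert`, a sketch of the blocking variant (signature assumed):

```rust
// Returns only after the WAL entry is fsynced; no durability token involved.
let new_root = tree.insert_durable([0x07u8; 32])?;
```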
root
The 32-byte root hash, or `None` if the tree is empty
size
The leaf count
depth
The tree depth
snapshot
An `Arc`-wrapped snapshot
Example
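A sketch of reading from a snapshot on another thread. That the snapshot exposes a `root()` method is an assumption; the `Arc` wrapping comes from the return description above.

```rust
let snap = tree.snapshot(); // Arc-wrapped, so moving it to a thread is cheap
let reader = std::thread::spawn(move || {
    // Consistent view: unaffected by concurrent insertions on `tree`.
    snap.root()
});
let root_at_snapshot = reader.join().unwrap();
```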
flush
Ok on success, or an error if flush fails
Example
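With `FlushPolicy::Manual`, an explicit flush might look like this (sketch):

```rust
let (_root, _token) = tree.insert([0x0Au8; 32])?;
tree.flush()?; // fsync all buffered WAL entries before proceeding
```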
With `FlushPolicy::Interval`, flush is called automatically on a background thread. Manual flush is useful with `FlushPolicy::Manual` or when you need immediate durability.
checkpoint
Ok on success, or an error if checkpointing fails
Example
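A sketch of forcing a checkpoint, for example before taking a filesystem-level backup:

```rust
tree.checkpoint()?; // materialize tree state to data files; the WAL can then be truncated
```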
Checkpointing is I/O-intensive. Automatic checkpointing (via `CheckpointPolicy`) runs on a background thread to avoid blocking insertions.
wait_for_checkpoint
Maximum time to wait
`true` if a checkpoint completed within the timeout, `false` otherwise
Example
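A sketch of bounding how long a shutdown path waits for a background checkpoint:

```rust
use std::time::Duration;

if !tree.wait_for_checkpoint(Duration::from_secs(5)) {
    eprintln!("no checkpoint completed within 5s; proceeding anyway");
}
```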
close
Closes the tree, flushing pending data (and checkpointing if configured with `CheckpointPolicy::OnClose`).
Ok on success, or an error if close operations fail
Example
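A sketch of an explicit close, so errors surface instead of being swallowed by `Drop`:

```rust
if let Err(e) = tree.close() {
    eprintln!("close failed: {e}"); // e.g. a flush or checkpoint error
}
```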
`close` is also called automatically in the `Drop` implementation, but an explicit `close` allows errors to be handled.
Error Handling
RotorTreeError
StorageError
Recovery
On `open`, `RotorTree`:
- Acquires an exclusive file lock on the WAL
- Reads the latest checkpoint metadata (if any)
- Memory-maps checkpointed tree levels
- Replays WAL entries since the checkpoint
- Optionally recomputes the root if `verify_checkpoint` is true
Recovery Example
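RotorTree's on-disk format is internal, but the replay step above can be illustrated with a self-contained, standard-library-only sketch. The one-entry-per-line framing is purely illustrative; a real WAL adds length prefixes and checksums.

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};

/// Append one entry and fsync it, like a WAL append.
fn wal_append(path: &str, entry: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{entry}")?;
    f.sync_data() // durability point: the entry survives a crash after this returns
}

/// Recovery replay: rebuild state by reading every logged entry in order.
fn wal_replay(path: &str) -> std::io::Result<Vec<String>> {
    BufReader::new(File::open(path)?).lines().collect()
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("wal_sketch.log");
    let path = path.to_str().unwrap();
    let _ = std::fs::remove_file(path); // start clean for the demo
    wal_append(path, "insert:leaf-a")?;
    wal_append(path, "insert:leaf-b")?;
    // On "reopen", replay reconstructs exactly what was appended:
    let entries = wal_replay(path)?;
    assert_eq!(entries, vec!["insert:leaf-a", "insert:leaf-b"]);
    Ok(())
}
```

Checkpointing shortens this replay: once state is materialized to data files, only entries appended after the checkpoint need to be reapplied.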
Performance Tips
Use `insert_many` for Batches
Batch insertions with `insert_many` are significantly faster than repeated `insert` calls, especially with the `parallel` feature.
Configure Flush Policy
`FlushPolicy::Interval(Duration::from_millis(100))` provides a good balance between durability and throughput. Increase the interval for higher write throughput at the cost of larger unflushed windows.
Tune Checkpoint Policy
`CheckpointPolicy::MemoryThreshold(100_000_000)` (100 MB) limits memory growth while avoiding excessive checkpoint overhead. Adjust based on your memory budget and recovery-time tolerance.
Use Tiering for Large Trees
Set `pin_above_level` to keep hot data in memory and mmap cold data. For a binary tree with billions of leaves, `pin_above_level: 5` keeps the levels at or above index 5 in RAM (the ~64 nodes nearest the root) while lower levels are mmap’d.
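A back-of-envelope check of that sizing (not part of the API): counting the root as level 0, the topmost levels of a full binary tree contain a number of nodes that is one less than a power of two.

```rust
/// Nodes contained in the top `levels` levels of a full binary tree
/// (root = level 0): 1 + 2 + ... + 2^(levels-1) = 2^levels - 1.
fn nodes_in_top_levels(levels: u32) -> u64 {
    (1u64 << levels) - 1
}

fn main() {
    assert_eq!(nodes_in_top_levels(5), 31); // five levels: 31 nodes
    assert_eq!(nodes_in_top_levels(6), 63); // six levels: 63 nodes, i.e. ~64
}
```

The pinned set stays tiny regardless of leaf count, which is why tiering works well for very large trees.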