Config Struct
The Config struct controls how the embedded database is initialized, including worker thread count and shard partitioning.
Default Configuration
By default, Config sets shard_count to the number of available hardware threads, which is a good starting point for most workloads. If the system's parallelism cannot be determined, it defaults to 4 shards.
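The detection logic described above can be sketched with the standard library alone; this is an illustrative reimplementation, not the crate's actual code:

```rust
use std::thread;

// Mirrors the documented default: one shard per available hardware
// thread, falling back to 4 when parallelism cannot be determined.
fn default_shard_count() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4)
}

fn main() {
    println!("default shard_count = {}", default_shard_count());
}
```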
Configuration Options
Shard Count
The shard_count field determines how many worker threads the database spawns. Each shard:
- Owns an independent partition of the key-space
- Runs on its own OS thread
- Processes commands without locks (shared-nothing architecture)
- Maintains its own memory and data structures
Tuning trade-offs:
- Higher shard count: Better parallelism for multi-threaded workloads, but more memory overhead
- Lower shard count: Less memory overhead, but may bottleneck under high concurrency
- Recommended: Start with the default (number of CPU cores) and tune based on your workload
Usage Examples
Using Default Configuration
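A minimal sketch of starting with the defaults. The stand-in Config and Database below exist only so the example compiles on its own; in real use they come from the crate, and the constructor name (Database::new) is an assumption:

```rust
// Stand-ins for the crate's types (assumed shapes, not the real API).
struct Config { shard_count: usize }
impl Default for Config {
    fn default() -> Self {
        let n = std::thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
        Config { shard_count: n }
    }
}
struct Database { config: Config }
impl Database { fn new(config: Config) -> Self { Database { config } } }

fn main() {
    // One shard per hardware thread, per the documented default.
    let db = Database::new(Config::default());
    println!("shards = {}", db.config.shard_count);
}
```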
Custom Shard Count
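A sketch of pinning the shard count explicitly. The single-field Config here is a stand-in; the real struct may have additional fields, in which case the remaining fields would typically be filled from Default:

```rust
// Stand-in for the crate's Config (assumed shape).
struct Config { shard_count: usize }

fn main() {
    // Pin the database to exactly 8 shards, independent of core count.
    let config = Config { shard_count: 8 };
    println!("shards = {}", config.shard_count);
}
```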
Low-Resource Configuration
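For constrained environments, a single shard minimizes thread and memory overhead, at the cost of serializing all commands through one worker (as the trade-offs above note). Config is again a stand-in:

```rust
// Stand-in for the crate's Config (assumed shape).
struct Config { shard_count: usize }

fn main() {
    // One shard: minimal threads and per-shard memory overhead,
    // but no parallelism across commands.
    let config = Config { shard_count: 1 };
    println!("shards = {}", config.shard_count);
}
```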
High-Throughput Configuration
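For throughput-oriented deployments, the guidance above suggests one shard per hardware thread to maximize parallelism, accepting the extra memory overhead. A sketch (Config is a stand-in, and the sizing choice is an assumption to tune per workload):

```rust
use std::thread;

// Stand-in for the crate's Config (assumed shape).
struct Config { shard_count: usize }

fn main() {
    // Use every hardware thread to saturate the machine.
    let shard_count = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    let config = Config { shard_count };
    println!("shards = {}", config.shard_count);
}
```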
Dynamic Configuration Based on System
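A sketch of deriving the shard count from the running system while bounding it; the 2..=16 clamp is illustrative, not a crate requirement, and Config is a stand-in:

```rust
use std::thread;

// Stand-in for the crate's Config (assumed shape).
struct Config { shard_count: usize }

// Detect parallelism at startup, then clamp to an illustrative range.
fn dynamic_config() -> Config {
    let detected = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    Config { shard_count: detected.clamp(2, 16) }
}

fn main() {
    println!("shards = {}", dynamic_config().shard_count);
}
```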
Testing Configuration
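In tests, a single shard makes behavior deterministic, since every key routes to the same worker. This is a convention rather than a crate requirement; Config is a stand-in:

```rust
// Stand-in for the crate's Config (assumed shape).
struct Config { shard_count: usize }

// One shard keeps test runs deterministic: no cross-shard interleaving.
fn test_config() -> Config {
    Config { shard_count: 1 }
}

fn main() {
    assert_eq!(test_config().shard_count, 1);
}
```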
Multi-Threaded Access
The Database struct is thread-safe and can be wrapped in an Arc for shared access across threads:
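A sketch of the sharing pattern. The Database below is a stand-in (an operation counter in place of real commands) so the example runs on its own; the point is that &self methods on an Arc-wrapped handle can be called from many threads:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-in for the crate's thread-safe Database handle.
struct Database { ops: AtomicUsize }
impl Database {
    fn new() -> Self { Database { ops: AtomicUsize::new(0) } }
    // Takes &self, so the handle can be shared across threads freely.
    fn set(&self, _key: &str, _value: &str) {
        self.ops.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    let db = Arc::new(Database::new());
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let db = Arc::clone(&db);
            thread::spawn(move || db.set(&format!("key{i}"), "value"))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("ops = {}", db.ops.load(Ordering::Relaxed));
}
```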
Architecture Notes
Shared-Nothing Design
Each shard worker thread:
- Receives commands via an mpsc channel from the calling thread
- Processes commands without locks, using Rc<RefCell<>> for internal state
- Replies via a oneshot channel back to the caller
- Owns its data completely, with no shared memory between shards
This design provides:
- Lock-free execution on the data path
- Linear scalability with CPU core count
- Predictable performance without lock contention
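The worker loop described above can be sketched with the standard library. This is an illustrative reimplementation, not the crate's code; std has no oneshot channel, so a fresh mpsc channel per request stands in for the reply path:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;
use std::sync::mpsc;
use std::thread;

// Commands carry their own reply channel (stand-in for oneshot).
enum Command {
    Set { key: String, value: String, reply: mpsc::Sender<()> },
    Get { key: String, reply: mpsc::Sender<Option<String>> },
}

fn spawn_shard() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        // Shard-local state: single-threaded, so Rc<RefCell<...>>
        // suffices and no locks are needed.
        let store = Rc::new(RefCell::new(HashMap::<String, String>::new()));
        for cmd in rx {
            match cmd {
                Command::Set { key, value, reply } => {
                    store.borrow_mut().insert(key, value);
                    let _ = reply.send(());
                }
                Command::Get { key, reply } => {
                    let _ = reply.send(store.borrow().get(&key).cloned());
                }
            }
        }
    });
    tx
}

fn main() {
    let shard = spawn_shard();
    let (ack_tx, ack_rx) = mpsc::channel();
    shard.send(Command::Set { key: "k".into(), value: "v".into(), reply: ack_tx }).unwrap();
    ack_rx.recv().unwrap();
    let (get_tx, get_rx) = mpsc::channel();
    shard.send(Command::Get { key: "k".into(), reply: get_tx }).unwrap();
    println!("{:?}", get_rx.recv().unwrap());
}
```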
Key Distribution
Keys are distributed to shards using a hash function. The hash is computed once when a command is dispatched, and the command is sent to the appropriate shard worker. This ensures:
- Deterministic routing: The same key always goes to the same shard
- Even distribution: Keys are spread uniformly across shards
- Local processing: Each shard handles its partition independently
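Hash-based routing can be sketched as below. The specific hash function is an assumption (the crate's choice may differ); what matters is that the index is computed once at dispatch and is deterministic per key:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Computes the shard index once at dispatch time; the same key
// always maps to the same shard for a given shard_count.
fn shard_for(key: &str, shard_count: usize) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) % shard_count
}

fn main() {
    // Deterministic: repeated lookups agree.
    assert_eq!(shard_for("user:42", 8), shard_for("user:42", 8));
    for key in ["a", "b", "c"] {
        println!("{key} -> shard {}", shard_for(key, 8));
    }
}
```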
Memory Partitioning
Each shard maintains its own:
- Hash tables for key-value storage
- Lists, hashes, and sets
- Vector indexes (HNSW)
- TTL expiration tracking
- Statistics and metrics
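The per-shard state above might be organized roughly as follows; the field names and types are assumptions standing in for the crate's actual structures, and the HNSW vector index is omitted:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Illustrative shard-local state (assumed layout, not the crate's).
#[derive(Default)]
struct ShardState {
    strings: HashMap<String, String>,         // key-value storage
    lists: HashMap<String, VecDeque<String>>, // list values
    sets: HashMap<String, HashSet<String>>,   // set values
    ttl_deadlines: HashMap<String, u64>,      // TTL expiration tracking
    commands_processed: u64,                  // statistics and metrics
}

fn main() {
    let mut shard = ShardState::default();
    shard.strings.insert("k".into(), "v".into());
    shard.commands_processed += 1;
    println!("keys = {}", shard.strings.len());
}
```

Because no other thread can reach this state, none of these structures need synchronization.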