Core Components
Storage Layer
High-performance OPFS worker with tiered fallback
Sync Engine
Intelligent hybrid delta and full-state synchronization
Security Manager
Zero-trust RBAC with cryptographic identity
P2P Network
WebRTC mesh with Nostr signaling
System Architecture
Data Flow
Write Operations
When you write data to GenosDB, the following happens:
- Application Layer: Call db.put(data)
- Security Check: If the Security Manager is enabled, verify the user has write permission
- Timestamp Generation: Hybrid Logical Clock assigns a causal timestamp
- Local Write: Data is written to in-memory graph with timestamp
- Operation Logging: Write operation is appended to the OpLog
- Persistence: OPFS Worker saves the graph state to disk (debounced)
- Network Broadcast: If RTC enabled, operation is signed and broadcast to peers
- Peer Reception: Remote peers verify signature, check permissions, and apply via CRDT conflict resolution
The OPFS Worker runs on a separate thread, ensuring write operations never block the UI.
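The timestamp-generation step above can be sketched with a minimal hybrid logical clock. This is an illustration of the general HLC technique, not GenosDB's actual internals; the class and field names are assumptions.

```javascript
// Minimal hybrid logical clock (HLC) sketch. A timestamp is
// (wall, logical, nodeId): physical time, a tie-breaking counter,
// and the issuing peer's id.
class HLC {
  constructor(nodeId) {
    this.nodeId = nodeId;
    this.wall = 0;    // last observed physical time (ms)
    this.logical = 0; // counter to order events within the same ms
  }
  // Called on every local write: each timestamp issued is strictly
  // greater than the previous one from this clock.
  now(physical = Date.now()) {
    if (physical > this.wall) {
      this.wall = physical;
      this.logical = 0;
    } else {
      this.logical += 1;
    }
    return { wall: this.wall, logical: this.logical, nodeId: this.nodeId };
  }
  // Called when applying a remote operation: advances the local clock
  // past the remote timestamp so causal order is preserved.
  update(remote, physical = Date.now()) {
    const wall = Math.max(this.wall, remote.wall, physical);
    if (wall === this.wall && wall === remote.wall) {
      this.logical = Math.max(this.logical, remote.logical) + 1;
    } else if (wall === this.wall) {
      this.logical += 1;
    } else if (wall === remote.wall) {
      this.logical = remote.logical + 1;
    } else {
      this.logical = 0;
    }
    this.wall = wall;
    return { wall: this.wall, logical: this.logical, nodeId: this.nodeId };
  }
}
```

Because every operation carries such a timestamp, peers can order writes causally even when their physical clocks disagree.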
Read Operations
Reads are optimized for speed:
- Query Parsing: Parse query parameters and filters
- In-Memory Lookup: Retrieve data directly from the in-memory graph
- Index Utilization: Use radix tree indexes if available for fast filtering
- Graph Traversal: For $edge queries, recursively traverse relationships
- Permission Filtering: If ACLs are enabled, filter nodes based on user permissions
- Return Results: Stream results to the callback or return promise
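The graph-traversal step can be sketched as a bounded breadth-first walk over outgoing edges. The node/edge shapes here are illustrative assumptions, not GenosDB's actual data model.

```javascript
// Illustrative $edge-style traversal: follow outgoing edges from a
// start node, up to maxHops levels deep, collecting every reachable id.
function traverse(graph, startId, maxHops) {
  const visited = new Set([startId]);
  let frontier = [startId];
  for (let hop = 0; hop < maxHops; hop++) {
    const next = [];
    for (const id of frontier) {
      for (const target of (graph.get(id)?.edges ?? [])) {
        if (!visited.has(target)) {
          visited.add(target); // skip already-seen nodes to handle cycles
          next.push(target);
        }
      }
    }
    frontier = next; // next hop starts from the newly discovered nodes
  }
  return visited;
}
```

Because the whole graph lives in memory, each hop is a map lookup rather than a disk read, which is what makes multi-hop traversal cheap.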
Synchronization Flow
Peer Connection
New peer connects to the network via Nostr relays and establishes WebRTC connections

Delta Calculation
If the joining peer's last-known timestamp is recent, the receiving peer calculates a delta from its OpLog and sends only the compressed operations; otherwise it falls back to a full-state sync
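The delta step amounts to filtering the OpLog for operations newer than the peer's last-seen timestamp. A minimal sketch, with an assumed op shape:

```javascript
// Total order over HLC timestamps: wall time first, then the logical
// counter, then nodeId as a deterministic tiebreaker.
function compareTs(a, b) {
  if (a.wall !== b.wall) return a.wall - b.wall;
  if (a.logical !== b.logical) return a.logical - b.logical;
  return a.nodeId < b.nodeId ? -1 : a.nodeId > b.nodeId ? 1 : 0;
}

// Return only the operations the requesting peer has not seen yet.
function deltaSince(opLog, lastSeen) {
  return opLog.filter((op) => compareTs(op.ts, lastSeen) > 0);
}
```

Compression and transport are omitted; the point is that a recent peer receives a handful of operations instead of the whole graph.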
Key Design Principles
1. Local-First Architecture
GenosDB prioritizes local performance:
- All reads from in-memory graph (microsecond latency)
- Writes are synchronous to memory, asynchronous to disk
- Network sync happens in the background
- Works fully offline with automatic sync on reconnection
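The "synchronous to memory, asynchronous to disk" pattern can be sketched as a write path with a coalesced flush. The scheduler parameter is an illustrative assumption; the real worker debounces OPFS writes off the main thread.

```javascript
// Every put() updates the in-memory graph immediately; persistence is
// scheduled once per burst, so many writes collapse into one flush.
function makeStore(persist, schedule = (fn) => setTimeout(fn, 0)) {
  const graph = new Map();
  let flushScheduled = false;
  return {
    graph,
    put(id, node) {
      graph.set(id, node); // synchronous in-memory write
      if (!flushScheduled) {
        flushScheduled = true;
        schedule(() => {
          flushScheduled = false;
          persist(graph); // one flush covers the whole burst of writes
        });
      }
    },
  };
}
```

This is why bursts of writes never block the UI: callers only pay for a map insert, and disk I/O happens later, in batch.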
2. Eventual Consistency
Using CRDTs and hybrid logical clocks:
- All peers converge to the same state eventually
- Conflicts are resolved deterministically using Last-Write-Wins
- No central coordinator needed
- Network partitions are handled gracefully
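Deterministic Last-Write-Wins resolution can be sketched as a pure merge function over HLC-stamped values. The value shape is an assumption for illustration.

```javascript
// LWW merge: the incoming value wins only if its HLC timestamp is
// strictly greater; ties fall through to the logical counter and then
// to nodeId, so every peer picks the same winner regardless of the
// order in which it sees the two writes.
function lwwMerge(local, incoming) {
  if (!local) return incoming;
  const a = local.ts, b = incoming.ts;
  if (b.wall !== a.wall) return b.wall > a.wall ? incoming : local;
  if (b.logical !== a.logical) return b.logical > a.logical ? incoming : local;
  return b.nodeId > a.nodeId ? incoming : local;
}
```

Determinism is the key property: no coordinator is needed because merge order does not affect the outcome.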
3. Zero-Trust Security
Every operation is verified:
- Cryptographic signatures prove authenticity
- Role-based permissions enforce authorization
- No operation is trusted by default
- Security rules are embedded in the graph itself
4. Modular Extension
Core remains lightweight:
- Optional modules (SM, ACLs, Geo, NLQ) load on demand
- Dynamic imports reduce initial bundle size
- Custom modules can extend functionality
- Clean API boundaries between layers
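The load-on-demand pattern can be sketched as a cached module registry. In practice the factories would be dynamic `import()` calls; this synchronous sketch shows only the caching shape, and the module names are illustrative.

```javascript
// Lazy module registry: features are registered as factories and
// instantiated on first use, so the core startup path stays light.
function createModuleLoader(factories) {
  const cache = new Map();
  return function load(name) {
    if (!cache.has(name)) {
      const factory = factories[name];
      if (!factory) throw new Error(`Unknown module: ${name}`);
      cache.set(name, factory()); // build once, reuse thereafter
    }
    return cache.get(name);
  };
}
```

With real dynamic imports the factory would be e.g. `() => import('./geo.js')`, which is what keeps optional modules out of the initial bundle.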
Performance Characteristics
| Operation | Latency | Throughput |
|---|---|---|
| In-memory read | ~10μs | Millions/sec |
| In-memory write | ~50μs | 50,000+/sec |
| OPFS persistence | ~5ms (debounced) | Batch writes |
| Local tab sync | ~1ms | Near-instant |
| P2P delta sync | ~50-200ms | Network-dependent |
| Full state sync | ~500ms-2s | Size-dependent |
Performance varies based on data size, network conditions, and hardware capabilities.
Scalability
Data Scale
- Nodes: Tested with 100K+ nodes per database
- Graph depth: Efficient traversal up to 10+ hops with $edge
- Operations/sec: 50,000+ sustained writes without UI blocking
Network Scale
- Traditional Mesh: Recommended for < 100 peers
- Cellular Mesh: Scales to 10,000+ peers with O(N) connection complexity
- Cross-tab sync: Unlimited tabs via BroadcastChannel
Related Pages
Worker Architecture
Deep dive into the persistence layer
Hybrid Delta Protocol
How synchronization works
Hybrid Logical Clock
Conflict resolution internals
GenosRTC Architecture
P2P networking layer