GenosDB is a distributed, peer-to-peer graph database built with a modular, layered architecture designed for performance, security, and scalability. This page provides a high-level overview of the system’s components and how they work together.

Core Components

  • Storage Layer: High-performance OPFS worker with tiered fallback
  • Sync Engine: Intelligent hybrid delta and full-state synchronization
  • Security Manager: Zero-trust RBAC with cryptographic identity
  • P2P Network: WebRTC mesh with Nostr signaling

System Architecture

Data Flow

Write Operations

When you write data to GenosDB, the following happens:
  1. Application Layer: Call db.put(data)
  2. Security Check: If Security Manager is enabled, verify user has write permission
  3. Timestamp Generation: Hybrid Logical Clock assigns a causal timestamp
  4. Local Write: Data is written to in-memory graph with timestamp
  5. Operation Logging: Write operation is appended to the OpLog
  6. Persistence: OPFS Worker saves the graph state to disk (debounced)
  7. Network Broadcast: If RTC enabled, operation is signed and broadcast to peers
  8. Peer Reception: Remote peers verify signature, check permissions, and apply via CRDT conflict resolution
```javascript
// Example write with full data flow
const db = await gdb('mydb', {
  rtc: true,
  sm: { superAdmins: ['0x...'] }
});

// This triggers the entire flow described above
const id = await db.put({
  title: 'Hello World',
  timestamp: Date.now()
});
```
The OPFS Worker runs on a separate thread, ensuring write operations never block the UI.

Read Operations

Reads are optimized for speed:
  1. Query Parsing: Parse query parameters and filters
  2. In-Memory Lookup: Retrieve data directly from the in-memory graph
  3. Index Utilization: Use radix tree indexes if available for fast filtering
  4. Graph Traversal: For $edge queries, recursively traverse relationships
  5. Permission Filtering: If ACLs enabled, filter nodes based on user permissions
  6. Return Results: Stream results to the callback or return a promise
```javascript
// Fast in-memory reads
const { result } = await db.get('node-id');

// Reactive subscriptions
const { unsubscribe } = await db.map(({ id, value }) => {
  console.log('Node updated:', id, value);
}, {
  query: { status: 'active' },
  order: 'desc',
  field: 'timestamp'
});
```

Synchronization Flow

  1. Peer Connection: A new peer connects to the network via Nostr relays and establishes WebRTC connections
  2. Sync Handshake: The peer sends its last known timestamp (globalTimestamp) to the others
  3. Delta Calculation: If the timestamp is recent, the receiving peer computes a delta from its OpLog and sends the compressed operations
  4. Fallback Detection: If the timestamp is too old or missing, a full state sync is triggered
  5. Conflict Resolution: All incoming operations pass through HLC-based Last-Write-Wins resolution
  6. State Convergence: The peer's local state converges with the network, achieving eventual consistency
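
The delta-versus-full-state decision in the flow above can be sketched as follows. The OpLog shape, the `planSync` helper, and the staleness cutoff are illustrative assumptions, not GenosDB's actual internals.

```javascript
// Sketch of the hybrid sync decision: send a delta when the peer's
// timestamp is recent, otherwise fall back to a full state sync.
// The oplog shape and cutoff value are illustrative, not GenosDB internals.
const MAX_DELTA_AGE_MS = 5 * 60 * 1000; // assumed window: 5 minutes

function planSync(peerTimestamp, opLog, now = Date.now()) {
  // Missing or stale timestamp: the peer is behind the OpLog window.
  if (peerTimestamp == null || now - peerTimestamp > MAX_DELTA_AGE_MS) {
    return { mode: 'full' };
  }
  // Otherwise send only the operations the peer has not seen yet.
  const ops = opLog.filter(op => op.ts > peerTimestamp);
  return { mode: 'delta', ops };
}
```

A peer reporting a recent timestamp receives only the newer operations; a peer with no timestamp at all triggers the full-state fallback.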

Key Design Principles

1. Local-First Architecture

GenosDB prioritizes local performance:
  • All reads from in-memory graph (microsecond latency)
  • Writes are synchronous to memory, asynchronous to disk
  • Network sync happens in the background
  • Works fully offline with automatic sync on reconnection
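
The "synchronous to memory, asynchronous to disk" split can be sketched with a debounced flush. The 50 ms window, the `persist` callback, and the Map-backed store are assumptions for illustration, not GenosDB's real persistence code.

```javascript
// Writes land in memory immediately; persistence is debounced so a burst
// of writes collapses into a single flush. Window and persist() are
// illustrative stand-ins for the OPFS worker handoff.
function createStore(persist, delayMs = 50) {
  const graph = new Map();
  let timer = null;
  return {
    put(id, value) {
      graph.set(id, value);            // synchronous, in-memory
      clearTimeout(timer);             // reset the debounce window
      timer = setTimeout(() => persist(new Map(graph)), delayMs);
    },
    get(id) { return graph.get(id); }  // reads never touch disk
  };
}
```

Because the timer resets on every write, ten rapid `put` calls cost ten memory writes but only one disk flush.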

2. Eventual Consistency

Using CRDTs and hybrid logical clocks:
  • All peers converge to the same state eventually
  • Conflicts are resolved deterministically using Last-Write-Wins
  • No central coordinator needed
  • Network partitions are handled gracefully
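
A minimal sketch of HLC-based Last-Write-Wins, assuming a timestamp of the shape `{ physical, logical, nodeId }` (the field names are illustrative; GenosDB's internal representation may differ):

```javascript
// Deterministic Last-Write-Wins over hybrid logical clock timestamps.
// Compare physical time, then the logical counter, then the node id as a
// final tiebreaker, so every peer picks the same winner.
function compareHlc(a, b) {
  if (a.physical !== b.physical) return a.physical - b.physical;
  if (a.logical !== b.logical) return a.logical - b.logical;
  return a.nodeId < b.nodeId ? -1 : a.nodeId > b.nodeId ? 1 : 0;
}

function mergeLww(current, incoming) {
  // Keep whichever write carries the greater HLC timestamp.
  return compareHlc(incoming.ts, current.ts) > 0 ? incoming : current;
}
```

Because the comparison is total and identical on every peer, merging the same two writes in either order yields the same winner, which is what makes convergence possible without a coordinator.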

3. Zero-Trust Security

Every operation is verified:
  • Cryptographic signatures prove authenticity
  • Role-based permissions enforce authorization
  • No operation is trusted by default
  • Security rules are embedded in the graph itself
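
The "verify before apply" rule can be sketched with a toy HMAC check. GenosDB uses per-identity public-key signatures; the shared-secret HMAC below is only a stand-in to keep the sketch self-contained and runnable.

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Toy signed-operation check: any op whose signature does not verify is
// dropped before it touches the graph. Real GenosDB verifies public-key
// signatures per identity; HMAC here is an illustrative substitute.
function sign(op, secret) {
  return createHmac('sha256', secret).update(JSON.stringify(op)).digest('hex');
}

function applyIfValid(graph, op, sig, secret) {
  const expected = Buffer.from(sign(op, secret), 'hex');
  const given = Buffer.from(sig, 'hex');
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
    return false; // untrusted by default: reject silently
  }
  graph.set(op.id, op.value);
  return true;
}
```

Note the constant-time comparison: even in a sketch, signature checks should not leak timing information.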

4. Modular Extension

Core remains lightweight:
  • Optional modules (SM, ACLs, Geo, NLQ) load on demand
  • Dynamic imports reduce initial bundle size
  • Custom modules can extend functionality
  • Clean API boundaries between layers
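
The load-on-demand pattern behind the optional modules can be sketched with a cached dynamic loader; the module names and loader functions here are hypothetical, not GenosDB's actual module registry.

```javascript
// On-demand module loading: each optional feature is fetched at most once,
// the first time something asks for it. In a real bundle the loader would
// be e.g. () => import('./modules/geo.js'); here it is any async factory.
const moduleCache = new Map();

async function loadModule(name, loader) {
  if (!moduleCache.has(name)) {
    // Cache the promise (not the value) so concurrent callers share one load.
    moduleCache.set(name, loader());
  }
  return moduleCache.get(name);
}
```

Caching the promise rather than the resolved value means two simultaneous requests for the same module still trigger only one `import()`.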

Performance Characteristics

| Operation        | Latency           | Throughput        |
|------------------|-------------------|-------------------|
| In-memory read   | ~10 μs            | Millions/sec      |
| In-memory write  | ~50 μs            | 50,000+/sec       |
| OPFS persistence | ~5 ms (debounced) | Batch writes      |
| Local tab sync   | ~1 ms             | Near-instant      |
| P2P delta sync   | ~50–200 ms        | Network-dependent |
| Full state sync  | ~500 ms–2 s       | Size-dependent    |
Performance varies based on data size, network conditions, and hardware capabilities.
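
The in-memory figures above will differ per machine; a quick way to measure raw read throughput on your own hardware is a Map-backed micro-benchmark (the store shape is illustrative, and the numbers it prints are machine-specific, not guarantees):

```javascript
// Rough in-memory read benchmark over a Map-backed graph stand-in.
const graph = new Map();
for (let i = 0; i < 100_000; i++) graph.set(`node-${i}`, { i });

const start = performance.now();
let hits = 0;
for (let i = 0; i < 1_000_000; i++) {
  if (graph.get(`node-${i % 100_000}`)) hits++;
}
const elapsedMs = performance.now() - start;
const opsPerSec = Math.round(hits / (elapsedMs / 1000));
console.log(`${hits} reads in ${elapsedMs.toFixed(1)} ms (~${opsPerSec} ops/sec)`);
```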

Scalability

Data Scale

  • Nodes: Tested with 100K+ nodes per database
  • Graph depth: Efficient traversal up to 10+ hops with $edge
  • Operations/sec: 50,000+ sustained writes without UI blocking

Network Scale

  • Traditional Mesh: Recommended for < 100 peers
  • Cellular Mesh: Scales to 10,000+ peers with O(N) connection complexity
  • Cross-tab sync: Unlimited tabs via BroadcastChannel
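
Cross-tab sync rides on the standard BroadcastChannel API; a minimal sketch, with the channel name and message shape as illustrative assumptions (two handles on one named channel stand in for two browser tabs):

```javascript
// BroadcastChannel is a web standard, also available in Node 18+.
// A write announced in one "tab" is delivered to every other listener
// on the same named channel.
const tabA = new BroadcastChannel('gdb-mydb');
const tabB = new BroadcastChannel('gdb-mydb');

const received = new Promise(resolve => {
  tabB.onmessage = event => resolve(event.data);
});

tabA.postMessage({ op: 'put', id: 'node-1', value: { title: 'Hello' } });

const msg = await received;
console.log('tab B saw:', msg.op, msg.id);
tabA.close();
tabB.close();
```

Because delivery is local to the browser (no network hop), this path is what makes same-device tab sync near-instant in the table above.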

Learn More

  • Worker Architecture: Deep dive into the persistence layer
  • Hybrid Delta Protocol: How synchronization works
  • Hybrid Logical Clock: Conflict resolution internals
  • GenosRTC Architecture: P2P networking layer
