
Overview

In a distributed peer-to-peer system, multiple users can modify the same data simultaneously. GenosDB’s conflict resolution system ensures data consistency and integrity across all peers using a Last-Write-Wins (LWW) strategy enhanced by Hybrid Logical Clocks (HLC).
Conflict resolution happens automatically and transparently. Developers don’t need to manually handle conflicts in most cases.

The Challenge of Distributed Conflicts

Consider this scenario:
// Peer A (offline) updates a user profile
await db.put({ name: 'Alice', status: 'busy' }, 'user123')

// Peer B (offline) updates the same profile
await db.put({ name: 'Alice', status: 'available' }, 'user123')

// Both peers come online and sync - which version wins?
Without a central server to serialize writes, we need a deterministic way to resolve such conflicts.

Hybrid Logical Clock (HLC)

The foundation of GenosDB’s conflict resolution is the Hybrid Logical Clock, which combines physical time with logical counters to create causally-ordered timestamps.

HLC Components

Physical Component

Wall-clock time from the system's local clock, keeping timestamps aligned with real-world time. Used for primary ordering of events.

Logical Component

Sequential counter that acts as a tie-breaker for events occurring within the same millisecond. Preserves "happens-before" causality.

HLC Structure

{
  physical: 1709587200000,  // Unix timestamp in milliseconds
  logical: 5                // Logical counter for same-millisecond events
}

How HLC Timestamps Work

Local Timestamp Generation

When a local operation occurs (e.g., put, remove, link):
1. Ensure Monotonicity

The physical component is set to the maximum of:
  • Current system time
  • Previous timestamp’s physical time
This ensures time never moves backward, even if the system clock is adjusted.
2. Increment Logical Counter

If the physical time matches the previous timestamp, increment the logical counter. Otherwise, reset the logical counter to 0.

3. Assign to Operation

The new HLC timestamp is assigned to the operation and stored with the node.
// Example: Rapid successive operations
await db.put({ value: 1 }, 'node1')  // { physical: 1000, logical: 0 }
await db.put({ value: 2 }, 'node2')  // { physical: 1000, logical: 1 }
await db.put({ value: 3 }, 'node3')  // { physical: 1000, logical: 2 }
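The generation steps above can be condensed into a small clock class. This is a minimal sketch; the `HLC` class and its `now()` method are illustrative names, not GenosDB internals:

```javascript
// Minimal sketch of local HLC timestamp generation.
// Class and method names are illustrative, not GenosDB internals.
class HLC {
  constructor() {
    this.physical = 0  // last issued physical time (ms)
    this.logical = 0   // tie-breaker within a millisecond
  }

  // Issue a timestamp for a local operation (put, remove, link).
  now() {
    const wall = Date.now()
    if (wall > this.physical) {
      // Wall clock moved forward: adopt it and reset the counter.
      this.physical = wall
      this.logical = 0
    } else {
      // Same millisecond, or the system clock was set backwards:
      // keep the old physical time and bump the counter so that
      // issued timestamps never move backward.
      this.logical += 1
    }
    return { physical: this.physical, logical: this.logical }
  }
}
```

Taking the maximum of the wall clock and the previous physical time is what makes the sequence monotonic even across clock adjustments.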

Clock Synchronization with Remote Events

When a node receives data from a peer:
1. Inspect Remote Timestamp

Extract the HLC timestamp from the incoming operation.
2. Advance Physical Component

Update local clock’s physical time to the maximum of:
  • Current local time
  • Remote timestamp’s physical time
3. Update Logical Component

Adjust the logical counter to ensure the next local timestamp will be causally after the remote event.
This synchronization protocol propagates causal information through the network, ensuring all peers converge toward a consistent view of event ordering.
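The receive-side update can be sketched as a standalone function. All names and the mutable `clock` shape are assumptions for illustration, not GenosDB's internal API:

```javascript
// Sketch of the receive-side HLC update. `clock` is this peer's
// mutable HLC state; names are illustrative, not GenosDB internals.
function receiveTimestamp(clock, remote, wallNow = Date.now()) {
  const maxPhysical = Math.max(wallNow, clock.physical, remote.physical)
  if (maxPhysical === clock.physical && maxPhysical === remote.physical) {
    // Local and remote share the millisecond: step past both counters.
    clock.logical = Math.max(clock.logical, remote.logical) + 1
  } else if (maxPhysical === clock.physical) {
    // Local physical time is ahead: just bump our counter.
    clock.logical += 1
  } else if (maxPhysical === remote.physical) {
    // Remote is ahead: adopt its physical time, step past its counter.
    clock.logical = remote.logical + 1
  } else {
    // Local wall clock is strictly ahead of both: fresh millisecond.
    clock.logical = 0
  }
  clock.physical = maxPhysical
}
```

After this update, any timestamp the local peer issues next is guaranteed to compare greater than the remote event's timestamp.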

Last-Write-Wins (LWW) Resolution

When concurrent updates to the same node are detected, GenosDB uses LWW with HLC timestamps to resolve the conflict deterministically.

Timestamp Comparison Logic

Two HLC timestamps are compared using lexicographical ordering:
1. Compare Physical Components

The timestamp with the greater physical value wins.
timestampA = { physical: 1000, logical: 5 }
timestampB = { physical: 1001, logical: 0 }

// timestampB wins (1001 > 1000)
2. Compare Logical Components (if physical tied)

If physical components are equal, the timestamp with the greater logical value wins.
timestampA = { physical: 1000, logical: 5 }
timestampB = { physical: 1000, logical: 3 }

// timestampA wins (5 > 3)
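The two rules above amount to tuple (lexicographic) comparison, which fits in a few lines. `compareHLC` is a hypothetical helper for illustration, not a GenosDB export:

```javascript
// Lexicographic HLC comparison: positive if a is later than b,
// negative if earlier, zero if identical. Hypothetical helper.
function compareHLC(a, b) {
  if (a.physical !== b.physical) return a.physical - b.physical
  return a.logical - b.logical
}
```

For example, `compareHLC({ physical: 1001, logical: 0 }, { physical: 1000, logical: 5 })` is positive, so the first timestamp wins.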

Resolution Process

When an incoming update targets existing local data:
1. Validate Timestamp

Check for unreasonably future timestamps (clock skew protection).
2. Compare Timestamps

Use lexicographical comparison (plain `>` on the timestamp objects would not work in JavaScript, so the two components are compared explicitly):
const incomingWins =
  incoming.timestamp.physical > local.timestamp.physical ||
  (incoming.timestamp.physical === local.timestamp.physical &&
   incoming.timestamp.logical > local.timestamp.logical)

if (incomingWins) {
  // Incoming wins - accept update
} else {
  // Local wins - discard incoming
}
3. Apply or Discard

  • Incoming wins: Overwrite local data, sync local clock
  • Local wins: Discard incoming update, keep local data
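Putting validation, comparison, and application together, the whole step can be sketched in one function. The function and node shapes (`value`, `timestamp`) are illustrative, not GenosDB's internal implementation:

```javascript
// Sketch of LWW resolution for one incoming update.
// Names and shapes are illustrative, not GenosDB internals.
const MAX_FUTURE_DRIFT = 2 * 60 * 60 * 1000  // 2 hours in ms

function resolve(local, incoming, wallNow = Date.now()) {
  // 1. Clock skew protection: cap timestamps too far in the future.
  const ts = { ...incoming.timestamp }
  if (ts.physical > wallNow + MAX_FUTURE_DRIFT) {
    ts.physical = wallNow + MAX_FUTURE_DRIFT
  }
  // No local copy yet: accept unconditionally.
  if (!local) return { ...incoming, timestamp: ts }
  // 2. Lexicographic comparison: the later HLC timestamp wins.
  const incomingWins =
    ts.physical > local.timestamp.physical ||
    (ts.physical === local.timestamp.physical &&
     ts.logical > local.timestamp.logical)
  // 3. Apply (overwrite) or discard (keep local).
  return incomingWins ? { ...incoming, timestamp: ts } : local
}
```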

Clock Skew Protection

Misconfigured system clocks can disrupt distributed ordering. GenosDB implements safeguards:

Future Drift Limit

Timestamps unreasonably far in the future are capped at a maximum acceptable drift (default: 2 hours):
const MAX_FUTURE_DRIFT = 2 * 60 * 60 * 1000  // 2 hours in milliseconds

if (incoming.physical > currentTime + MAX_FUTURE_DRIFT) {
  // Cap the physical component
  incoming.physical = currentTime + MAX_FUTURE_DRIFT
  // Preserve logical component for ordering
}
This prevents a single misconfigured peer from corrupting the temporal ordering of the entire system.

Conflict Resolution Examples

Example 1: Concurrent Updates from Different Peers

// Initial state: { name: 'Alice', age: 30 }
// Node ID: 'user123'
// Timestamp: { physical: 1000, logical: 0 }

// === Peer A (offline) ===
await db.put({ name: 'Alice', age: 31 }, 'user123')
// Local timestamp: { physical: 2000, logical: 0 }

// === Peer B (offline) ===
await db.put({ name: 'Alice', age: 32 }, 'user123')
// Local timestamp: { physical: 1500, logical: 0 }

// === Peers come online and sync ===
// Peer A's update wins (2000 > 1500)
// Final state on both peers: { name: 'Alice', age: 31 }

Example 2: Rapid Same-Millisecond Updates

// Single peer making rapid updates

await db.put({ count: 1 }, 'counter')
// Timestamp: { physical: 5000, logical: 0 }

await db.put({ count: 2 }, 'counter')
// Timestamp: { physical: 5000, logical: 1 }

await db.put({ count: 3 }, 'counter')
// Timestamp: { physical: 5000, logical: 2 }

// Logical counter ensures correct ordering
// Final state: { count: 3 }

Example 3: Clock Skew Scenario

// Peer A has correct time: 10:00:00 AM
// Peer B has clock set to: 10:00:00 PM (12 hours ahead)

// Peer B creates update
await db.put({ status: 'future' }, 'node1')
// Timestamp: { physical: 1709630400000, logical: 0 }  // 10 PM

// Peer A receives the update
// GenosDB caps the timestamp to MAX_FUTURE_DRIFT
// Adjusted timestamp: { physical: 1709594400000, logical: 0 }  // 12 PM (10 AM + 2hr drift)

// System remains stable despite clock skew

Integration with P2P Sync

Conflict resolution is seamlessly integrated into the synchronization pipeline:
1. Receive Operation

Peer receives a put or link operation from the network.
2. Extract Timestamp

Extract the HLC timestamp from the operation.
3. Resolve Conflict

Compare with local node’s timestamp (if exists).
4. Apply or Discard

If incoming wins, update the graph and sync the local HLC.
5. Maintain Causality

Clock synchronization ensures future local operations are causally after this event.
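The five steps can be sketched end to end. `handleRemoteOp`, the `graph` map, the mutable `clock` object, and the operation shape are all assumptions for illustration, not GenosDB internals:

```javascript
// Sketch of the sync pipeline for one incoming operation.
// All names and shapes are illustrative, not GenosDB internals.
function handleRemoteOp(graph, clock, op, wallNow = Date.now()) {
  const remote = op.timestamp          // 2. extract the HLC timestamp
  const local = graph.get(op.id)       // 3. compare with local node, if any
  const incomingWins =
    !local ||
    remote.physical > local.timestamp.physical ||
    (remote.physical === local.timestamp.physical &&
     remote.logical > local.timestamp.logical)
  if (incomingWins) {
    // 4. apply: overwrite the graph entry
    graph.set(op.id, { value: op.value, timestamp: remote })
    // 5. maintain causality: advance the local HLC past the remote event
    const maxP = Math.max(clock.physical, remote.physical, wallNow)
    if (maxP === clock.physical && maxP === remote.physical) {
      clock.logical = Math.max(clock.logical, remote.logical) + 1
    } else if (maxP === clock.physical) {
      clock.logical += 1
    } else if (maxP === remote.physical) {
      clock.logical = remote.logical + 1
    } else {
      clock.logical = 0
    }
    clock.physical = maxP
  }
  return incomingWins
}
```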

Custom Conflict Resolution

For advanced use cases, you can provide a custom conflict resolver:
const db = await gdb('my-app', {
  rtc: true,
  resolveConflict: (localNode, remoteNode) => {
    // Lexicographic HLC comparison: timestamps are { physical, logical }
    // objects, so they must be compared component by component
    const remoteLater =
      remoteNode.timestamp.physical > localNode.timestamp.physical ||
      (remoteNode.timestamp.physical === localNode.timestamp.physical &&
       remoteNode.timestamp.logical > localNode.timestamp.logical)
    // Custom logic: merge values instead of replacing
    return {
      ...localNode.value,
      ...remoteNode.value,
      // Keep the later timestamp
      _timestamp: remoteLater ? remoteNode.timestamp : localNode.timestamp
    }
  }
})
Custom conflict resolvers must maintain commutativity (order-independent results) and idempotency (same result when applied multiple times) to ensure eventual consistency.
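One way to sanity-check commutativity is to run a resolver with its arguments in both orders and compare the results. Both `isCommutative` and `preferLater` below are illustrative sketches, not part of the GenosDB API; note that a "keep the later node" resolver is only order-independent when the two timestamps differ:

```javascript
// Sketch: check that a resolver gives the same result regardless of
// argument order, on a given pair of sample nodes. Illustrative only.
function isCommutative(resolver, a, b) {
  return JSON.stringify(resolver(a, b)) === JSON.stringify(resolver(b, a))
}

// A resolver that deterministically keeps the HLC-later node.
// Commutative whenever the two timestamps are not identical.
function preferLater(localNode, remoteNode) {
  const remoteLater =
    remoteNode.timestamp.physical > localNode.timestamp.physical ||
    (remoteNode.timestamp.physical === localNode.timestamp.physical &&
     remoteNode.timestamp.logical > localNode.timestamp.logical)
  const winner = remoteLater ? remoteNode : localNode
  return { ...winner.value, _timestamp: winner.timestamp }
}
```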

Eventual Consistency Guarantees

GenosDB provides strong eventual consistency:

Determinism

All peers resolve conflicts identically, guaranteeing convergence to the same state.

Causality Preservation

If operation A happens before operation B, A’s effects are visible before B’s at every peer.

Progress

The system always makes forward progress, even during network partitions.

Convergence

Once all operations have propagated, all peers have identical state.

Limitations and Trade-offs

Last-Write-Wins Trade-offs

Data Loss: LWW can discard concurrent updates. If two users edit different fields simultaneously, one user’s changes may be lost.
// Initial: { name: 'Alice', age: 30, city: 'NYC' }

// User A updates age (timestamp: 1000)
await db.put({ name: 'Alice', age: 31, city: 'NYC' }, 'user1')

// User B updates city (timestamp: 999)
await db.put({ name: 'Alice', age: 30, city: 'SF' }, 'user1')

// Result: User A's entire update wins
// Final: { name: 'Alice', age: 31, city: 'NYC' }
// User B's city change is lost!
Solution: Design your data model with granular nodes:
// Better: Separate nodes for each attribute
await db.put({ age: 31 }, 'user1:age')
await db.put({ city: 'SF' }, 'user1:city')

// Now both updates can coexist

Clock Dependency

HLC relies on reasonably synchronized physical clocks:
  • Best Case: Clocks within seconds of each other
  • Acceptable: Clocks within the drift limit (2 hours)
  • Problematic: Clocks beyond drift limit may cause unexpected ordering
Most modern devices sync with NTP servers, making clock skew rare in practice.

Best Practices

1. Granular Data Modeling

// ❌ Avoid: Large objects with multiple fields
await db.put({
  name: 'Alice',
  age: 30,
  email: '[email protected]',
  bio: 'Software engineer',
  preferences: { theme: 'dark', lang: 'en' }
}, 'user1')

// ✅ Better: Split into focused nodes
await db.put({ name: 'Alice' }, 'user1:profile:name')
await db.put({ age: 30 }, 'user1:profile:age')
await db.put({ email: '[email protected]' }, 'user1:profile:email')
await db.put({ bio: 'Software engineer' }, 'user1:profile:bio')
await db.put({ theme: 'dark', lang: 'en' }, 'user1:preferences')

2. Understand LWW Semantics

LWW is ideal for:
  • User profiles
  • Configuration settings
  • Status updates
  • Non-critical collaborative data
Avoid LWW for:
  • Financial transactions (use append-only logs)
  • Inventory counts (use CRDTs like counters)
  • Collaborative text editing (use OT or CRDTs)

3. Design for Idempotency

// ✅ Idempotent: Same result if applied multiple times
await db.put({ status: 'active' }, 'user1:status')

// ❌ Non-idempotent: Result depends on order and frequency
await db.put({ count: currentCount + 1 }, 'counter')

4. Monitor Clock Skew

While GenosDB handles skew gracefully, monitoring can help:
// Check a node's timestamp
const { result } = await db.get('node1')
const nodeTime = result.timestamp.physical
const localTime = Date.now()

if (Math.abs(nodeTime - localTime) > 60000) {
  console.warn('Significant clock skew detected:', nodeTime - localTime, 'ms')
}

P2P Sync

Understand how conflicts are detected during synchronization

CRUD Operations

Learn how put and remove operations generate timestamps

Real-Time Subscriptions

See how conflict resolution affects live query results

Todo App Example

See conflict resolution in action with a practical example
