Durable Objects storage in workerd provides transactional, persistent key-value storage backed by either an RPC interface or SQLite. Understanding the internals helps you optimize performance and understand consistency guarantees.

Storage backends

workerd supports two storage implementations:

ActorCache (RPC-backed)

An in-memory write-back cache layer over the ActorStorage::Stage RPC interface:
  • LRU eviction policy
  • Write batching and coalescing
  • Optimistic local caching
  • Maximum operation size: 16 MiB
Location: src/workerd/io/actor-cache.{h,c++}

ActorSqlite (SQLite-backed)

A SQLite-based implementation providing synchronous access:
  • Direct SQLite database operations
  • Automatic transaction management
  • Implicit batching of operations without awaits
  • Full SQL query support through the DO SQL API
Location: src/workerd/io/actor-sqlite.{h,c++}
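The SQLite backend is what exposes the DO SQL API to applications. A minimal sketch of using it (assuming the standard `ctx.storage.sql.exec()` interface; the `counters` table and the `bumpCounter` helper are hypothetical examples):

```javascript
// Sketch: using the SQL API backed by ActorSqlite. exec() takes a query
// plus bound parameters and returns a cursor; one() returns the single
// result row, throwing if there isn't exactly one.
function bumpCounter(sql, name) {
  sql.exec(
    "INSERT INTO counters (name, value) VALUES (?, 1) " +
      "ON CONFLICT(name) DO UPDATE SET value = value + 1",
    name
  );
  return sql.exec("SELECT value FROM counters WHERE name = ?", name).one().value;
}

export class CounterDO {
  constructor(ctx, env) {
    this.sql = ctx.storage.sql;
    // CREATE TABLE IF NOT EXISTS is idempotent, so running it on every
    // construction is safe.
    this.sql.exec(
      "CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INTEGER)"
    );
  }

  async fetch(request) {
    return new Response(String(bumpCounter(this.sql, "hits")));
  }
}
```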

Cache architecture

Entry states

Each cache entry has two status dimensions.

Value status:
  • PRESENT: Entry exists with a value
  • ABSENT: Entry is known not to exist
  • UNKNOWN: Entry state is not cached
Sync status:
  • CLEAN: Value matches persistent storage
  • DIRTY: Local modifications not yet flushed
  • NOT_IN_CACHE: Entry is no longer cached
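The transitions between these states can be modeled roughly as follows. This is an illustrative sketch of the state machine described above, not actual workerd code:

```javascript
// Illustrative model of ActorCache entry state transitions.
function makeEntry() {
  return { value: undefined, valueStatus: "UNKNOWN", syncStatus: "NOT_IN_CACHE" };
}

// A read that went to persistent storage: the entry is now known and clean.
function onReadFromStorage(entry, value) {
  entry.value = value;
  entry.valueStatus = value === undefined ? "ABSENT" : "PRESENT";
  entry.syncStatus = "CLEAN"; // matches persistent storage
  return entry;
}

// A local put(): the entry is present but not yet flushed.
function onLocalPut(entry, value) {
  entry.value = value;
  entry.valueStatus = "PRESENT";
  entry.syncStatus = "DIRTY"; // local modification pending flush
  return entry;
}

// A successful flush: dirty entries become clean.
function onFlush(entry) {
  if (entry.syncStatus === "DIRTY") entry.syncStatus = "CLEAN";
  return entry;
}
```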

Memory management

The cache uses a shared LRU across all Durable Objects in a process:
// Default limits (configurable)
softLimit = 32 MiB        // Target cache size
hardLimit = 64 MiB        // Maximum before eviction
dirtyListByteLimit = 8 MiB // Backpressure threshold
When the cache exceeds limits:
  1. Soft limit exceeded: Clean entries are evicted (LRU)
  2. Hard limit exceeded: Throws an exception and resets the isolate
  3. Dirty limit exceeded: Applies backpressure until writes flush
Exceeding the hard limit terminates all Durable Objects in the isolate. Monitor your cache usage carefully when performing large read or write operations.
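One way to keep a bulk scan from pushing the cache toward its limits is the `noCache` read option. A sketch, assuming values are stored as raw bytes (the `sumValueSizes` helper is a hypothetical example):

```javascript
// Sketch: bulk-read many keys without retaining them in the cache.
// A multi-key get() returns a Map; noCache asks the cache layer not to
// keep the fetched entries after this read, so a large scan doesn't
// evict hotter data or approach the hard limit.
async function sumValueSizes(storage, keys) {
  const values = await storage.get(keys, { noCache: true });
  let total = 0;
  for (const value of values.values()) {
    total += value.byteLength;
  }
  return total;
}
```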

Write batching

workerd automatically batches writes to improve performance:

Flush triggers

Writes are flushed when:
  • The output gate needs to confirm a response
  • 2 seconds elapse since the first write in the batch
  • The dirty list exceeds the byte limit
  • The application explicitly calls storage.sync()
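The last trigger is useful when a side effect must only happen after a write is durable. A sketch (the `recordThenNotify` helper and its key name are hypothetical):

```javascript
// Sketch: force a flush before performing an external side effect.
// storage.sync() resolves once all pending writes have been confirmed
// by the storage backend.
async function recordThenNotify(storage, notify) {
  await storage.put("lastEvent", Date.now());
  await storage.sync();        // flush now instead of waiting for a trigger
  await notify("event saved"); // safe: the write is already durable
}
```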

Batching example

// These writes are automatically coalesced into a single batch
await storage.put("key1", "value1");
await storage.put("key2", "value2");
await storage.put("key3", "value3");
// Single flush to disk

// These writes are NOT batched: the unrelated await in between
// yields to the event loop, giving the flush a chance to run
await storage.put("key1", "value1");
await someOtherOperation();
await storage.put("key2", "value2");
// Two separate flushes

SQLite backend details

Automatic transactions

The SQLite backend automatically wraps operations without intervening awaits into a single transaction:
// Automatic implicit transaction
export default {
  async fetch(request, env, ctx) {
    // All these operations execute in one transaction
    await state.storage.put("a", 1);
    await state.storage.put("b", 2);
    await state.storage.put("c", 3);
    // Commit happens automatically when you await
    
    return new Response("OK");
  }
};

Explicit transactions

For complex atomic operations:
await state.storage.transaction(async (txn) => {
  const current = (await txn.get("counter")) ?? 0;
  await txn.put("counter", current + 1);
  await txn.put("lastUpdate", Date.now());
  // All or nothing: the writes commit atomically
});
Explicit transactions are recommended for operations that read and then write based on the read value, ensuring atomicity.
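Throwing inside the callback aborts the transaction; the transaction object also exposes `rollback()` to abandon it without raising an error. A sketch of a conditional read-then-write (the `withdraw` helper and its key names are hypothetical):

```javascript
// Sketch: explicit transaction with a conditional rollback. Either both
// the balance check and the debit take effect, or neither does.
async function withdraw(storage, amount) {
  let committed = false;
  await storage.transaction(async (txn) => {
    const balance = (await txn.get("balance")) ?? 0;
    if (balance < amount) {
      txn.rollback(); // abandon all writes made in this transaction
      return;
    }
    await txn.put("balance", balance - amount);
    committed = true;
  });
  return committed;
}
```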

Storage options

Read options

const value = await storage.get("key", {
  noCache: true  // Don't cache the result
});

Write options

await storage.put("key", "value", {
  allowUnconfirmed: true,  // Don't wait for disk confirmation
  noCache: true            // Evict after writing
});
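`allowUnconfirmed` suits writes whose loss in a rare crash is acceptable, such as metrics. A sketch (the `countHit` helper and key name are hypothetical):

```javascript
// Sketch: fire-and-forget counter update. With allowUnconfirmed, the
// eventual response is not held by the output gate waiting for this
// write to be confirmed on disk.
async function countHit(storage) {
  const hits = ((await storage.get("hits")) ?? 0) + 1;
  await storage.put("hits", hits, { allowUnconfirmed: true });
  return hits;
}
```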

Consistency guarantees

Input gate

Controls concurrent request handling:
  • Ensures requests see a consistent view of state
  • Can be broken by critical section failures
  • Location: src/workerd/io/io-gate.{h,c++}

Output gate

Blocks outgoing responses:
  • Responses wait for pending writes to flush
  • Prevents returning results based on unconfirmed writes
  • allowUnconfirmed option bypasses this wait

Example: Consistency in action

export class Example {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    // The write lands in the cache immediately
    await this.state.storage.put("key", "value");

    // The read sees the new value (served from cache)
    const value = await this.state.storage.get("key");

    // The response waits for the write to flush to disk (output gate)
    return new Response(value);
  }
}

Performance considerations

Optimize read patterns

// Bad: Reads keys one at a time
for (const key of keys) {
  const value = await storage.get(key);
  // process value
}

// Good: Batch read operation
const values = await storage.get(keys);
for (const [key, value] of values) {
  // process value
}

Optimize write patterns

// Bad: Multiple small flushes
for (const [key, value] of entries) {
  await storage.put(key, value);
  await someAsyncOperation(); // Causes flush
}

// Good: Batch writes before async operations
const pairs = entries.map(([k, v]) => ({ key: k, value: v }));
await storage.put(pairs);
await someAsyncOperation();

Use list() efficiently

// Use limits to control memory usage
const results = await storage.list({
  start: "prefix:",
  limit: 1000  // Prevent loading too much into cache
});
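To walk a key range larger than one page, the `startAfter` option can serve as a cursor. A sketch, assuming the documented `list()` options (`prefix`, `startAfter`, `limit`); the `listAll` helper is a hypothetical example:

```javascript
// Sketch: paginate through a key range so only `pageSize` entries are
// held in the cache at a time. list() returns a Map in key order, so
// the last key of each page is the cursor for the next.
async function listAll(storage, prefix, pageSize = 1000) {
  const all = new Map();
  let startAfter;
  for (;;) {
    const page = await storage.list({ prefix, startAfter, limit: pageSize });
    for (const [key, value] of page) all.set(key, value);
    if (page.size < pageSize) break; // short page means we reached the end
    startAfter = [...page.keys()].pop();
  }
  return all;
}
```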

Limits and constraints

| Limit                  | Value   | Notes                               |
|------------------------|---------|-------------------------------------|
| Maximum key size       | 2 KiB   | Keys are UTF-8 strings              |
| Maximum value size     | 128 KiB | Values are raw bytes                |
| Maximum operation size | 16 MiB  | Total size of a single RPC request  |
| Cache soft limit       | 32 MiB  | Default, configurable               |
| Cache hard limit       | 64 MiB  | Default; isolate reset on exceed    |
| Dirty list limit       | 8 MiB   | Backpressure applied when exceeded  |
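A value larger than the per-value limit can still be stored by splitting it across multiple keys. A sketch of one possible convention (the `:chunk:`/`:meta` key scheme and helper names are hypothetical, not part of the storage API):

```javascript
// Sketch: chunk a large byte array across multiple values, each under
// the 128 KiB per-value limit.
const CHUNK_SIZE = 128 * 1024;

async function putLarge(storage, key, bytes) {
  const count = Math.ceil(bytes.length / CHUNK_SIZE);
  const entries = { [`${key}:meta`]: count };
  for (let i = 0; i < count; i++) {
    entries[`${key}:chunk:${i}`] = bytes.subarray(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
  }
  // One multi-key put; still subject to the 16 MiB operation limit.
  await storage.put(entries);
}

async function getLarge(storage, key) {
  const count = await storage.get(`${key}:meta`);
  if (count === undefined) return undefined;
  const keys = Array.from({ length: count }, (_, i) => `${key}:chunk:${i}`);
  const chunks = await storage.get(keys); // batched read returning a Map
  const out = new Uint8Array(count * CHUNK_SIZE); // upper bound; trimmed below
  let offset = 0;
  for (const k of keys) {
    const part = chunks.get(k);
    out.set(part, offset);
    offset += part.length;
  }
  return out.subarray(0, offset);
}
```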

Local development

For local workerd instances:
# Storage is written to local disk
workerd serve config.capnp
# Data stored in .workerd/state/ directory
In local development, storage always lives on the same machine running workerd. In production, Durable Objects are typically distributed across many machines, with each object living on one machine at a time.

Debugging tips

  1. Monitor cache size: Large list operations can exceed cache limits
  2. Check flush timing: Use storage.sync() to force flushes for testing
  3. Review transaction boundaries: Ensure atomic operations are in explicit transactions
  4. Profile write batching: Unnecessary awaits between writes reduce performance
