The zero-cache package provides the server-side cache and sync engine that maintains a SQLite replica of your PostgreSQL database and handles real-time synchronization with clients.

Starting zero-cache

Command Line

Start the zero-cache server:
npx zero-cache

Programmatic Usage

// Run zero-cache in-process; configuration is read from the
// environment object passed as the second argument.
import runWorker from 'zero-cache/src/server/runner/main';
import {parentWorker} from 'zero-cache/src/types/processes';

await runWorker(parentWorker, process.env);

Configuration

Configure zero-cache via environment variables or command-line flags. Every flag has an equivalent environment variable formed by adding the ZERO_ prefix.

Core Options

upstream-db
string
required
The upstream PostgreSQL database connection string.
ZERO_UPSTREAM_DB=postgres://user:pass@localhost:5432/mydb
port
number
default:"4848"
The port for sync connections.
ZERO_PORT=4848
app-id
string
default:"zero"
Unique identifier for the app. Multiple zero-cache apps can run on a single upstream database, each isolated with its own permissions and metadata. Must contain only lowercase letters, numbers, and underscores.
ZERO_APP_ID=myapp
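Since multiple apps can share one upstream database, running two isolated instances side by side might look like the following sketch (the app IDs, ports, and replica paths are illustrative, not required values):

```shell
# Two isolated zero-cache instances sharing one upstream database.
# Each gets its own app-id, sync port, and SQLite replica file.
ZERO_APP_ID=app_one ZERO_PORT=4848 ZERO_REPLICA_FILE=/data/app_one.db \
  npx zero-cache &

ZERO_APP_ID=app_two ZERO_PORT=4850 ZERO_REPLICA_FILE=/data/app_two.db \
  npx zero-cache &
```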

Replication Options

app-publications
string[]
PostgreSQL publications that define the tables and columns to replicate. Publication names may not begin with an underscore. If unspecified, zero-cache creates an internal publication for all tables in the public schema.
ZERO_APP_PUBLICATIONS=my_publication,another_publication
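Publications are a standard PostgreSQL feature; a minimal sketch of creating one that restricts replication to specific tables (the table names here are hypothetical examples) could be:

```shell
# Create a publication covering only two tables; zero-cache then
# replicates just the tables (and columns) the publication includes.
# "users" and "issues" are hypothetical table names.
psql "$ZERO_UPSTREAM_DB" <<'SQL'
CREATE PUBLICATION my_publication FOR TABLE users, issues;
SQL
```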
replica-file
string
default:"zero.db"
File path to the SQLite replica that zero-cache maintains.
ZERO_REPLICA_FILE=/data/zero.db
replica-vacuum-interval-hours
number
Performs a VACUUM at server startup if the specified number of hours has elapsed since the last VACUUM. The VACUUM operation requires double the database size in disk space.
ZERO_REPLICA_VACUUM_INTERVAL_HOURS=168

Database Connections

upstream-max-conns
number
default:"20"
Maximum number of connections to the upstream database for committing mutations. Divided evenly among sync workers. Must allow at least one connection per sync worker.
ZERO_UPSTREAM_MAX_CONNS=20
cvr-db
string
PostgreSQL database for storing CVRs (client view records). CVRs track data synced to clients to determine diffs on reconnect. If unspecified, uses upstream-db.
ZERO_CVR_DB=postgres://user:pass@localhost:5432/cvr_db
cvr-max-conns
number
default:"30"
Maximum number of connections to the CVR database. Divided evenly among sync workers.
ZERO_CVR_MAX_CONNS=30
change-db
string
PostgreSQL database for storing recent replication log entries to sync multiple view-syncers without requiring multiple replication slots. If unspecified, uses upstream-db.
ZERO_CHANGE_DB=postgres://user:pass@localhost:5432/change_db

Performance Options

num-sync-workers
number
Number of processes to use for view syncing. Leave unset to use maximum available parallelism. Set to 0 to run as a replication-manager without sync workers.
ZERO_NUM_SYNC_WORKERS=4
yield-threshold-ms
number
default:"10"
Maximum time in milliseconds a sync worker spends in IVM before yielding to the event loop. Lower values increase responsiveness at the cost of reduced throughput.
ZERO_YIELD_THRESHOLD_MS=10
enable-query-planner
boolean
default:"true"
Enable the query planner for optimizing ZQL queries. The query planner determines the most efficient join strategies.
ZERO_ENABLE_QUERY_PLANNER=true

CVR Garbage Collection

cvr-garbage-collection-inactivity-threshold-hours
number
default:"48"
Duration after which an inactive CVR is eligible for garbage collection.
ZERO_CVR_GARBAGE_COLLECTION_INACTIVITY_THRESHOLD_HOURS=48
cvr-garbage-collection-initial-interval-seconds
number
default:"60"
Initial interval for checking and garbage collecting inactive CVRs. Increased exponentially (up to 16 minutes) when there is nothing to purge.
ZERO_CVR_GARBAGE_COLLECTION_INITIAL_INTERVAL_SECONDS=60
cvr-garbage-collection-initial-batch-size
number
default:"25"
Initial number of CVRs to purge per interval. Increased linearly if new CVRs exceed purged CVRs. Set to 0 to disable CVR garbage collection.
ZERO_CVR_GARBAGE_COLLECTION_INITIAL_BATCH_SIZE=25

Rate Limiting

per-user-mutation-limit-max
number
Maximum mutations per user within the specified window. If unset, no rate limiting is enforced.
ZERO_PER_USER_MUTATION_LIMIT_MAX=1000
per-user-mutation-limit-window-ms
number
default:"60000"
Sliding window in milliseconds over which the mutation limit is enforced.
ZERO_PER_USER_MUTATION_LIMIT_WINDOW_MS=60000

Multi-Node Setup

change-streamer-uri
string
URI of the change-streamer. In multi-node setups, view-syncers should point to the replication-manager URI (runs on port 4849).
ZERO_CHANGE_STREAMER_URI=ws://replication-manager:4849
change-streamer-mode
'dedicated' | 'discover'
default:"dedicated"
Alternative to change-streamer-uri. Set to discover to connect to the IP address registered by the replication-manager. Ignored if change-streamer-uri is set.
ZERO_CHANGE_STREAMER_MODE=discover
change-streamer-port
number
Port on which the change-streamer runs. Internal protocol between replication-manager and view-syncers.If unspecified, defaults to port + 1.
ZERO_CHANGE_STREAMER_PORT=4849
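Putting the flags above together, a multi-node deployment might be sketched as one replication-manager plus one or more view-syncer nodes (hostnames and connection strings are illustrative):

```shell
# Node 1: replication-manager only. Setting num-sync-workers to 0
# disables sync workers, as described under Performance Options.
ZERO_NUM_SYNC_WORKERS=0 \
ZERO_UPSTREAM_DB=postgres://user:pass@db:5432/mydb \
  npx zero-cache

# Nodes 2..N: view-syncers pointing at the replication-manager's
# change-streamer (port 4849, i.e. the default port + 1).
ZERO_CHANGE_STREAMER_URI=ws://replication-manager:4849 \
ZERO_UPSTREAM_DB=postgres://user:pass@db:5432/mydb \
  npx zero-cache
```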

Development Options

lazy-startup
boolean
Delay zero-cache startup until the first client connection. Useful for development.
ZERO_LAZY_STARTUP=true
query-hydration-stats
boolean
Track and log the number of rows considered by slow query hydrations. Useful for debugging and performance tuning.
ZERO_QUERY_HYDRATION_STATS=true
task-id
string
Globally unique identifier for the zero-cache instance. Useful for debugging. If unspecified, attempts to extract the TaskARN from AWS ECS or uses a random string.
ZERO_TASK_ID=my-task-123

Environment Variables

All command-line flags can be set via environment variables with the ZERO_ prefix:
# Command-line flag
npx zero-cache --port=4848

# Equivalent environment variable
ZERO_PORT=4848 npx zero-cache

Production Deployment

For production deployments:
  1. Set explicit database connections for upstream-db, cvr-db, and change-db
  2. Configure rate limiting with per-user-mutation-limit-max
  3. Enable CVR garbage collection with appropriate thresholds
  4. Set connection pool sizes based on your workload
  5. Use a persistent volume for replica-file
  6. Configure multi-node setup if running multiple instances
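The checklist above might translate into an environment file along these lines (connection strings and numbers are placeholders, not recommendations):

```shell
# Explicit database connections for upstream, CVR, and change data.
ZERO_UPSTREAM_DB=postgres://user:pass@db:5432/mydb
ZERO_CVR_DB=postgres://user:pass@db:5432/cvr_db
ZERO_CHANGE_DB=postgres://user:pass@db:5432/change_db

# Rate limiting: at most 1000 mutations per user per 60-second window.
ZERO_PER_USER_MUTATION_LIMIT_MAX=1000
ZERO_PER_USER_MUTATION_LIMIT_WINDOW_MS=60000

# CVR garbage collection threshold.
ZERO_CVR_GARBAGE_COLLECTION_INACTIVITY_THRESHOLD_HOURS=48

# Connection pools sized to the workload.
ZERO_UPSTREAM_MAX_CONNS=20
ZERO_CVR_MAX_CONNS=30

# SQLite replica on a persistent volume.
ZERO_REPLICA_FILE=/data/zero.db
```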
