Motivation
The 3.x release adds support for any Redis-compatible KV database and introduces per-application key namespacing. Two main drivers:
- Redis licensing change. Redis is no longer open source (now source-available), which restricts which versions and updates applications can use. Condo is moving toward Valkey as the primary KV backend.
- Cluster compatibility. The previous setup used Redis virtual databases (/0, /1, etc.) to separate applications on a single host. Redis clusters do not support virtual databases. To support sharded clusters without running a separate instance per application, all apps now share one cluster with isolated key namespaces.
Breaking changes
1. @open-condo/keystone/redis renamed to @open-condo/keystone/kv
The module path has changed to decouple the package name from the underlying storage technology. The ioredis client library (MIT-licensed) is still used internally, so all changes remain compatible with Redis.
Update your imports:
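For example (only the module path changes; the exported name `getKVClient` below is illustrative, keep whatever your code already imports):

```javascript
// Before (2.x):
// const { getKVClient } = require('@open-condo/keystone/redis')

// After (3.x):
const { getKVClient } = require('@open-condo/keystone/kv')
```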
2. KV storage keys are now prefixed with the app name
Keys written to KV storage are automatically prefixed with the application name, derived from the package.json in process.cwd(). The transformation follows these rules:
- Take the package name (e.g. @app/resident-app)
- Strip the scope (resident-app)
- Convert to snake_case (resident_app)
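The rules above can be sketched as a small function (the name `getKeyPrefix` is illustrative; the real derivation lives inside the package):

```javascript
// Derive the KV key prefix from a package.json "name" field,
// following the documented rules: strip the scope, then snake_case.
function getKeyPrefix(packageName) {
  const unscoped = packageName.replace(/^@[^/]+\//, '') // '@app/resident-app' -> 'resident-app'
  return unscoped.replace(/-/g, '_')                    // 'resident-app' -> 'resident_app'
}

console.log(getKeyPrefix('@app/resident-app')) // resident_app
```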
ioredis handles the prefix transparently:
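A minimal sketch of how this works, using ioredis's real `keyPrefix` option (the prefix value and env var are illustrative):

```javascript
const Redis = require('ioredis')

// keyPrefix is prepended to every key client-side, so application
// code keeps using unprefixed keys.
const kv = new Redis(process.env.KV_URL, { keyPrefix: 'resident_app:' })

async function demo() {
  await kv.set('session:123', 'value') // stored as "resident_app:session:123"
  return kv.get('session:123')         // prefix is applied on reads too
}
```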
Bull queue keys now use hashtags to ensure cluster compatibility. For example:
{condo:bull:queue_name}:rest_of_key. This is handled automatically.
Installing @open-condo/migrator
Before migrating existing KV data, install the migration tool.
- From npm
- Local build
Install the published package globally:
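Assuming standard npm packaging, a global install would look like:

```shell
npm install --global @open-condo/migrator
```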
Migration options
Option 1: Local migration
Use this approach for local development environments.
Stop all applications and workers
Shut down every running app and worker to prevent writes to KV storage during migration.
Option 2: Remote migration with downtime
Use this approach if you can tolerate a few minutes of downtime per application. Internal tests show the migrator can process approximately 3–4 million keys per minute. Use this figure to estimate downtime for your dataset.
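As a back-of-the-envelope estimate (the function name is illustrative; get your key count from DBSIZE on the source instance):

```javascript
// Downtime estimate from the measured migrator throughput above
// (3-4 million keys per minute; 3M is the conservative figure).
function estimateDowntimeMinutes(totalKeys, keysPerMinute = 3_000_000) {
  return totalKeys / keysPerMinute
}

console.log(estimateDowntimeMinutes(12_000_000)) // 4 minutes at 3M keys/min
```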
Option 3: Minimal-downtime migration with RedisShake
Use this approach for environments with strict SLAs where even brief downtime is unacceptable. This is an advanced procedure. The strategy: spin up a second KV instance, use RedisShake to replicate all keys with the new prefixes in real time, then cut over to the new instance at release time.
Provision a new KV instance
Set up a second Redis or Valkey instance that will become the target after migration.
Configure and run RedisShake
Follow the RedisShake setup instructions. Use the configuration below as a starting point, adjusting addresses and the db_hash mapping to match your application-to-database-index layout. For example, if condo uses database index 0 and address-service uses index 1, add [1] = "address_service:" to the table.
Run RedisShake and wait for the initial full sync to complete.
Remove the protection guard from the source database
Once RedisShake is running in real-time replication mode, run the following on the source database to remove the startup guard:
RedisShake will replicate this key (with prefix) to the new database, allowing 3.x apps to start.
Deploy 3.x applications
Deploy the new application version pointing to the new KV instance. Update KV_URL / REDIS_URL accordingly.
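For example, in each application's environment (the address is illustrative):

```shell
# Point the app at the new KV instance
KV_URL=redis://new-kv.example.internal:6379
```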