Rotation creates a new active key record and deactivates the old one. Old keys remain available for decrypt compatibility, but any attempt to encrypt with an inactive key is rejected.
Source: src/keystore/postgres.rs:68-98

The Postgres implementation uses transactions to ensure atomicity:
```rust
fn rotate_key(&self, key_id: Uuid) -> Result<KeyMetadata> {
    let new_id = Uuid::new_v4();
    let material = generate_key_material();
    let rt = tokio::runtime::Handle::current();
    let metadata = rt.block_on(async {
        let mut tx = self.pool.begin().await?;

        // 1. Fetch lineage and version from old key
        let row = sqlx::query("SELECT lineage_id, version FROM keys WHERE key_id=$1")
            .bind(key_id)
            .fetch_optional(&mut *tx)
            .await?
            .ok_or(QimemError::KeyNotFound(key_id))?;
        let lineage_id: Uuid = row.try_get("lineage_id")?;
        let version: i32 = row.try_get("version")?;

        // 2. Deactivate old key
        sqlx::query("UPDATE keys SET active=false WHERE key_id=$1")
            .bind(key_id)
            .execute(&mut *tx)
            .await?;

        // 3. Insert new active key
        sqlx::query("INSERT INTO keys (key_id, lineage_id, version, active, material) VALUES ($1,$2,$3,$4,$5)")
            .bind(new_id)
            .bind(lineage_id)
            .bind(version + 1)
            .bind(true)
            .bind(material.as_slice())
            .execute(&mut *tx)
            .await?;

        tx.commit().await?;

        Ok::<KeyMetadata, QimemError>(KeyMetadata {
            key_id: new_id,
            lineage_id,
            version: version + 1,
            active: true,
        })
    })?;
    Ok(metadata)
}
```
If the transaction fails, the rotation is rolled back, leaving the old key still active. Always check the returned KeyMetadata to confirm the new key_id.
Source: src/crypto.rs:58-60

Attempting to encrypt with an inactive key fails immediately:
```rust
if !key.active {
    return Err(QimemError::KeyInactive(key.key_id));
}
```
Example:
```rust
let key_v1 = store.get_key(metadata_v1.key_id)?;
// After rotation, key_v1.active == false
let result = engine.encrypt(&key_v1, b"new data");
assert!(matches!(result, Err(QimemError::KeyInactive(_))));
```
```rust
// Decrypt with old key
let plaintext = engine.decrypt(&key_v1, &old_envelope)?;
// Re-encrypt with new key
let new_envelope = engine.encrypt(&key_v2, &plaintext)?;
```
Re-encryption requires decrypting all data, which may impact performance for large datasets. Consider batch processing with rate limiting.