Caching exists at every layer of the stack: browser, CDN, reverse proxy, application (Redis), and database buffer pool. Each layer has different hit rate, invalidation complexity, and latency characteristics.
A cache hit at any layer prevents all lower layers from serving the request. Design caching strategies from the user outward: browser → CDN → reverse proxy → application → database.
```
// Request cache waterfall
1. Browser cache:   hit if within max-age
2. CDN PoP:         hit if within s-maxage
3. Reverse proxy:   Varnish full-page cache
4. Redis:           application key-value lookup
5. DB read replica: indexed query
6. DB primary:      full query + write path
// Each hit prevents all lower layers from serving
```
Measure cache hit rate per layer in production. A namespace with a hit rate below 50% is either poorly keyed, has TTLs that are too short, or caches data that changes too frequently to benefit.
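As a sketch of what per-namespace measurement looks like, the counters below track hits and misses in memory; in production these would be emitted as metrics (the `record`/`hitRate` names and namespace scheme are illustrative assumptions, not a specific library's API):

```javascript
// Per-namespace hit/miss counters (in-memory sketch)
const stats = new Map();

function record(namespace, hit) {
  const s = stats.get(namespace) || { hits: 0, misses: 0 };
  if (hit) s.hits++; else s.misses++;
  stats.set(namespace, s);
}

function hitRate(namespace) {
  const s = stats.get(namespace) || { hits: 0, misses: 0 };
  const total = s.hits + s.misses;
  return total === 0 ? 0 : s.hits / total;
}

// Example: a namespace trending below 0.50 is a candidate for
// re-keying, longer TTLs, or removal from the cache entirely
record('user', true);
record('user', true);
record('user', false);
console.log(hitRate('user').toFixed(2)); // → "0.67"
```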
Pattern: Application controls reads; load from DB on miss
```javascript
// Cache-Aside read path
async function getUser(id) {
  // 1. Try cache
  const hit = await redis.get(`user:${id}`);
  if (hit) return JSON.parse(hit);
  // 2. Cache miss: load from DB
  const user = await db.get(id);
  // 3. Populate cache with TTL + jitter
  const ttl = 300 + Math.floor(Math.random() * 30);
  await redis.setex(`user:${id}`, ttl, JSON.stringify(user));
  return user;
}

// Write path: invalidate cache
async function updateUser(id, data) {
  await db.update(id, data);
  await redis.del(`user:${id}`); // invalidate
}
```
Pros:
Simple and most common
Cache failure doesn’t break the system
Only requested data is cached
Cons:
Cache miss adds latency (DB round trip)
Thundering herd on cold cache
Stale data until invalidation
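One common mitigation for the thundering-herd problem is request coalescing (sometimes called single-flight): concurrent misses for the same key share one backing load instead of each hitting the DB. A minimal in-memory sketch, where `inFlight` and the `loadUser` loader are illustrative assumptions:

```javascript
// Single-flight: concurrent misses for one key share a single load
const inFlight = new Map(); // key -> pending Promise

async function coalescedGet(key, loader) {
  if (inFlight.has(key)) return inFlight.get(key); // join the in-flight load
  const p = loader(key).finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}

// Demo: three concurrent misses for the same key trigger only one load
let loads = 0;
const loadUser = async (key) => { loads++; return { id: key }; };

Promise.all([
  coalescedGet('user:1', loadUser),
  coalescedGet('user:1', loadUser),
  coalescedGet('user:1', loadUser),
]).then(() => console.log(loads)); // → 1
```

The same idea applied across processes usually takes the form of a short-lived distributed lock or a "stale-while-revalidate" window, at the cost of extra coordination.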
Pattern: Write to cache and DB synchronously
```javascript
// Write-Through: write to both
async function updateUser(id, data) {
  await db.update(id, data);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(data));
}

// Read always hits cache (pre-warmed)
async function getUser(id) {
  const hit = await redis.get(`user:${id}`);
  if (hit) return JSON.parse(hit);
  // Rare: cache miss (evicted or cold start)
  const user = await db.get(id);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
  return user;
}
```
Pros:
Cache always has fresh data
Read always hits cache (fast)
No thundering herd
Cons:
Write latency = cache + DB combined
Wastes cache space (caches everything)
Cache failure blocks writes
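The last con can be softened by making the cache write best-effort: on cache failure the write degrades to cache-aside instead of blocking. A sketch using in-memory stand-ins for the DB and cache (the `db`, `cache`, and `cacheDown` names are assumptions for illustration):

```javascript
// In-memory stand-ins for the DB and cache
const db = new Map();
const cache = new Map();
let cacheDown = false;

async function cacheSet(key, value) {
  if (cacheDown) throw new Error('cache unavailable');
  cache.set(key, value);
}

// Write-through, but the cache write is best-effort
async function updateUser(id, data) {
  db.set(id, data); // the DB write is the source of truth
  try {
    await cacheSet(`user:${id}`, JSON.stringify(data));
  } catch (err) {
    cache.delete(`user:${id}`); // degrade: readers fall back to the DB
  }
}

// With the cache down, the write still succeeds
cacheDown = true;
updateUser(1, { name: 'Alice' }).then(() => console.log(db.get(1).name)); // → Alice
```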
Pattern: Write to cache, async flush to DB
```javascript
// Write-Behind: immediate cache, async DB
async function updateUser(id, data) {
  // 1. Write to cache (fast)
  await redis.setex(`user:${id}`, 3600, JSON.stringify(data));
  // 2. Queue DB write (async)
  await queue.send({ type: 'user_update', id: id, data: data });
}

// Background worker flushes to DB
queue.process(async (job) => {
  await db.update(job.id, job.data);
});
```
Pros:
Lowest write latency
Batching possible (high throughput)
Reduces DB write load
Cons:
Data loss if cache fails before flush
Eventual consistency
Complex failure recovery
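The batching benefit is easiest to see when writes to the same key coalesce: if a user is updated three times between flushes, only the last version reaches the DB. A minimal sketch with an in-memory pending map (the `pending`/`enqueue`/`flush` names are illustrative, not a real queue API):

```javascript
// Pending write buffer: later writes to a key coalesce earlier ones
const pending = new Map(); // id -> latest data

function enqueue(id, data) {
  pending.set(id, data);
}

function flush(db) {
  for (const [id, data] of pending) db.set(id, data); // one DB write per key
  const count = pending.size;
  pending.clear();
  return count;
}

// Three updates to the same user collapse into a single DB write
const db = new Map();
enqueue(1, { name: 'A' });
enqueue(1, { name: 'B' });
enqueue(1, { name: 'C' });
console.log(flush(db)); // → 1
```

The data-loss con is visible here too: anything still in `pending` when the process dies never reaches the DB, which is why real write-behind systems back the buffer with a durable queue.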
Pattern: Pre-populate cache before expiry
```javascript
// Refresh-Ahead: proactive refresh for hot keys
async function getUser(id) {
  const hit = await redis.get(`user:${id}`);
  if (hit) {
    // Check remaining TTL on the key itself (base TTL: 300s)
    const ttl = await redis.ttl(`user:${id}`);
    // Refresh when less than 20% of the TTL remains
    if (ttl < 60) {
      // Async refresh (don't block the read)
      refreshUserCache(id).catch(err => logger.error(err));
    }
    return JSON.parse(hit);
  }
  // Cache miss: blocking load
  return await loadAndCache(id);
}
```
Pros:
Eliminates cache-miss latency for hot keys
Always-fresh data for frequently accessed items
Cons:
Wastes resources on cold keys
Requires heuristic for “hot” keys
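One simple hotness heuristic (an assumption for illustration, not the only approach) is to count accesses per key in a fixed window and only refresh-ahead keys above a threshold, resetting the counts each window so cold keys decay:

```javascript
// Windowed access counter as a "hot key" heuristic
const counts = new Map();

function recordAccess(key) {
  counts.set(key, (counts.get(key) || 0) + 1);
}

function isHot(key, threshold = 10) {
  return (counts.get(key) || 0) >= threshold;
}

// Call on a timer (e.g. every 60s) so cold keys stop being refreshed
function resetWindow() {
  counts.clear();
}

for (let i = 0; i < 12; i++) recordAccess('user:1');
recordAccess('user:2');
console.log(isHot('user:1'), isHot('user:2')); // → true false
```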
Cache-Aside is the default choice for most applications. Only use Write-Through or Write-Behind when you have specific requirements and understand the trade-offs.
TTL jitter prevents synchronized mass expiry. Without jitter, all cached items set at the same time expire simultaneously, causing a thundering herd to the database.
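The jitter itself is a one-line helper: base TTL plus a random spread, so keys written at the same moment expire across a window rather than simultaneously (the `jitteredTtl` name is illustrative):

```javascript
// Base TTL plus a uniform random spread in seconds
function jitteredTtl(baseSeconds, spreadSeconds) {
  return baseSeconds + Math.floor(Math.random() * spreadSeconds);
}

// 1000 items cached together now expire spread across a 60s window
const ttl = jitteredTtl(300, 60); // somewhere in [300, 360)
```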
```javascript
// Single key invalidation
await redis.del(`user:${userId}`);

// Pattern-based invalidation (KEYS inside Lua still blocks the server)
await redis.eval(`
  local keys = redis.call('keys', ARGV[1])
  for i=1,#keys,5000 do
    redis.call('del', unpack(keys, i, math.min(i+4999, #keys)))
  end
  return #keys
`, 0, 'user:*');

// Tagged invalidation (using Redis Sets)
await redis.sadd(`tag:orders:user:${userId}`, `order:${orderId}`);

// Invalidate all orders for a user
const keys = await redis.smembers(`tag:orders:user:${userId}`);
await redis.del(...keys);
await redis.del(`tag:orders:user:${userId}`);
```
KEYS pattern matching in Redis blocks the server. Use SCAN for production, or maintain explicit tag sets for bulk invalidation.
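The SCAN approach iterates the keyspace with a cursor in small pages, so no single call blocks. The sketch below mocks Redis's cursor contract over an in-memory Map to show the loop shape (ioredis exposes the real call as `redis.scan(cursor, 'MATCH', pattern, 'COUNT', n)`; the `store`, `scan`, and `deletePattern` names here are illustrative assumptions):

```javascript
// Mock of the SCAN cursor contract over an in-memory Map
const store = new Map([
  ['user:1', 'a'], ['user:2', 'b'], ['session:x', 'c'], ['user:3', 'd'],
]);

function scan(cursor, pattern, count) {
  const keys = [...store.keys()];
  const page = keys.slice(cursor, cursor + count);
  const next = cursor + count >= keys.length ? 0 : cursor + count; // 0 = done
  const re = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
  return [next, page.filter((k) => re.test(k))];
}

// Collect matches across cursor iterations, then delete in one pass
function deletePattern(pattern) {
  const toDelete = [];
  let cursor = 0;
  do {
    const [next, keys] = scan(cursor, pattern, 2); // small COUNT per round trip
    toDelete.push(...keys);
    cursor = next;
  } while (cursor !== 0);
  for (const k of toDelete) store.delete(k);
  return toDelete.length;
}

console.log(deletePattern('user:*')); // → 3
```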
```html
<!-- BAD: must revalidate every time -->
<script src="/js/app.js"></script>
<link rel="stylesheet" href="/css/main.css">

<!-- GOOD: 1-year TTL + instant "invalidation" via new filename -->
<script src="/js/app.v2.5.3.min.js"></script>
<link rel="stylesheet" href="/css/main.a3f8d9c.min.css">
```
```
Cache-Control: public, max-age=31536000, immutable
// 1 year TTL: file never changes
// "Invalidate" by deploying new filename
```
Use versioned filenames (content hash or semver) with 1-year TTLs for static assets. This achieves instant cache “invalidation” without waiting for CDN TTL expiry.
```
# Simple key-value
SET user:1001 '{"name":"Alice","email":"[email protected]"}'
GET user:1001

# Atomic increment
INCR pageviews:post:42
INCRBY rate_limit:user:1001 1

# Set with expiry
SETEX session:abc123 3600 '{"userId":1001}'
```
```
# Store object fields separately
HSET user:1001 name "Alice" email "[email protected]" age 30
HGET user:1001 name
HGETALL user:1001

# Atomic field increment
HINCRBY user:1001 login_count 1

# Better than JSON for partial updates
HSET user:1001 email "[email protected]"  # only update one field
```
```
# Unique set membership
SADD tags:post:42 "redis" "caching" "performance"
SISMEMBER tags:post:42 "redis"  # O(1) membership test

# Set operations
SINTER tags:post:42 tags:post:43  # common tags
SUNION tags:post:42 tags:post:43  # all tags

# Random sampling
SRANDMEMBER tags:post:42 3  # get 3 random tags
```
Use allkeys-lru for cache workloads where all keys are candidates for eviction. Use volatile-lru when you have mixed data (persistent + cache) and only cache keys have TTLs.
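To make the eviction behavior concrete, here is a minimal LRU cache using a JavaScript Map's insertion-order iteration. Note this is exact LRU for illustration; Redis itself uses an approximate LRU based on sampling, which behaves similarly at much lower bookkeeping cost:

```javascript
// Minimal exact-LRU cache: on overflow, evict the least recently used key
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // iteration order = insertion order
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);       // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      const lru = this.map.keys().next().value; // oldest entry
      this.map.delete(lru);
    }
  }
}

const cache = new LruCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a', so 'b' is now least recently used
cache.set('c', 3); // capacity exceeded: evicts 'b'
console.log([...cache.map.keys()]); // → ['a', 'c']
```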