The Indicator Service uses Redis as a read-through cache in front of MongoDB. The caching layer has two goals: reduce MongoDB query load for frequently accessed indicators, and serve repeated queries for the same time range and aggregation parameters instantly.

Redis key structure

Three key prefixes are used, each serving a different purpose:
| Prefix | Setting | Purpose |
| --- | --- | --- |
| indicator_data: | CACHE_KEY_PREFIX | Cached arrays of data points |
| indicator_miss: | CACHE_COUNTER_PREFIX | Miss counters per query signature |
| indicator_stats: | STATS_CACHE_PREFIX (hardcoded) | Cached statistics objects |

Data cache keys

Data cache keys encode the full query signature so that different parameter combinations produce different cache entries:
indicator_data:{indicator_id}:{granularity}:{aggregator}:{extra_params}
Example key for a raw (no aggregation) paginated query:
indicator_data:64b2a1d3c9e5f23e4d7a0123:0:last:limit:100:skip:0:sort:asc
Example key for an hourly-average query with a date range:
indicator_data:64b2a1d3c9e5f23e4d7a0123:1h:avg:end_date:2024-06-30T23:59:59:limit:100:skip:0:sort:asc:start_date:2024-01-01T00:00:00
Extra parameters (skip, limit, sort, start_date, end_date) are always sorted alphabetically before being appended to the key. This guarantees that the same logical query always produces the same cache key regardless of the order query parameters arrive.
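This construction can be sketched in Python as follows (a minimal sketch: build_data_cache_key is an assumed name, not necessarily the service's real helper):

```python
CACHE_KEY_PREFIX = "indicator_data:"

def build_data_cache_key(indicator_id, granularity="0", aggregator="last", **extra):
    """Build a data cache key; extra params are sorted alphabetically
    so the same logical query always yields the same key."""
    parts = [f"{CACHE_KEY_PREFIX}{indicator_id}", granularity, aggregator]
    for name in sorted(extra):  # deterministic order regardless of arrival order
        parts.append(f"{name}:{extra[name]}")
    return ":".join(parts)

# build_data_cache_key("64b2a1d3c9e5f23e4d7a0123", "0", "last",
#                      limit=100, skip=0, sort="asc")
# → "indicator_data:64b2a1d3c9e5f23e4d7a0123:0:last:limit:100:skip:0:sort:asc"
```

Because `sorted()` orders the parameter names, `?skip=0&limit=100` and `?limit=100&skip=0` collapse to one cache entry.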

Miss counter keys

Miss counter keys track how many times a given base query (indicator + granularity + aggregator) has missed the cache:
indicator_miss:{indicator_id}:{granularity}:{aggregator}:counter
Example:
indicator_miss:64b2a1d3c9e5f23e4d7a0123:1h:avg:counter

Statistics cache keys

Statistics cache keys follow a simpler structure — there are no query parameters because statistics are always computed over the full dataset:
indicator_stats:{indicator_id}
Example:
indicator_stats:64b2a1d3c9e5f23e4d7a0123

Cache TTLs

| Setting | Environment variable | Default | Notes |
| --- | --- | --- | --- |
| Data cache TTL | CACHE_TTL_SECONDS | 3600 s (1 hour) | Applied to both specific and full data cache entries |
| Miss counter TTL | MISS_COUNTER_TTL | 90 s | Counter expires if no further misses occur within the window |
| Statistics cache TTL | STATS_CACHE_TTL | 15 s | Short TTL because statistics change with every new data segment |
The short 15-second TTL for statistics is intentional. Statistics include min, max, and average across the full dataset, which change every time new data arrives. A 1-hour TTL would surface stale aggregates to dashboards.

Miss-threshold promotion

The service uses a two-tier caching strategy. The first tier caches the exact result for a specific query (including pagination and date filters). The second tier caches the full, unpaginated dataset for a given indicator + granularity + aggregator combination so that slice queries can be served from cache. Promotion to the second tier is triggered by the miss counter:
1. Cache miss: A request arrives for indicator_data:{id}:1h:avg:limit:100:skip:0:sort:asc. Neither the specific key nor the full-dataset key exists in Redis. The query runs against MongoDB.
2. Result cached: The query result is stored under the specific cache key with a TTL of CACHE_TTL_SECONDS (3600 s). The miss counter for indicator_miss:{id}:1h:avg:counter is incremented.
3. Threshold check: If the miss counter reaches MISS_THRESHOLD (default: 5), a background task is scheduled to cache the full (unpaginated) dataset under indicator_data:{id}:1h:avg with the same TTL.
4. Full cache hit: Subsequent requests for any slice of this indicator (different skip/limit/sort/date values) hit the full-dataset key, apply in-memory filtering and pagination, and cache their specific result too, all without touching MongoDB.
Request flow after full-dataset cache is populated:

Client → specific cache key? ──miss──► full cache key? ──hit──► slice in memory → cache specific key → return
The miss counter itself expires after MISS_COUNTER_TTL (90 s). If the pattern of requests stops before reaching the threshold, the counter resets and no full-dataset cache is created. This avoids caching datasets for one-off queries.

Cache invalidation

Cache invalidation happens automatically whenever a new DataSegment is stored or a resource deletion event is processed. The service scans Redis for all keys matching each prefix and deletes them:
# Three prefix scans per indicator on every data write
delete_keys_by_prefix(redis_client, f"indicator_data:{indicator_id}")
delete_keys_by_prefix(redis_client, f"indicator_miss:{indicator_id}")
delete_keys_by_prefix(redis_client, f"indicator_stats:{indicator_id}")
The scan uses the Redis SCAN command with a cursor loop to avoid blocking the server.
Invalidation clears all cache entries for the indicator — specific keys, the full-dataset key, and all miss counters. After a data write, the first few requests will always hit MongoDB until the cache is re-warmed.
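A sketch of what delete_keys_by_prefix might look like with redis-py's cursored scan (the helper name comes from the snippet above; the page size of 500 and the FakeScanClient stand-in, included so the sketch runs without a Redis server, are illustrative):

```python
def delete_keys_by_prefix(redis_client, prefix):
    """Delete every key starting with `prefix` using a SCAN cursor loop,
    so the server is never blocked the way a single KEYS call would."""
    cursor = 0
    while True:
        cursor, keys = redis_client.scan(cursor=cursor, match=prefix + "*", count=500)
        if keys:
            redis_client.delete(*keys)
        if cursor == 0:  # SCAN returns cursor 0 when the iteration is complete
            break

class FakeScanClient:
    """Dict-backed stand-in that returns everything in one SCAN page."""
    def __init__(self, keys):
        self.keys = set(keys)

    def scan(self, cursor=0, match=None, count=None):
        prefix = match.rstrip("*")
        return 0, [k for k in self.keys if k.startswith(prefix)]

    def delete(self, *keys):
        for k in keys:
            self.keys.discard(k)
```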

Granularity and aggregator parameters

Granularity and aggregator values are part of the cache key, so different aggregation settings produce completely independent cache entries. Two requests for the same indicator with different granularity or aggregator values are cached separately and do not share a miss counter.

Granularity format

Granularity is expressed as {amount}{unit}. Supported units:
| Unit | Meaning |
| --- | --- |
| s | seconds |
| m | minutes |
| h | hours |
| d | days |
| w | weeks |
| M | months |
| y | years |
Use 0 (or omit the parameter) for raw data with no time-bucketing.
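A parser for this format might look like the following (a sketch; parse_granularity is an assumed name, and note that m and M are case-sensitive):

```python
import re

# One or more digits followed by exactly one of the supported unit letters.
GRANULARITY_RE = re.compile(r"^(\d+)([smhdwMy])$")

def parse_granularity(value):
    """Parse '{amount}{unit}' into (amount, unit); '0' or None means raw data."""
    if value in (None, "0", ""):
        return None  # raw data, no time-bucketing
    m = GRANULARITY_RE.match(value)
    if not m:
        raise ValueError(f"invalid granularity: {value!r}")
    amount, unit = int(m.group(1)), m.group(2)
    if amount == 0:
        raise ValueError("granularity amount must be positive")
    return amount, unit
```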

Aggregator values

| Aggregator | Description |
| --- | --- |
| last (default) | Last value in each time bucket |
| first | First value in each time bucket |
| sum | Sum of all values in each bucket |
| avg | Arithmetic mean |
| median | Median value |
| max | Maximum value |
| min | Minimum value |
| p{N} | Nth percentile (e.g. p95 for the 95th percentile) |
# Request hourly median — cached under indicator_data:{id}:1h:median:...
curl "https://api.example.com/indicators/64b2a1d3c9e5f23e4d7a0123/data?granularity=1h&aggregator=median"

# Request 95th percentile per day
curl "https://api.example.com/indicators/64b2a1d3c9e5f23e4d7a0123/data?granularity=1d&aggregator=p95"
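The per-bucket reduction behind these aggregators can be sketched as follows (apply_aggregator is an assumed name; the nearest-rank percentile used here is one plausible choice, and the service may interpolate differently):

```python
import statistics

def apply_aggregator(values, aggregator="last"):
    """Reduce one time bucket of values with the named aggregator (sketch)."""
    if aggregator.startswith("p") and aggregator[1:].isdigit():
        n = int(aggregator[1:])
        # Nearest-rank percentile over the sorted bucket.
        ordered = sorted(values)
        idx = max(0, min(len(ordered) - 1, round(n / 100 * (len(ordered) - 1))))
        return ordered[idx]
    fns = {
        "last": lambda v: v[-1],
        "first": lambda v: v[0],
        "sum": sum,
        "avg": statistics.fmean,
        "median": statistics.median,
        "max": max,
        "min": min,
    }
    return fns[aggregator](values)
```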

Configuration reference

All cache settings can be overridden via environment variables:
# .env
CACHE_KEY_PREFIX=indicator_data:
CACHE_COUNTER_PREFIX=indicator_miss:
CACHE_TTL_SECONDS=3600
MISS_COUNTER_TTL=90
MISS_THRESHOLD=5
STATS_CACHE_TTL=15
