
Redis Caching Strategy

The Invernaderos API implements a cache-aside pattern using Redis Sorted Sets for real-time sensor data caching.

Overview

Data Structure

Redis Sorted Set (ZSET) holding timestamp-ordered messages

Capacity

Last 1000 messages per tenant, with a 24-hour TTL

Architecture

Why Redis Sorted Sets?

Redis Sorted Sets provide O(log N) time complexity for range queries, making them ideal for time-series data.
Key: "greenhouse:messages:{tenantId}"
Type: Sorted Set (ZSET)
Score: Timestamp (epoch milliseconds)
Value: JSON-serialized sensor data

Example:
┌─────────────────┬──────────────────────────────────────┐
│ Score           │ Value (JSON)                         │
├─────────────────┼──────────────────────────────────────┤
│ 1709827200000   │ {"temp":25.5,"humidity":65,...}     │
│ 1709827205000   │ {"temp":25.6,"humidity":64,...}     │
│ 1709827210000   │ {"temp":25.4,"humidity":66,...}     │
└─────────────────┴──────────────────────────────────────┘

   Stored sorted by timestamp (ascending score); retrieved newest-first via ZREVRANGE
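The sorted-set semantics can be sketched in plain Kotlin with a TreeMap keyed by timestamp (an illustration of the data structure only, not the actual Redis client):

```kotlin
import java.util.TreeMap

// Hypothetical in-memory stand-in for a tenant's ZSET: score (epoch ms) -> JSON value
val cache = TreeMap<Long, String>().apply {
    put(1709827200000, """{"temp":25.5,"humidity":65}""")
    put(1709827205000, """{"temp":25.6,"humidity":64}""")
    put(1709827210000, """{"temp":25.4,"humidity":66}""")
}

// ZREVRANGE equivalent: newest n timestamps, descending
fun newestTimestamps(n: Int): List<Long> = cache.descendingMap().keys.take(n)

// ZRANGEBYSCORE equivalent: timestamps inside a closed time window, ascending
fun windowTimestamps(min: Long, max: Long): List<Long> =
    cache.subMap(min, true, max, true).keys.toList()
```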

Multi-Tenant Isolation

Each tenant has its own isolated cache key to prevent cross-tenant data leakage.
private fun getMessagesKey(tenantId: String?): String {
    val safeTenantId = tenantId?.takeIf { it.isNotBlank() } ?: "DEFAULT"
    return "greenhouse:messages:$safeTenantId"
}

// Examples:
// tenant "SARA"    → "greenhouse:messages:SARA"
// tenant "001"     → "greenhouse:messages:001"
// legacy (null)    → "greenhouse:messages:DEFAULT"
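Because the key derivation is pure, it can be exercised outside Spring; this standalone copy behaves identically:

```kotlin
// Standalone copy of the service's key-derivation helper
fun getMessagesKey(tenantId: String?): String {
    val safeTenantId = tenantId?.takeIf { it.isNotBlank() } ?: "DEFAULT"
    return "greenhouse:messages:$safeTenantId"
}
```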

Implementation

GreenhouseCacheService.kt

The GreenhouseCacheService class implements all Redis operations with proper error handling and logging.

Redis Operations

Cache Operations with Time Complexity

ZADD (O(log N)): add message to sorted set
redisTemplate.opsForZSet().add(key, jsonValue, score)

ZREVRANGE (O(log N + M)): get last N messages (M = result size)
redisTemplate.opsForZSet().reverseRange(key, 0, limit - 1)

ZREVRANGEBYSCORE (O(log N + M)): get messages by time range
redisTemplate.opsForZSet().reverseRangeByScore(key, minScore, maxScore)

ZREMRANGEBYRANK (O(log N + M)): remove oldest messages when exceeding the limit
redisTemplate.opsForZSet().removeRange(key, 0, toRemove - 1)

ZCARD (O(1)): count total messages
redisTemplate.opsForZSet().size(key)

DEL (O(M), M = entries in the set): delete the entire cache key
redisTemplate.delete(key)

EXPIRE (O(1)): set the 24-hour TTL (renewed on each write)
redisTemplate.expire(key, 24, TimeUnit.HOURS)

Performance Characteristics

Write Speed

O(log N) insert, ~10,000 writes/sec

Read Speed

O(log N + M) query, less than 1 ms for 100 messages

Memory

~500 bytes per message, ~500KB per tenant (1000 msgs)

Cache Workflow

Message Flow

Cache Eviction Strategy

1. New message arrives: the MQTT listener receives sensor data and parses it to RealDataDto
2. Add to Redis: ZADD with the timestamp as score (epoch milliseconds)
3. Check size: ZCARD counts the current messages
4. Evict if needed: if > 1000 messages, ZREMRANGEBYRANK removes the oldest
5. Renew TTL: EXPIRE resets the 24-hour TTL on the key

The cache is self-cleaning: old messages are removed by both the 1000-message limit and the 24-hour TTL.
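The eviction workflow can be simulated with a plain-Kotlin stand-in mimicking ZADD, ZCARD, and ZREMRANGEBYRANK (a sketch of the trim logic, not the Redis client):

```kotlin
import java.util.TreeMap

val MAX_MESSAGES = 1000  // per-tenant cap from the workflow above

// Simulated cache write: insert, then trim the oldest entries beyond the cap
fun addAndTrim(zset: TreeMap<Long, String>, score: Long, json: String) {
    zset[score] = json                      // ZADD
    val excess = zset.size - MAX_MESSAGES   // ZCARD check
    repeat(excess.coerceAtLeast(0)) {
        zset.pollFirstEntry()               // ZREMRANGEBYRANK removes oldest first
    }
}

val zset = TreeMap<Long, String>()
for (t in 0L until 1005L) addAndTrim(zset, t, "{}")
```

After 1005 writes the simulated set holds exactly 1000 entries, with the five oldest evicted.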

Redis Configuration

Application Configuration

application.yaml
spring:
  data:
    redis:
      host: ${REDIS_HOST:138.199.157.58}     # K8s node IP (default)
      port: ${REDIS_PORT:30379}               # NodePort (DEV: 6379)
      password: ${REDIS_PASSWORD}             # From K8s Secret
      database: 0
      timeout: 60000ms                        # 60 seconds
      connect-timeout: 10000ms                # 10 seconds
      client-type: lettuce                    # Lettuce client (async, reactive)

      lettuce:
        pool:
          max-active: 100                     # Max connections
          max-idle: 50                        # Max idle connections
          min-idle: 10                        # Min idle connections
          max-wait: 3000ms                    # Max wait for connection
        shutdown-timeout: 2000ms

  cache:
    type: redis
    redis:
      time-to-live: 600000                    # 10 minutes (600000 ms)
      cache-null-values: false
      key-prefix: "ts-app::"                 # Prefix for @Cacheable keys
      use-key-prefix: true

Kubernetes Deployment

statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: apptolast-invernadero-api
spec:
  serviceName: redis
  replicas: 1
  template:
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        command: ["redis-server", "/etc/redis/redis.conf"]
        ports:
        - containerPort: 6379
        resources:
          requests:
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /etc/redis
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
redis.conf
# Memory Management
maxmemory 900mb
maxmemory-policy volatile-lru     # Evict keys with TTL using LRU

# Persistence
save 300 10                       # Save every 5 min if 10+ changes
save 60 10000                     # Save every 1 min if 10000+ changes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data

# Performance
timeout 300                       # Close idle clients after 5 minutes
tcp-keepalive 60
maxclients 10000

# Security
requirepass ${REDIS_PASSWORD}
protected-mode yes
rename-command FLUSHDB ""         # Disabled for safety
rename-command FLUSHALL ""        # Disabled for safety
rename-command CONFIG ""          # Disabled for safety

# Logging
loglevel notice
logfile ""

Connection Pooling (Lettuce)

Max Active: 100 connections, handles 100 concurrent requests

Min Idle: 10 connections, pre-warmed for instant response

Max Idle: 50 connections, a balance between performance and resources

Max Wait: 3 seconds, the timeout for acquiring a connection
Lettuce is preferred over Jedis because:
  • Asynchronous, non-blocking I/O with Netty
  • Thread-safe (single connection shared)
  • Reactive Streams support (Spring WebFlux)
  • Auto-reconnect on failure
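For reference, the lettuce.pool section of application.yaml maps roughly onto Spring Data Redis's programmatic configuration as below; the builder usage is a sketch and assumes commons-pool2 2.9+ for the Duration-based setMaxWait:

```kotlin
import java.time.Duration
import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration

// Programmatic equivalent of the lettuce.pool section in application.yaml
fun poolingClientConfig(): LettucePoolingClientConfiguration {
    val pool = GenericObjectPoolConfig<Any>().apply {
        maxTotal = 100                         // max-active
        maxIdle = 50                           // max-idle
        minIdle = 10                           // min-idle
        setMaxWait(Duration.ofMillis(3000))    // max-wait
    }
    return LettucePoolingClientConfiguration.builder()
        .poolConfig(pool)
        .shutdownTimeout(Duration.ofMillis(2000))
        .build()
}
```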

Usage Examples

Caching Sensor Data

@Service
class MqttMessageProcessor(
    private val cacheService: GreenhouseCacheService,
    private val repository: ReadingRepository,
    private val eventPublisher: ApplicationEventPublisher
) {
    
    @Transactional("timescaleTransactionManager")
    fun processGreenhouseData(payload: String, tenantId: String) {
        // 1. Parse JSON to DTO
        val message = payload.toRealDataDto(Instant.now(), tenantId)
        
        // 2. Cache in Redis (fast, non-blocking)
        cacheService.cacheMessage(message)
        
        // 3. Save to TimescaleDB (persistent)
        val readings = message.toReadings()
        repository.saveAll(readings)
        
        // 4. Publish to WebSocket clients
        eventPublisher.publishEvent(GreenhouseMessageEvent(this, message))
    }
}

Retrieving Cached Data

// Get last 100 messages for tenant "SARA"
val recentMessages = cacheService.getRecentMessages(
    tenantId = "SARA",
    limit = 100
)

recentMessages.forEach { message ->
    println("Temp: ${message.temperaturaInvernadero01}°C at ${message.timestamp}")
}

REST Endpoint Example

@RestController
@RequestMapping("/api/greenhouse")
class GreenhouseController(
    private val cacheService: GreenhouseCacheService
) {
    
    @GetMapping("/cache/recent")
    fun getRecentMessages(
        @RequestParam(required = false) tenantId: String?,
        @RequestParam(defaultValue = "100") limit: Int
    ): ResponseEntity<List<RealDataDto>> {
        val messages = cacheService.getRecentMessages(tenantId, limit)
        return ResponseEntity.ok(messages)
    }
    
    @GetMapping("/cache/latest")
    fun getLatestMessage(
        @RequestParam(required = false) tenantId: String?
    ): ResponseEntity<RealDataDto> {
        val message = cacheService.getLatestMessage(tenantId)
        return if (message != null) {
            ResponseEntity.ok(message)
        } else {
            ResponseEntity.notFound().build()
        }
    }
    
    @GetMapping("/cache/info")
    fun getCacheInfo(
        @RequestParam(required = false) tenantId: String?
    ): ResponseEntity<Map<String, Any>> {
        val stats = cacheService.getCacheStats(tenantId)
        return ResponseEntity.ok(stats)
    }
}

Monitoring and Maintenance

Health Check Endpoint

# Check cache connectivity
curl http://localhost:8080/api/greenhouse/cache/info

# Response:
{
  "totalMessages": 1000,
  "tenantId": "SARA",
  "cacheType": "Redis Sorted Set",
  "maxCapacity": 1000,
  "utilizationPercentage": 100.0
}
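The utilizationPercentage field is simply the message count over the 1000-message cap; a hypothetical helper (not from the service) showing the arithmetic:

```kotlin
// Hypothetical helper: cache utilization as a percentage of capacity
fun utilizationPercentage(totalMessages: Long, maxCapacity: Long = 1000L): Double =
    totalMessages.toDouble() / maxCapacity * 100.0
```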

Redis CLI Commands

# Connect to Redis
redis-cli -a "${REDIS_PASSWORD}"

# Count messages for tenant SARA
ZCARD greenhouse:messages:SARA
# Output: (integer) 1000

# Get oldest message timestamp
ZRANGE greenhouse:messages:SARA 0 0 WITHSCORES
# Output: 1) "{...json...}"
#         2) "1709827200000"

# Get newest message timestamp
ZREVRANGE greenhouse:messages:SARA 0 0 WITHSCORES
# Output: 1) "{...json...}"
#         2) "1709834400000"

Performance Tuning

Eviction Policy

volatile-lru: evicts least recently used keys that have a TTL

Persistence

RDB snapshots: every 5 min (10+ changes) or every 1 min (10k+ changes)

Compression

RDB compression ON: reduces disk usage by ~70%

Connection Pool

100 max connections: supports 100 concurrent API requests

Memory Limit: Redis is configured with maxmemory 900mb. If exceeded, keys with a TTL are evicted using the LRU algorithm.
Current Usage: ~500KB per tenant (1000 messages × 500 bytes)
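Back-of-the-envelope capacity math under the stated assumptions (500 bytes per message, 1000 messages per tenant, maxmemory 900mb); Redis per-entry overhead and fragmentation are ignored:

```kotlin
val bytesPerMessage = 500L
val messagesPerTenant = 1000L
val maxMemoryBytes = 900L * 1024 * 1024           // maxmemory 900mb

val bytesPerTenant = bytesPerMessage * messagesPerTenant  // ~500 KB per tenant
val tenantCapacity = maxMemoryBytes / bytesPerTenant      // tenants before eviction pressure
```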

Troubleshooting

Symptoms: New sensor data not appearing in cache

Possible Causes:
  1. Redis connection timeout
  2. Wrong tenant ID
  3. Redis memory full (eviction)
Solutions:
# Check Redis connectivity
redis-cli -a "${REDIS_PASSWORD}" PING

# Check memory usage
redis-cli -a "${REDIS_PASSWORD}" INFO memory

# Check cache key exists
redis-cli -a "${REDIS_PASSWORD}" EXISTS greenhouse:messages:SARA

# View application logs
kubectl logs -f deployment/invernaderos-api-prod | grep "Cache"
Symptoms: Redis using >900MB memory

Possible Causes:
  1. Too many tenants with 1000 messages each
  2. Large JSON payloads
  3. Memory fragmentation
Solutions:
# Check number of keys
redis-cli -a "${REDIS_PASSWORD}" DBSIZE

# Get memory usage per key type
redis-cli -a "${REDIS_PASSWORD}" --bigkeys

# Check fragmentation ratio
redis-cli -a "${REDIS_PASSWORD}" INFO memory | grep fragmentation

# If fragmentation > 1.5, restart Redis
kubectl rollout restart statefulset/redis -n apptolast-invernadero-api
Symptoms: API response time >500ms for cache queries

Possible Causes:
  1. Large result sets (requesting 1000+ messages)
  2. Redis CPU bottleneck
  3. Network latency
Solutions:
# Check Redis CPU usage
kubectl top pod -n apptolast-invernadero-api -l app=redis

# Monitor slow queries (>10ms)
redis-cli -a "${REDIS_PASSWORD}" SLOWLOG GET 10

# Reduce query limit
# Instead of: getRecentMessages(tenantId, 1000)
# Use:        getRecentMessages(tenantId, 100)

Database Architecture

TimescaleDB and PostgreSQL setup

Migrations

Flyway migration history

API Reference

Cache endpoints documentation
