Overview
The rate limiter controls how frequently sensor readings are saved to TimescaleDB, reducing database storage costs and improving query performance. It uses a configurable time-based throttling mechanism that works across multiple API pods using Redis.

Important: Rate limiting only affects what is saved to TimescaleDB. All messages continue to be:
- Cached in Redis in real-time
- Broadcast to WebSocket clients immediately
- Available for live monitoring
Configuration
Configure the rate limiter in `application.yaml`:
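A sketch of the expected properties, assuming a `rate-limiter` prefix; the key names follow those referenced elsewhere in this document:

```yaml
rate-limiter:
  enabled: true              # disable to persist every reading
  min-interval-seconds: 30   # minimum gap between TimescaleDB saves per sensor
  use-redis: true            # share timestamps across API pods via Redis
```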
Configuration Parameters
- `enabled`: Enable or disable the rate limiter. When disabled, all sensor readings are saved to TimescaleDB.
- `min-interval-seconds`: Minimum number of seconds that must elapse between saving readings from the same sensor. Readings received within this interval are dropped from TimescaleDB but still cached and broadcast.
- `use-redis`: Use Redis for distributed rate limiting across multiple API pods. When `false`, uses an in-memory cache (not recommended for production with multiple pods).

Get Rate Limiter Statistics
Endpoint
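The path matches the stats endpoint referenced later under Configuration Tips; the HTTP method is assumed:

```
GET /api/v1/rate-limiter/stats
```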
Response
- `enabled`: Whether the rate limiter is currently enabled
- `minIntervalSeconds`: Configured minimum interval between saves (in seconds)
- `useRedis`: Whether Redis is being used for distributed rate limiting
- `totalReceived`: Total number of sensor readings received since API startup
- `totalSaved`: Total number of readings saved to TimescaleDB
- `totalDropped`: Total number of readings dropped due to rate limiting
- `dropRate`: Percentage of readings dropped (calculated as `totalDropped / totalReceived * 100`)
- `localCacheSize`: Number of entries in the local in-memory cache (only used when `useRedis` is `false`)

Response Example
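An illustrative body consistent with the 90% scenario discussed below; field names are partly assumed:

```json
{
  "enabled": true,
  "minIntervalSeconds": 30,
  "useRedis": true,
  "totalReceived": 12000,
  "totalSaved": 1200,
  "totalDropped": 10800,
  "dropRate": 90.0,
  "localCacheSize": 0
}
```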
In this example, 90% of readings were dropped from TimescaleDB. This is expected when sensors send data every 3 seconds but the interval is set to 30 seconds (90% reduction).
Reset Statistics
Endpoint
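The reset path is not given anywhere in this document; a plausible shape, assuming a reset action under the stats resource:

```
POST /api/v1/rate-limiter/stats/reset
```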
Response
- Status of the operation (`ok`)
- Confirmation message
Response Example
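An illustrative response body, assuming `status` and `message` field names (the message text is made up):

```json
{
  "status": "ok",
  "message": "Rate limiter statistics reset"
}
```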
Resetting statistics only clears the counters (`totalReceived`, `totalSaved`, `totalDropped`). It does not affect the rate limiting behavior or Redis timestamps.

How Rate Limiting Works
The rate limiter uses a minimum-interval check to determine whether a sensor reading should be saved: a reading is written only if at least `min-interval-seconds` have elapsed since the last saved reading from the same sensor.

Decision Flow
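The decision flow can be sketched as follows, assuming an in-memory timestamp store standing in for Redis (class and method names are illustrative, not the service's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the per-sensor decision flow. The real service
// would keep last-save timestamps in Redis when useRedis is true.
class RateLimiterSketch {
    private final long minIntervalMillis;
    private final Map<String, Long> lastSavedAt = new ConcurrentHashMap<>();

    RateLimiterSketch(long minIntervalSeconds) {
        this.minIntervalMillis = minIntervalSeconds * 1000;
    }

    // Returns true if the reading should be written to TimescaleDB.
    // Dropped readings are still cached in Redis and broadcast to WebSocket.
    boolean shouldSave(String greenhouseAndSensorId, long nowMillis) {
        Long last = lastSavedAt.get(greenhouseAndSensorId);
        if (last == null || nowMillis - last >= minIntervalMillis) {
            lastSavedAt.put(greenhouseAndSensorId, nowMillis); // record this save
            return true;
        }
        return false; // dropped from TimescaleDB only
    }
}
```

Note the get-then-put here is not race-free across threads; a production implementation would rely on an atomic Redis operation instead.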
Redis Storage
When `useRedis: true`, the rate limiter stores timestamps in Redis:
Key Pattern:
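A plausible layout, assuming a `rate-limiter:` key prefix (the prefix is not stated in this document; the `{greenhouseId}:{sensorId}` pair is):

```
rate-limiter:{greenhouseId}:{sensorId}  ->  epoch timestamp of the last TimescaleDB save
```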
Example Timeline
| Time | Sensor Reading | Last Save | Elapsed | Action |
|---|---|---|---|---|
| 10:00:00 | 25.5°C | None | N/A | Save (first reading) |
| 10:00:05 | 25.6°C | 10:00:00 | 5s | Drop (< 30s) |
| 10:00:10 | 25.7°C | 10:00:00 | 10s | Drop (< 30s) |
| 10:00:30 | 25.8°C | 10:00:00 | 30s | Save (>= 30s) |
| 10:00:35 | 25.9°C | 10:00:30 | 5s | Drop (< 30s) |
All readings (both saved and dropped) are cached in Redis and broadcast to WebSocket clients. The rate limiter only affects TimescaleDB storage.
Per-Tenant Rate Limits
Rate limiting is applied per sensor, not per tenant. This means:
- Each `{greenhouseId}:{sensorId}` combination has its own rate limit
- Different tenants can have sensors with the same ID without conflicts
- The rate limit is controlled globally via `min-interval-seconds`
Redis Key Examples
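Illustrative keys, with the same assumed `rate-limiter:` prefix and made-up IDs, showing two tenants using the same sensor ID without conflict:

```
rate-limiter:greenhouse-1:temp-01
rate-limiter:greenhouse-2:temp-01
```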
Throttling Responses
The rate limiter does not return HTTP throttling responses. It operates silently:
- No 429 (Too Many Requests) status codes
- No client-side throttling
- No backpressure to MQTT clients
- All MQTT messages are accepted
- All messages are cached in Redis
- All messages are broadcast to WebSocket
- Only TimescaleDB writes are throttled
Why Silent Throttling?
- Real-time monitoring: Clients always see the latest data via WebSocket
- No data loss: Redis cache retains last 1000 messages
- Storage optimization: TimescaleDB only stores time-series data at the configured interval
- MQTT compatibility: Sensors don’t need to handle backpressure
Local Cache Fallback
When Redis is unavailable, the rate limiter automatically falls back to an in-memory cache.

Fallback Behavior
- Thread-safe: uses `ConcurrentHashMap` for multi-threaded access
- Automatic cleanup: removes entries older than `minIntervalSeconds * 2` when the cache exceeds 1000 entries
- Per-pod: each API pod has its own cache (not distributed)
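A minimal sketch of that fallback cache with the cleanup rule described above (class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory fallback: per-sensor last-save timestamps with
// size-triggered eviction of stale entries.
class LocalRateLimitCache {
    private static final int MAX_ENTRIES = 1000;
    private final ConcurrentHashMap<String, Long> lastSavedAt = new ConcurrentHashMap<>();
    private final long minIntervalSeconds;

    LocalRateLimitCache(long minIntervalSeconds) {
        this.minIntervalSeconds = minIntervalSeconds;
    }

    // Record a save; clean up when the cache grows past 1000 entries.
    void recordSave(String sensorKey, long nowMillis) {
        lastSavedAt.put(sensorKey, nowMillis);
        if (lastSavedAt.size() > MAX_ENTRIES) {
            evictStale(nowMillis);
        }
    }

    // Remove entries older than minIntervalSeconds * 2.
    private void evictStale(long nowMillis) {
        long cutoffMillis = nowMillis - minIntervalSeconds * 2 * 1000;
        lastSavedAt.entrySet().removeIf(e -> e.getValue() < cutoffMillis);
    }

    int size() {
        return lastSavedAt.size();
    }
}
```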
Performance Characteristics
Redis Mode (useRedis: true)
- Latency: ~1-2ms per rate limit check (Redis GET + SET)
- Throughput: 10,000+ checks/second per pod
- Scalability: Consistent behavior across multiple pods
- Memory: O(number of unique sensors * 100 bytes)
Local Cache Mode (useRedis: false)
- Latency: ~0.1ms per rate limit check (in-memory)
- Throughput: 100,000+ checks/second per pod
- Scalability: Each pod rate-limits independently
- Memory: O(number of unique sensors * 200 bytes) per pod
Monitoring and Tuning
Recommended min-interval-seconds Values
| Sensor Frequency | Recommended Interval | Storage Reduction |
|---|---|---|
| Every 3 seconds | 30 seconds | 90% |
| Every 5 seconds | 60 seconds | 92% |
| Every 10 seconds | 60 seconds | 83% |
| Every 30 seconds | 300 seconds (5 min) | 90% |
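The reduction column follows from the ratio of the sensor's publish interval to the save interval; a small helper makes the arithmetic explicit (the name is illustrative):

```java
// Expected drop rate (%) when a sensor publishes every sensorIntervalSeconds
// and at most one reading per minIntervalSeconds is saved.
class DropRate {
    static double expectedDropRate(double sensorIntervalSeconds, double minIntervalSeconds) {
        if (sensorIntervalSeconds >= minIntervalSeconds) {
            return 0.0; // every reading arrives after the interval has elapsed
        }
        return (1.0 - sensorIntervalSeconds / minIntervalSeconds) * 100.0;
    }
}
```

For example, a sensor publishing every 3 seconds with a 30-second interval yields (1 - 3/30) * 100 = 90%, matching the table above.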
Calculating Drop Rate
Expected drop rate formula: `(1 - sensor interval / min-interval-seconds) * 100`, when the sensor interval is shorter than the save interval (otherwise nothing is dropped).

Monitoring Queries
Check Redis memory usage with `redis-cli INFO memory`; the size of an individual rate limiter key can be inspected with `MEMORY USAGE <key>`.

Best Practices
Enable Rate Limiting When
- Sensors send data more frequently than needed for analysis (e.g., every 3 seconds)
- TimescaleDB storage costs are a concern
- Query performance on large datasets needs improvement
- You have real-time monitoring via WebSocket (data not lost)
Keep Rate Limiting Disabled When
- You need every sensor reading for compliance/auditing
- Sensors already send data at the desired interval
- TimescaleDB storage and performance are not concerns
- You require complete historical data
Configuration Tips
- Start with a high interval (e.g., 60 seconds) and adjust based on needs
- Monitor the drop rate using `/api/v1/rate-limiter/stats`
- Use Redis in production for consistent multi-pod behavior
- Set `min-interval-seconds` based on your analysis requirements:
  - Hourly dashboards: 60 seconds
  - Daily reports: 300 seconds (5 minutes)
  - Monthly trends: 900 seconds (15 minutes)
TimescaleDB continuous aggregates can further optimize queries by pre-computing hourly/daily averages. See the database migration V11 for details.
Troubleshooting
High Drop Rate
Symptom: Drop rate is higher than expected (e.g., 98% instead of 90%)

Possible Causes:
- Sensor frequency is faster than expected
- Multiple sensors publishing to the same topic
- Clock skew between API pods and sensors
Low Drop Rate
Symptom: Drop rate is lower than expected (e.g., 50% instead of 90%)

Possible Causes:
- `min-interval-seconds` is too low
- Redis is unavailable (fallback to local cache)
- Multiple API pods with `useRedis: false`
No Drops (0%)
Symptom: Drop rate is 0% even with rate limiting enabled

Possible Causes:
- Rate limiter is disabled in configuration
- Sensor frequency is slower than `min-interval-seconds`
- Code path bypasses rate limiter

Check `application.yaml` and verify the sensor publish frequency.