Rate Limiting
KoreShield uses Redis for distributed rate limiting and shared statistics. When Redis is enabled, multiple proxy instances can enforce consistent rate limits across your deployment.

Enable Redis
Configure Redis in the redis section of your config.yaml:
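A minimal sketch of this section, assuming the key names `enabled` and `url` based on the parameter descriptions below (adjust to your actual schema):

```yaml
redis:
  # Enable Redis-based distributed rate limiting
  enabled: true
  # Redis connection URL (redis://, rediss://, or unix:// scheme)
  url: "redis://localhost:6379/0"
```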
Configuration Parameters
- Enable Redis-based distributed rate limiting
- Redis connection URL. Supports redis://, rediss:// (TLS), and unix:// schemes. Format: redis://[username:password@]host:port/database

Connection URL Formats
Standard Redis
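For example, a local unauthenticated instance (host and database are placeholders):

```
redis://localhost:6379/0
```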
Redis with Authentication
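With a username and password embedded in the URL (placeholder credentials and host):

```
redis://koreshield:s3cret-password@redis.example.com:6379/0
```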
Redis with TLS (Production)
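The same format with the TLS scheme (placeholder credentials and host):

```
rediss://koreshield:s3cret-password@redis.example.com:6379/0
```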
The rediss:// scheme enables TLS encryption. Always use TLS in production environments.

Unix Socket
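A Unix socket URL might look like the following (the socket path is an assumption; match your Redis configuration):

```
unix:///var/run/redis/redis.sock
```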
Production Configuration
Managed Redis Service (Recommended)
Use a managed Redis service for production deployments.

Popular managed Redis services:
- AWS ElastiCache - Fully managed Redis on AWS
- Redis Cloud - Official Redis managed service
- Azure Cache for Redis - Microsoft Azure Redis service
- Google Cloud Memorystore - Google Cloud Redis service
High Availability Setup
For production, use Redis Sentinel or Redis Cluster.

Connection Pooling
KoreShield automatically manages connection pooling. For high-traffic deployments, monitor Redis connection metrics.

Production Guidance
How do I secure Redis connections?
Follow these security best practices:
- Enable TLS - Use rediss:// URLs
- Require Authentication - Set a strong password
- Network Isolation - Run Redis in a private network
- Firewall Rules - Restrict access to KoreShield instances
- Regular Updates - Keep Redis version current
How do I ensure all instances use the same Redis?
All KoreShield proxy instances must point to the same Redis instance or cluster:
- Use a shared configuration file or environment variables
- Verify connection in startup logs: grep "Redis" koreshield.log
- Test with multiple instances to confirm rate limits are shared
What Redis memory settings should I use?
Configure Redis memory limits and eviction policies.

Recommended settings:
- maxmemory: 2-4GB for typical deployments
- maxmemory-policy: volatile-lru (evict least recently used keys with TTL)
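In redis.conf, these recommendations would look like:

```
maxmemory 2gb
maxmemory-policy volatile-lru
```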
How do I monitor Redis performance?
Monitor key Redis metrics and set up alerts for:
- High memory usage (>80%)
- Connection errors
- Slow queries
- Replication lag (if using HA)
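The metrics above can be spot-checked with redis-cli against a running instance (a sketch; wire the same fields into your monitoring system for alerting):

```shell
# Memory usage vs. configured limit
redis-cli INFO memory | grep -E 'used_memory_human|maxmemory_human'

# Throughput and hit/miss counters
redis-cli INFO stats | grep -E 'instantaneous_ops_per_sec|keyspace_hits|keyspace_misses'

# Client connections
redis-cli INFO clients | grep connected_clients

# Replication state (if using HA)
redis-cli INFO replication | grep -E '^role|master_repl_offset'
```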
Validation
Verify Redis Connectivity
Check KoreShield startup logs to confirm the Redis connection (e.g. grep "Redis" koreshield.log).

Test Rate Limiting
Load test to verify requests are throttled as configured. Rate limits should be enforced consistently across all KoreShield instances when Redis is properly configured.
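A quick load-test sketch with curl (the endpoint and port are hypothetical; substitute a route your deployment actually serves):

```shell
# Send 50 rapid requests to one instance and tally the status codes
for i in $(seq 1 50); do
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/v1/test
done | sort | uniq -c
# 429 responses appearing after the configured threshold indicate throttling
```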
Distributed Behavior Test
Verify rate limiting works across multiple instances.

Send Traffic to Different Instances
Route requests to different instances using a load balancer or manual testing
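Assuming two instances listen on ports 8080 and 8081 (hypothetical), alternating requests between them should consume a single shared budget:

```shell
# Alternate requests between two KoreShield instances
for i in $(seq 1 50); do
  curl -s -o /dev/null -w '%{http_code}\n' "http://localhost:808$((i % 2))/v1/test"
done | sort | uniq -c
# If 429s start sooner than one instance's limit alone would trigger,
# the counters are shared through Redis
```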
Troubleshooting
Connection Failures
Common connection issues include refused connections (verify host, port, and that Redis is running), authentication failures (check credentials), and TLS errors (confirm certificates and the rediss:// scheme).

Performance Issues

If Redis is slow:
- Check latency: redis-cli --latency
- Review slow queries: redis-cli SLOWLOG GET 10
- Monitor memory: redis-cli INFO memory
- Check network: Ensure Redis is on the same network/region
Data Persistence
Rate limiting data is ephemeral (TTL-based). You don’t need Redis persistence (RDB/AOF) for rate limiting unless you want to preserve limits across Redis restarts.
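For a rate-limiting-only deployment, persistence can be switched off entirely in redis.conf:

```
# Disable RDB snapshots
save ""
# Disable the append-only file
appendonly no
```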
Environment Variables
Use environment variables for sensitive connection details such as the Redis URL and password.

Related Documentation
Redis Integration
Detailed Redis setup and integration guide
General Settings
Configure other global settings
Monitoring
Monitor rate limiting metrics
Production Checklist
Complete production deployment guide