
What is Redis?

Redis (Remote Dictionary Server) is an in-memory, multi-model data store that delivers sub-millisecond latency. The core idea behind Redis is that a cache can also act as a full-fledged database.
Redis is one of the most popular data stores in the world, adopted by high-traffic websites like Airbnb, Uber, and Slack.

Why is Redis So Fast?

There are three main reasons why Redis delivers exceptional performance:

RAM-Based Storage

Redis stores data in RAM, which is at least 1000 times faster than random disk access.

IO Multiplexing

Redis uses I/O multiplexing with a single-threaded event loop, letting one thread serve many connections efficiently without locking overhead.
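The same pattern can be sketched with Python's selectors module: one event loop watches many sockets and dispatches whichever is ready. This is a toy illustration of the idea, not Redis's actual C event loop.

```python
import selectors
import socket

# One event loop watches many sockets and handles whichever becomes ready,
# so a single thread never sits blocked on any one client.
sel = selectors.DefaultSelector()

def on_readable(conn):
    """Serve one ready connection: read a request, write a reply."""
    data = conn.recv(1024)
    if data == b"PING\r\n":
        conn.sendall(b"+PONG\r\n")

# A local socket pair stands in for a real client connection.
client, server_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ, on_readable)

client.sendall(b"PING\r\n")
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)            # dispatch to the registered callback

reply = client.recv(1024)            # the loop answered without blocking
```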

Optimized Data Structures

Internally, Redis uses several efficient low-level encodings for each data type (for example, sorted sets are backed by skip lists), keeping common operations fast.

Redis Data Structures

Redis stores data in key-value format and supports various data structures:
  • Strings: Simple key-value pairs
  • Bitmaps: Efficient for tracking binary states
  • Lists: Ordered collections
  • Sets: Unordered collections of unique items
  • Sorted Sets: Sets with scores for ranking
  • Hashes: Field-value pairs stored under a single key
  • JSON: Native JSON support

Common Use Cases

Redis can be used in a variety of scenarios beyond just caching:

Session Management

Share user session data among different services for consistent user experiences across your application.
# Store session data
redis.set(f"session:{user_id}", json.dumps(session_data), ex=3600)

# Retrieve session data (None if the key has expired)
raw = redis.get(f"session:{user_id}")
session = json.loads(raw) if raw else None

Caching

Cache objects or pages, especially for hotspot data that’s frequently accessed.
# Cache-aside pattern
def get_user(user_id):
    # Try cache first
    cached = redis.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)
    
    # Cache miss - fetch from database
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)
    
    # Store in cache for future requests
    redis.setex(f"user:{user_id}", 3600, json.dumps(user))
    return user
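One refinement worth noting: if many hot keys share the same TTL, they all expire together and hit the database at once. A common mitigation is to add random jitter to the expiration. A minimal sketch; the base and jitter values are illustrative:

```python
import random

def cache_ttl(base=3600, jitter=300):
    """Return a TTL with random jitter so hot keys don't all expire at once."""
    return base + random.randint(0, jitter)

# e.g. redis.setex(f"user:{user_id}", cache_ttl(), json.dumps(user))
```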

Distributed Lock

Use a Redis string to acquire locks among distributed services, ensuring only one service can execute a critical section at a time.
# Acquire lock: NX sets the key only if it doesn't already exist, and EX
# auto-expires it so a crashed holder can't block others forever.
# In practice the token should be unique per client.
lock = redis.set("lock:resource", "token", nx=True, ex=30)

if lock:
    try:
        # Critical section
        process_resource()
    finally:
        # Release lock (ideally after verifying the token is still ours,
        # so we never delete a lock another client has since acquired)
        redis.delete("lock:resource")
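A plain delete can release a lock that another client now holds (if ours expired mid-section). The standard fix is to compare the token and delete atomically in a Lua script. A sketch assuming a redis-py style client (`eval` takes the script, a key count, then keys and arguments); the names are illustrative:

```python
# Release the lock only if the stored token is still ours; the GET and DEL
# run atomically inside the script, so no other client can sneak in between.
RELEASE_SCRIPT = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""

def release_lock(r, key, token):
    # Returns 1 if the lock was ours and is now released, 0 otherwise.
    return r.eval(RELEASE_SCRIPT, 1, key, token)
```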

Counter and Rate Limiter

Count metrics like article views or likes:
# Increment view count
redis.incr(f"article:{article_id}:views")

# Get view count
views = redis.get(f"article:{article_id}:views")
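The same INCR counter extends to a fixed-window rate limiter: count requests per user per window and reject once over the limit. A sketch assuming a redis-py style client; the key name and limits are illustrative:

```python
def allow_request(r, user_id, limit=100, window=60):
    """Allow at most `limit` requests per `window` seconds (fixed window)."""
    key = f"rate:{user_id}"
    count = r.incr(key)          # atomic increment; creates the key at 1
    if count == 1:
        r.expire(key, window)    # start the window on the first request
    return count <= limit
```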

Shopping Cart

Use a Redis Hash to map product IDs to quantities in a shopping cart.
# Add item to cart
redis.hset(f"cart:{user_id}", product_id, quantity)

# Get entire cart
cart = redis.hgetall(f"cart:{user_id}")

# Remove item
redis.hdel(f"cart:{user_id}", product_id)
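Reading the cart back, a checkout step just sums quantity times unit price per product. A sketch assuming a redis-py style client configured to return strings (`decode_responses=True`); `prices` is an illustrative lookup table:

```python
def cart_total(r, user_id, prices):
    """Sum quantity * unit price over every item in the cart hash."""
    cart = r.hgetall(f"cart:{user_id}")      # {product_id: quantity}
    return sum(int(qty) * prices[pid] for pid, qty in cart.items())
```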

Ranking Systems

Use ZSet (Sorted Sets) to sort and rank items:
# Add article with score (timestamp or votes)
redis.zadd("trending:articles", {article_id: score})

# Get top 10 articles
top_articles = redis.zrevrange("trending:articles", 0, 9, withscores=True)
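When the score is a vote count rather than a timestamp, ZINCRBY updates the rank in one atomic step. A sketch assuming a redis-py style client; the key name is illustrative:

```python
def upvote(r, article_id):
    """Add one vote; the sorted set reorders itself automatically."""
    return r.zincrby("trending:articles", 1, article_id)
```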

Message Queue

Use the List data structure to implement a simple message queue.
# Producer - push to queue
redis.lpush("task_queue", json.dumps(task))

# Consumer - blocking pop; returns (queue_name, payload) or None on timeout
item = redis.brpop("task_queue", timeout=5)
if item:
    task = json.loads(item[1])
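A consumer typically wraps the blocking pop in a loop. A sketch assuming a redis-py style client; `handle` is an illustrative callback:

```python
import json

def run_worker(r, handle, queue="task_queue"):
    """Pop and process tasks until a timeout signals the queue is idle."""
    processed = 0
    while True:
        item = r.brpop(queue, timeout=5)   # (queue_name, payload) or None
        if item is None:
            return processed               # idle: stop (or sleep and retry)
        _, payload = item
        handle(json.loads(payload))
        processed += 1
```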

User Retention Tracking

Use Bitmap to represent daily user login and calculate retention:
# Mark user as active on a specific day
redis.setbit("active:2024-02-28", user_id, 1)

# Count active users for the day
active_count = redis.bitcount("active:2024-02-28")

# Check if specific user was active
was_active = redis.getbit("active:2024-02-28", user_id)
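Retention then falls out of bitwise operations: AND two daily bitmaps and count the surviving bits. A sketch assuming a redis-py style client (`BITOP` writes its result to a destination key); the key names are illustrative:

```python
def retained_users(r, day_a, day_b, dest="tmp:retained"):
    """Count users whose activity bit is set on both days."""
    r.bitop("AND", dest, day_a, day_b)   # dest = day_a AND day_b
    return r.bitcount(dest)
```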

Data Persistence

While Redis is an in-memory database, it provides mechanisms to persist data to disk:
AOF (Append-Only File)

Redis executes commands to modify data in memory first, then appends them to the log file. AOF records the commands instead of the data, simplifying recovery.
Pros:
  • More durable - logs every write operation
  • Easy to understand and debug
Cons:
  • Larger file size
  • Slower recovery for large datasets
RDB (Snapshotting)

RDB records snapshots of the data at specific points in time. When recovery is needed, the snapshot is loaded into memory.
How it works:
  1. The main thread forks a child process to perform bgsave
  2. Sub-process reads data and writes to RDB file
  3. Main thread continues serving requests
  4. If data is modified, a copy is created (copy-on-write)
Pros:
  • Fast recovery
  • Compact file size
Cons:
  • Potential data loss between snapshots
Data persistence is not performed on the critical path and doesn’t block the write process in Redis.
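Both mechanisms are controlled from redis.conf. A minimal example; the values shown are illustrative defaults, not a recommendation:

```conf
appendonly yes          # enable AOF
appendfsync everysec    # fsync the AOF once per second (speed/durability balance)
save 900 1              # RDB snapshot if >= 1 change in 900s
save 300 10             # ... or >= 10 changes in 300s
save 60 10000           # ... or >= 10000 changes in 60s
```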

Redis Architecture Evolution


2010 - Standalone Redis

When Redis 1.0 was released, the architecture was simple: a single instance used as a cache. However, restarting Redis meant losing all data.

2013 - Persistence & Replication

Redis 2.8 introduced:
  • RDB snapshots to persist data
  • AOF (Append-Only-File) logging
  • Replication with primary-replica architecture for high availability

2013 - Sentinel

Redis 2.8 added Sentinel to monitor instances and perform:
  • Monitoring
  • Notification
  • Automatic failover
  • Configuration management

2015 - Cluster

Redis 3.0 introduced Redis Cluster, a distributed solution based on sharding:
  • Data is divided into 16,384 slots
  • Each node is responsible for a portion of the slots
  • Automatic resharding support
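The slot for a key is CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant; a hash tag like {user1} restricts hashing to the braced part so related keys land in the same slot. A pure-Python sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384; if the key has a non-empty {...}
    hash tag, only the text inside the braces is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```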

Modern Redis (2017-2020)

  • Redis 5.0: Added Stream data type for event sourcing
  • Redis 6.0: Introduced multi-threaded I/O in network module

Redis Modules

Redis modules extend functionality beyond core features:

RediSearch

Full-text search, secondary indexing, and query engine

RedisJSON

Native JSON document storage and manipulation

RedisGraph

Graph database for relationship queries

RedisBloom

Probabilistic data structures (Bloom filters, Cuckoo filters)

RedisTimeSeries

Time-series data storage and queries

RedisAI

Machine learning model serving

Best Practices

Always set expiration times on cached data to prevent memory bloat and stale data issues.

1. Choose the right data structure: select the appropriate Redis data type for your use case; don't default to strings for everything.
2. Set memory limits: configure maxmemory and choose an appropriate eviction policy.
3. Monitor key metrics: track memory usage, hit rate, eviction rate, and latency.
4. Use connection pooling: reuse connections instead of creating new ones for each operation.
5. Avoid big keys: large keys can impact Redis performance; split large data across multiple keys.

Next Steps

Caching Strategies

Learn different caching patterns and when to use them

Cache Eviction

Understand how to manage cache size with eviction policies

CDN Caching

Explore content delivery networks for edge caching
