The Currency Converter API uses Redis as a high-performance cache to minimize external API calls, reduce response times, and control costs. This page documents the caching patterns, TTL configurations, and cache key design.

Cache overview

Redis serves as a write-through cache for exchange rates and supported currencies:
  • Exchange rates: Cached for 5 minutes
  • Supported currencies: Cached for 24 hours
  • Data format: JSON with Decimal serialized as strings for precision
In this write-through pattern, every fresh fetch from a provider is written to Redis (and the database) at the same time, so a second request for the same pair within the TTL window is served straight from cache.

Cache key structure

The cache uses a hierarchical key naming scheme for clarity and organization. From infrastructure/cache/redis_cache.py:17:
def _make_rate_key(self, from_currency: str, to_currency: str) -> str:
    return f'rate:{from_currency}:{to_currency}'

Key patterns

| Pattern | Example | Purpose | TTL |
|---|---|---|---|
| rate:{from}:{to} | rate:USD:EUR | Exchange rate data | 5 minutes |
| currencies:supported | currencies:supported | List of currency codes | 24 hours |
Using structured keys (colon-separated namespaces) makes debugging easier and enables pattern-based operations like KEYS rate:* in development.
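To illustrate how the glob pattern above selects keys (Redis KEYS and SCAN use the same matching rules), here is a standalone sketch using Python's fnmatch against a made-up key list. Note that in production you should prefer SCAN over KEYS, since KEYS blocks the server while it walks the entire keyspace.

```python
import fnmatch

# Illustration only: glob-matching cache keys the way Redis KEYS/SCAN do.
# The key list here is made up for the example.
keys = ["rate:USD:EUR", "rate:USD:GBP", "currencies:supported"]
rate_keys = [k for k in keys if fnmatch.fnmatch(k, "rate:*")]
```

The colon namespaces mean one pattern cleanly selects a whole category of keys without touching the others.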

TTL configuration

Time-to-live values are configured in the RedisCacheService constructor. From infrastructure/cache/redis_cache.py:11:
class RedisCacheService:
    def __init__(self, redis_client: redis.Redis):
        self.redis = redis_client
        self.rate_ttl = timedelta(minutes=5)
        self.currency_ttl = timedelta(hours=24)

TTL rationale

| Cache Type | TTL | Reasoning |
|---|---|---|
| Exchange rates | 5 minutes | Balances freshness with API cost. Most real-world use cases don’t require second-level accuracy. |
| Supported currencies | 24 hours | Currency lists rarely change. Long TTL reduces database queries and provider API calls. |
Exchange rates update frequently but not instantly. A 5-minute window provides:
  • Fresh enough for most applications (e-commerce, travel booking, reporting)
  • Cost effective by limiting API calls to 1 per pair per 5 minutes
  • Fast responses for repeated conversions during the cache window
For high-frequency trading or real-time applications, you can reduce this to 1 minute or implement a separate “real-time” endpoint.
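One way to support such tuning is to make the TTLs constructor parameters instead of hard-coded values. This is a sketch, not the project's actual API; the class and parameter names are illustrative.

```python
from datetime import timedelta

# Sketch: configurable TTLs so deployments can tune freshness vs. cost
# without code changes. Names are illustrative, not the shipped API.
class ConfigurableCacheService:
    def __init__(self, redis_client,
                 rate_ttl: timedelta = timedelta(minutes=5),
                 currency_ttl: timedelta = timedelta(hours=24)):
        self.redis = redis_client
        self.rate_ttl = rate_ttl
        self.currency_ttl = currency_ttl

# A lower-latency deployment shortens only the rate TTL:
svc = ConfigurableCacheService(redis_client=None, rate_ttl=timedelta(minutes=1))
```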

Cache operations

Reading from cache

The get_rate method retrieves cached rates and deserializes them into domain objects. From infrastructure/cache/redis_cache.py:20:
async def get_rate(self, from_currency: str, to_currency: str) -> ExchangeRate | None:
    key = self._make_rate_key(from_currency, to_currency)
    data = await self.redis.get(key)

    if not data:
        return None

    try:
        rate_dict = json.loads(data)
        return ExchangeRate(
            from_currency=rate_dict['from_currency'],
            to_currency=rate_dict['to_currency'],
            rate=Decimal(rate_dict['rate']),
            timestamp=datetime.fromisoformat(rate_dict['timestamp']),
            source=rate_dict['source'],
        )
    except json.decoder.JSONDecodeError as e:
        raise CacheError('Invalid json data decoded') from e
Decimal is serialized as a string (not float) to preserve precision. Financial calculations require exact decimal arithmetic.

Writing to cache

The set_rate method stores rates with automatic expiration. From infrastructure/cache/redis_cache.py:39:
async def set_rate(self, rate: ExchangeRate) -> None:
    key = self._make_rate_key(rate.from_currency, rate.to_currency)

    rate_dict = {
        'from_currency': rate.from_currency,
        'to_currency': rate.to_currency,
        'rate': str(rate.rate),
        'timestamp': rate.timestamp.isoformat(),
        'source': rate.source,
    }

    await self.redis.setex(key, self.rate_ttl, json.dumps(rate_dict))
Using setex (SET with EXpiry) atomically sets the value and TTL in a single command, preventing race conditions.
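The alternative, a SET followed by a separate EXPIRE, can leave a key with no TTL if the process dies between the two commands; that key would then never expire. The semantics of setex can be illustrated with a toy in-memory stand-in (FakeCache is illustrative, not part of the codebase):

```python
import time

# Toy in-memory stand-in illustrating setex semantics: value and expiry
# are stored together, so a key can never exist without a TTL.
class FakeCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

cache = FakeCache()
cache.setex("rate:USD:EUR", 300, '{"rate": "0.9255"}')
```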

Supported currencies cache

Supported currencies are cached as a JSON array. From infrastructure/cache/redis_cache.py:52:
async def get_supported_currencies(self) -> list[str] | None:
    data = await self.redis.get('currencies:supported')
    if not data:
        return None
    try:
        return json.loads(data)
    except json.decoder.JSONDecodeError as e:
        raise CacheError('Invalid json data decoded') from e

async def set_supported_currencies(self, currencies: list[str]) -> None:
    await self.redis.setex('currencies:supported', self.currency_ttl, json.dumps(currencies))

Cache integration in rate service

The RateService checks the cache before fetching from providers. From application/services/rate_service.py:30:
async def get_rate(self, from_currency: str, to_currency: str) -> ExchangeRate:
    await self.currency_service.validate_currency(from_currency)
    await self.currency_service.validate_currency(to_currency)

    cached_rate = await self.repository.cache.get_rate(from_currency, to_currency)
    if cached_rate:
        logger.info(f'Cache HIT: {from_currency}/{to_currency}')
        return cached_rate

    logger.info(f'Cache MISS: {from_currency}/{to_currency}, fetching from providers')

    aggregated = await self._aggregate_rates(from_currency, to_currency)

    rate = ExchangeRate(
        from_currency=aggregated.from_currency,
        to_currency=aggregated.to_currency,
        rate=aggregated.rate,
        timestamp=aggregated.timestamp,
        source='averaged' if len(aggregated.sources) > 1 else aggregated.sources[0],
    )

    await self.repository.save_rate(rate)

    return rate

Cache flow diagram

The validation step itself queries the cache (for supported currencies). Checking the rate cache first would let the service return immediately on a hit, skipping even the validation lookups. In this implementation, however, validation runs first so that invalid currencies fail fast with a clear error.

Cache warming

Supported currencies are cached during application startup to avoid cold-start latency. From application/services/currency_service.py:38:
await self.repository.save_supported_currencies(currency_models)
logger.info(f'Saved {len(supported_codes)} supported currencies.')
The repository’s save_supported_currencies method updates both the database and Redis:
# From repository implementation
await self.cache_service.set_supported_currencies([c.code for c in currencies])
This cache warming ensures the first requests after deployment are just as fast as subsequent requests.
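The warm-up step can be sketched in isolation like this; warm_supported_currencies and the currency subset are illustrative, and a plain dict stands in for Redis so the logic is testable without a live server.

```python
import json

# Illustrative subset of currency codes; the real list comes from providers.
SUPPORTED_CODES = ["USD", "EUR", "GBP", "JPY"]

def warm_supported_currencies(cache: dict) -> None:
    """Populate the currency list before the first request arrives."""
    cache["currencies:supported"] = json.dumps(SUPPORTED_CODES)

cache: dict = {}
warm_supported_currencies(cache)
```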

Data serialization

Decimal precision

Financial calculations require exact decimal arithmetic. Floating-point numbers introduce rounding errors.
# Correct: Serialize Decimal as string
'rate': str(rate.rate)  # "0.925500"

# Incorrect: Serialize Decimal as float
'rate': float(rate.rate)  # binary float cannot represent 0.9255 exactly (precision loss)
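The string round trip can be verified directly: a Decimal serialized as a string survives JSON with its exact value and its trailing-zero scale intact.

```python
import json
from decimal import Decimal

# Round-trip demo: Decimal -> str -> JSON -> str -> Decimal is lossless.
rate = Decimal("0.925500")
payload = json.dumps({"rate": str(rate)})
restored = Decimal(json.loads(payload)["rate"])
```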
From infrastructure/cache/redis_cache.py:45:
rate_dict = {
    'from_currency': rate.from_currency,
    'to_currency': rate.to_currency,
    'rate': str(rate.rate),  # Preserve precision
    'timestamp': rate.timestamp.isoformat(),
    'source': rate.source,
}

DateTime serialization

Datetimes are stored as ISO 8601 strings for readability and compatibility:
# Serialization
'timestamp': rate.timestamp.isoformat()  # "2026-03-04T15:30:00.123456"

# Deserialization
timestamp=datetime.fromisoformat(rate_dict['timestamp'])
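This round trip is also lossless: fromisoformat parses exactly what isoformat emits, down to microseconds.

```python
from datetime import datetime

# Round-trip demo: isoformat() output parses back to an equal datetime.
ts = datetime(2026, 3, 4, 15, 30, 0, 123456)
encoded = ts.isoformat()
restored = datetime.fromisoformat(encoded)
```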

Cache invalidation

The system uses TTL-based expiration rather than explicit invalidation:
  • No manual cache invalidation needed
  • Redis automatically removes expired keys
  • Fresh data is fetched transparently on cache miss
If you need to force a refresh (e.g., after detecting stale data), you can implement a cache flush endpoint, but this is not included by default.
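If you did add one, the core of such a helper is a single DEL on the pair's key. This sketch is not part of the shipped API; flush_rate is an illustrative name, and a plain dict stands in for Redis so the logic is testable offline.

```python
# Sketch of an explicit invalidation helper (not included by default).
# A dict stands in for the Redis client; with redis-py this would be
# a DEL on the same key.
def flush_rate(cache: dict, from_currency: str, to_currency: str) -> bool:
    """Delete one cached pair; return True if a key was removed."""
    return cache.pop(f"rate:{from_currency}:{to_currency}", None) is not None

cache = {"rate:USD:EUR": '{"rate": "0.9255"}'}
removed = flush_rate(cache, "USD", "EUR")
```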

Error handling

Cache errors are wrapped in domain exceptions and logged. From infrastructure/cache/redis_cache.py:36:
except json.decoder.JSONDecodeError as e:
    raise CacheError('Invalid json data decoded') from e
| Error Type | Cause | Handling |
|---|---|---|
| json.JSONDecodeError | Corrupted cache data | Raise CacheError, log error, treat as cache miss |
| ConnectionError | Redis unavailable | Propagate exception, return 503 |
| TimeoutError | Slow Redis response | Propagate exception, consider fallback to DB |
Cache errors are non-fatal for read operations. If Redis is unavailable, the service can fall back to fetching from providers (at the cost of increased latency).
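One way to implement that fallback is to catch connection and timeout errors on the read path and treat them as cache misses, so the caller proceeds to the providers. This is a sketch; safe_get and the failing stub are illustrative names, not the project's actual API.

```python
# Sketch: degrade Redis failures to cache misses on the read path.
class DownCache:
    """Stub that behaves like a client whose Redis is unreachable."""
    def get(self, key):
        raise ConnectionError("redis unavailable")

def safe_get(cache, key):
    try:
        return cache.get(key)
    except (ConnectionError, TimeoutError):
        return None  # treated as a miss; caller falls back to providers

result = safe_get(DownCache(), "rate:USD:EUR")
```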

Performance impact

Cache hit scenario

Request → Redis → Response (< 10ms)
No external API calls, no database queries (except initial currency validation).

Cache miss scenario

Request → Redis MISS → Providers (parallel) → Redis SET → DB INSERT → Response (200-500ms)
Subsequent requests within 5 minutes hit cache.

Cache effectiveness metrics

| Metric | Target | Measurement |
|---|---|---|
| Hit rate | > 80% | cache_hits / total_requests |
| Response time (hit) | < 50ms | P95 latency for cached requests |
| Response time (miss) | < 1000ms | P95 latency for provider fetch |

Configuration best practices

Production TTL

Keep 5-minute default for most use cases. Adjust based on traffic patterns.

Redis memory

Monitor memory usage. Each rate is ~200 bytes. Plan capacity accordingly.
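A back-of-envelope estimate shows why the rate cache itself is tiny. Both figures below are rough assumptions: ~200 bytes per cached rate (from above) and ~170 active currency codes.

```python
# Rough capacity estimate: every directed currency pair cached at once.
# Both inputs are assumptions, not measured values.
currencies = 170
bytes_per_rate = 200
pairs = currencies * (currencies - 1)            # directed pairs, e.g. USD->EUR and EUR->USD
total_mb = pairs * bytes_per_rate / 1024 / 1024  # worst-case rate-cache footprint
```

Even fully populated, the rate keys stay in the single-digit-megabyte range; memory pressure is far more likely to come from other workloads sharing the Redis instance.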

Eviction policy

Use allkeys-lru to automatically remove least-used keys when memory is full.

Persistence

Enable RDB snapshots for cache recovery after restarts.

Monitoring and observability

Key metrics to track:
# Log cache hits/misses for observability
logger.info(f'Cache HIT: {from_currency}/{to_currency}')
logger.info(f'Cache MISS: {from_currency}/{to_currency}, fetching from providers')
Implement metrics collection for:
  • Cache hit rate by currency pair
  • Average response time (cached vs uncached)
  • Redis connection pool statistics
  • TTL distribution (how long until expiration)
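A minimal in-process hit-rate counter can be sketched as follows; CacheStats is an illustrative name, not an existing module, and a real deployment would export these counters to a metrics backend such as Prometheus.

```python
# Minimal hit-rate counter sketch (illustrative, not the shipped code).
class CacheStats:
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool) -> None:
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

stats = CacheStats()
for hit in [True, True, True, False]:
    stats.record(hit)
```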

Redis configuration

Connection is established at startup from api/dependencies.py:42:
deps.redis_client = Redis.from_url(settings.REDIS_URL, decode_responses=True)
deps.redis_cache = RedisCacheService(deps.redis_client)
decode_responses=True automatically decodes Redis bytes to strings, simplifying JSON parsing.
A production redis.conf reflecting the best practices above:
# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
save 900 1
save 300 10
save 60 10000

Troubleshooting

Redis connection issues
  • Check Redis connection in application logs
  • Verify REDIS_URL environment variable
  • Test Redis connectivity: redis-cli ping

Stale or missing rates
  • Check if TTL is set correctly: redis-cli TTL rate:USD:EUR
  • Verify TTL configuration in RedisCacheService
  • Check if rates are being written after provider fetch
  • Review logs for cache write errors
  • Manually flush keys: redis-cli DEL rate:USD:EUR

Memory growth
  • Set maxmemory and maxmemory-policy in redis.conf
  • Monitor key count: redis-cli DBSIZE
  • Check for keys without TTL: redis-cli KEYS * | xargs redis-cli TTL
  • Review eviction stats: redis-cli INFO stats
