Migrating to Caffeine is straightforward thanks to its familiar API design. This guide covers migration from popular caching solutions with before/after examples.

Why Migrate to Caffeine?

Better Performance

20-30% higher throughput and lower latency than alternatives

Superior Hit Rates

W-TinyLFU eviction policy achieves 20-30% better hit rates than LRU

Modern API

Java 8+ features including lambdas, CompletableFuture, and streams

Active Development

Regular updates, bug fixes, and new features

Migration from Guava Cache

Caffeine provides a Guava adapter for seamless migration.

Basic Cache Migration

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .recordStats()
    .build();

// Get with manual loading
Value value = cache.getIfPresent(key);
if (value == null) {
  value = loadValue(key);
  cache.put(key, value);
}
The basic API is nearly identical. The main difference: Caffeine's builder takes a Duration instead of a (long, TimeUnit) pair, for a more modern API.
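For comparison, a sketch of the same cache in Caffeine (`Key`, `Value`, and `loadValue` are the same placeholders used above):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

Cache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(Duration.ofMinutes(10))
    .recordStats()
    .build();

// Get with manual loading (unchanged from the Guava version)
Value value = cache.getIfPresent(key);
if (value == null) {
  value = loadValue(key);
  cache.put(key, value);
}
```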

Loading Cache Migration

import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<Key, Value> cache = CacheBuilder.newBuilder()
    .maximumSize(10_000)
    .build(new CacheLoader<Key, Value>() {
      @Override
      public Value load(Key key) throws Exception {
        return loadValue(key);
      }
      
      @Override
      public Map<Key, Value> loadAll(Iterable<? extends Key> keys) {
        return loadValues(keys);
      }
    });

// Automatic loading
Value value = cache.get(key);
Map<Key, Value> values = cache.getAll(keys);

Lambda-Based Loading (Modern Style)

LoadingCache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .build(key -> loadValue(key));
Caffeine’s builder accepts lambda expressions directly, eliminating the need for anonymous inner classes or adapter methods.

Removal Listener Migration

import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

Cache<Key, Value> cache = CacheBuilder.newBuilder()
    .removalListener(new RemovalListener<Key, Value>() {
      @Override
      public void onRemoval(RemovalNotification<Key, Value> notification) {
        Key key = notification.getKey();
        Value value = notification.getValue();
        RemovalCause cause = notification.getCause();
        handleRemoval(key, value, cause);
      }
    })
    .build();
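In Caffeine, the anonymous class above collapses to a single lambda (a sketch; `handleRemoval` is the same placeholder handler as in the Guava version):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

Cache<Key, Value> cache = Caffeine.newBuilder()
    // Listener receives the key, value, and removal cause directly
    .removalListener((Key key, Value value, RemovalCause cause) ->
        handleRemoval(key, value, cause))
    .build();
```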

Using the Guava Adapter

For gradual migration, use Caffeine’s Guava adapter:
Step 1: Add Guava adapter dependency

<dependency>
  <groupId>com.github.ben-manes.caffeine</groupId>
  <artifactId>guava</artifactId>
  <version>3.2.3</version>
</dependency>
Step 2: Wrap Caffeine cache as Guava cache

import com.github.benmanes.caffeine.guava.CaffeinatedGuava;

// Create Caffeine cache
com.github.benmanes.caffeine.cache.Cache<Key, Value> caffeineCache = 
    Caffeine.newBuilder()
        .maximumSize(10_000)
        .build();

// Wrap as Guava cache
com.google.common.cache.Cache<Key, Value> guavaCache = 
    CaffeinatedGuava.build(caffeineCache);

// Use Guava API
guavaCache.put(key, value);
Value value = guavaCache.getIfPresent(key);
Step 3: Migrate incrementally

Replace Guava caches one at a time while maintaining the same API. Once migration is complete, remove the adapter and use Caffeine's API directly.
The Guava adapter provides 100% API compatibility, making it perfect for large codebases where immediate full migration isn’t feasible.

Migration from ConcurrentHashMap

If you’re using ConcurrentHashMap as a cache without eviction, Caffeine provides significant benefits.

Basic Map to Cache

import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<Key, Value> map = new ConcurrentHashMap<>();

// Manual get-or-compute
Value value = map.get(key);
if (value == null) {
  value = loadValue(key);
  Value existing = map.putIfAbsent(key, value);
  if (existing != null) {
    value = existing;
  }
}

// Or using computeIfAbsent
Value value = map.computeIfAbsent(key, k -> loadValue(k));

Adding Eviction

// Manual size-based eviction
ConcurrentHashMap<Key, CacheEntry> map = new ConcurrentHashMap<>();
AtomicInteger size = new AtomicInteger();

void put(Key key, Value value) {
  map.put(key, new CacheEntry(value, System.currentTimeMillis()));
  
  if (size.incrementAndGet() > MAX_SIZE) {
    // Manual eviction - inefficient!
    map.entrySet().stream()
        .min(Comparator.comparingLong(e -> e.getValue().timestamp))
        .ifPresent(e -> {
          map.remove(e.getKey());
          size.decrementAndGet();
        });
  }
}
Compared to the manual approach, Caffeine gives you:
  • Optimal eviction policy: W-TinyLFU vs. simple timestamp-based
  • O(1) complexity: No need to scan the entire map
  • Better hit rates: 20-30% improvement over LRU
  • Automatic: No manual bookkeeping
  • Concurrent: Thread-safe without explicit synchronization
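The entire `put` method above reduces to a bounded cache (a sketch; `MAX_SIZE` is the same constant used in the manual version):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

Cache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(MAX_SIZE)
    .build();

cache.put(key, value);  // W-TinyLFU evicts automatically when the bound is exceeded
```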

Adding Expiration

static class CacheEntry {
  Value value;
  long timestamp;
  
  boolean isExpired() {
    return System.currentTimeMillis() - timestamp > EXPIRATION_MS;
  }
}

ConcurrentHashMap<Key, CacheEntry> map = new ConcurrentHashMap<>();

Value get(Key key) {
  CacheEntry entry = map.get(key);
  if (entry == null || entry.isExpired()) {
    if (entry != null) {
      map.remove(key);  // Manual cleanup
    }
    return null;
  }
  return entry.value;
}

// Need periodic cleanup task
scheduledExecutor.scheduleAtFixedRate(() -> {
  map.entrySet().removeIf(e -> e.getValue().isExpired());
}, 1, 1, TimeUnit.MINUTES);
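With Caffeine, the wrapper class, the expiry check, and the scheduled cleanup task all disappear (a sketch; `EXPIRATION_MS` is the same constant as above):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

Cache<Key, Value> cache = Caffeine.newBuilder()
    .expireAfterWrite(Duration.ofMillis(EXPIRATION_MS))
    .build();

// Expired entries are treated as absent; cleanup is amortized into normal operations
Value value = cache.getIfPresent(key);
```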

Migration from Ehcache

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.*;
import java.time.Duration;

CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
    .build(true);

Cache<Key, Value> cache = cacheManager.createCache("myCache",
    CacheConfigurationBuilder.newCacheConfigurationBuilder(
        Key.class, Value.class,
        ResourcePoolsBuilder.heap(10_000))
        .withExpiry(ExpiryPolicyBuilder.timeToLiveExpiration(
            Duration.ofMinutes(10)))
        .build());

// Usage
cache.put(key, value);
Value value = cache.get(key);
Caffeine’s API is significantly simpler than Ehcache while providing better performance. No separate CacheManager needed.
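For comparison, the equivalent configuration in Caffeine (same size bound and TTL; no CacheManager or cache name required):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

Cache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(Duration.ofMinutes(10))
    .build();

// Usage is unchanged apart from getIfPresent
cache.put(key, value);
Value value = cache.getIfPresent(key);
```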

Migration from JSR-107 (JCache)

Caffeine supports JSR-107 through an adapter:
Step 1: Add JCache adapter dependency

<dependency>
  <groupId>com.github.ben-manes.caffeine</groupId>
  <artifactId>jcache</artifactId>
  <version>3.2.3</version>
</dependency>
<dependency>
  <groupId>javax.cache</groupId>
  <artifactId>cache-api</artifactId>
  <version>1.1.1</version>
</dependency>
Step 2: Configure Caffeine as JCache provider

import javax.cache.Caching;
import javax.cache.CacheManager;
import javax.cache.configuration.MutableConfiguration;

CacheManager cacheManager = Caching.getCachingProvider(
    "com.github.benmanes.caffeine.jcache.spi.CaffeineCachingProvider"
).getCacheManager();

MutableConfiguration<Key, Value> config = 
    new MutableConfiguration<Key, Value>()
        .setTypes(Key.class, Value.class)
        .setStatisticsEnabled(true);

javax.cache.Cache<Key, Value> cache = 
    cacheManager.createCache("myCache", config);
Step 3: Use standard JCache API

// Standard JSR-107 API
cache.put(key, value);
Value value = cache.get(key);
cache.remove(key);
The JCache adapter allows you to use Caffeine with JCache-compatible frameworks like Spring Cache.

Common Migration Patterns

Pattern 1: Simple Cache

// Any library → Caffeine
Cache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .build();

cache.put(key, value);
Value value = cache.getIfPresent(key);

Pattern 2: Loading Cache

// Any library → Caffeine
LoadingCache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .build(key -> loadValue(key));

Value value = cache.get(key);  // Auto-loads if missing

Pattern 3: Async Loading Cache

// Caffeine's unique async capabilities
AsyncLoadingCache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .buildAsync((key, executor) -> 
        CompletableFuture.supplyAsync(() -> loadValue(key), executor));

CompletableFuture<Value> future = cache.get(key);

Pattern 4: Write-Behind on Eviction

// A removal listener persists entries as they are evicted.
// Note: this is write-behind, not true write-through (which writes on every put).
Cache<Key, Value> cache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .removalListener((key, value, cause) -> {
      if (cause.wasEvicted()) {
        writeToDatabase(key, value);
      }
    })
    .build();

API Mapping Reference

Core Operations

Operation           Guava               Caffeine
Get if present      getIfPresent(key)   getIfPresent(key)
Get with load       get(key)            get(key)
Get with function   get(key, callable)  get(key, function)
Put                 put(key, value)     put(key, value)
Put all             putAll(map)         putAll(map)
Invalidate          invalidate(key)     invalidate(key)
Invalidate all      invalidateAll()     invalidateAll()
Size                size()              estimatedSize()
Stats               stats()             stats()

Configuration

Feature               Guava                               Caffeine
Max size              maximumSize(long)                   maximumSize(long)
Max weight            maximumWeight(long)                 maximumWeight(long)
Weigher               weigher(Weigher)                    weigher(Weigher)
Expire after write    expireAfterWrite(duration, unit)    expireAfterWrite(Duration)
Expire after access   expireAfterAccess(duration, unit)   expireAfterAccess(Duration)
Custom expiry         N/A                                 expireAfter(Expiry)
Refresh               refreshAfterWrite(duration, unit)   refreshAfterWrite(Duration)
Weak keys             weakKeys()                          weakKeys()
Weak values           weakValues()                        weakValues()
Soft values           softValues()                        softValues()
Stats                 recordStats()                       recordStats()
Removal listener      removalListener(listener)           removalListener(listener)
Most method names are identical. Main differences are using Duration instead of TimeUnit and estimatedSize() instead of size().

Migration Checklist

Step 1: Update dependencies

Replace old caching library with Caffeine in your build file.
<dependency>
  <groupId>com.github.ben-manes.caffeine</groupId>
  <artifactId>caffeine</artifactId>
  <version>3.2.3</version>
</dependency>
Step 2: Update imports

Change package imports from old library to Caffeine.
// Before
import com.google.common.cache.*;

// After
import com.github.benmanes.caffeine.cache.*;
Step 3: Update builder calls

Change builder syntax if needed (mainly time units).
// Before
.expireAfterWrite(10, TimeUnit.MINUTES)

// After
.expireAfterWrite(Duration.ofMinutes(10))
Step 4: Update size() calls

Caffeine uses estimatedSize() for better performance.
// Before
long size = cache.size();

// After
long size = cache.estimatedSize();
Step 5: Test thoroughly

Run your test suite to ensure correct behavior. Pay special attention to:
  • Concurrency tests
  • Eviction behavior
  • Statistics accuracy
Step 6: Measure performance

Benchmark before and after to quantify improvements. You should see:
  • Higher throughput
  • Better hit rates
  • Lower latencies

Troubleshooting

Caffeine uses estimatedSize(), which may not reflect the exact count due to concurrent operations and pending cleanup. Call cleanUp() before checking the size if an exact count is needed:
cache.cleanUp();
long size = cache.estimatedSize();
Removal listeners are invoked asynchronously by default. For synchronous execution:
Cache<Key, Value> cache = Caffeine.newBuilder()
    .executor(Runnable::run)  // Synchronous execution
    .removalListener((k, v, cause) -> handleRemoval(k, v, cause))
    .build();
Statistics are eventually consistent. For accurate stats:
cache.cleanUp();  // Force pending operations
CacheStats stats = cache.stats();
Caffeine uses W-TinyLFU instead of LRU. This is intentional and provides better hit rates, but the specific entries evicted may differ. If you need exact LRU behavior (not recommended), consider using a different library or implementing a custom policy.

Getting Help

If you encounter issues during migration:

GitHub Issues

Search existing issues or create a new one

Stack Overflow

Ask questions with the caffeine tag

API Documentation

Browse the complete API reference

GitHub Discussions

Community discussions and help

Next Steps

Architecture

Understand Caffeine’s internal design

Efficiency

Learn about W-TinyLFU eviction policy

Benchmarks

See performance comparisons

Quickstart

Build your first Caffeine cache
