Overview
The Pump.fun API supports ETag-based HTTP caching to reduce bandwidth usage and improve application performance. When content hasn’t changed, the API returns a 304 Not Modified response instead of the full data, saving both bandwidth and API rate limits.
Implementing proper caching can significantly reduce your API usage and help you stay within rate limits while providing faster responses to your users.
How ETag Caching Works
ETags (Entity Tags) are unique identifiers assigned to specific versions of resources. The caching workflow follows these steps:
Initial Request: You make a request to an endpoint
Server Response: The API returns data with an ETag header
Store ETag: Your application stores the ETag value
Subsequent Request: Include the ETag in an If-None-Match header
Server Check: The API compares the ETag to the current resource version
Response:
If unchanged: 304 Not Modified (no body, use cached data)
If changed: 200 OK with new data and updated ETag
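The steps above can be sketched as two small helpers (the function names and the cached-entry shape are illustrative, not part of the API):

```python
def build_conditional_headers(token, etag=None):
    """Build request headers, adding If-None-Match when an ETag is cached."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    if etag is not None:
        headers["If-None-Match"] = etag
    return headers

def apply_response(status_code, body, new_etag, cached):
    """Return (data, etag) after applying the 304/200 rules."""
    if status_code == 304:
        # Unchanged: keep using the cached copy
        return cached["data"], cached["etag"]
    # Changed: adopt the new body and the updated ETag
    return body, new_etag
```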
When you request a cacheable resource, the API includes an ETag header:
HTTP/1.1 200 OK
Content-Type: application/json
ETag: W/"abc123def456"

{
  "data": "..."
}
The ETag header is a unique identifier for the current version of the resource. The W/ prefix marks it as a "weak" ETag, meaning versions are semantically equivalent rather than byte-for-byte identical.
Using If-None-Match
To check if content has changed, include the stored ETag in the If-None-Match header:
GET /coins/{mint} HTTP/1.1
Host: frontend-api-v3.pump.fun
Authorization: Bearer <your_jwt_token>
Accept: application/json
If-None-Match: W/"abc123def456"
Content Unchanged (304)
If the resource hasn’t changed, the API returns:
HTTP/1.1 304 Not Modified
ETag: W/"abc123def456"
No response body is included. Use your cached data.
Content Changed (200)
If the resource has changed, the API returns:
HTTP/1.1 200 OK
Content-Type: application/json
ETag: W/"xyz789ghi012"

{
  "data": "... updated content ..."
}
Store the new ETag and update your cache.
Implementation Examples
Basic Caching
import requests

class CachedAPIClient:
    def __init__(self, token):
        self.base_url = "https://frontend-api-v3.pump.fun"
        self.token = token
        self.cache = {}  # {url: {"etag": "...", "data": {...}}}

    def get(self, endpoint):
        url = f"{self.base_url}{endpoint}"
        headers = {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json"
        }

        # Add If-None-Match header if we have a cached ETag
        cached = self.cache.get(url)
        if cached and "etag" in cached:
            headers["If-None-Match"] = cached["etag"]

        response = requests.get(url, headers=headers)

        # Handle 304 Not Modified
        if response.status_code == 304:
            print(f"Cache hit for {endpoint}")
            return cached["data"]

        # Handle 200 OK - update cache
        if response.status_code == 200:
            data = response.json()
            etag = response.headers.get("ETag")
            if etag:
                self.cache[url] = {"etag": etag, "data": data}
                print(f"Cached {endpoint} with ETag {etag}")
            return data

        response.raise_for_status()

# Usage
client = CachedAPIClient('<your_jwt_token>')

# First request - fetches from API
data1 = client.get('/coins/{mint}')

# Second request - returns cached data if unchanged
data2 = client.get('/coins/{mint}')
Advanced Caching with Expiration
import requests
import time

class AdvancedCache:
    def __init__(self, token, ttl_seconds=300):
        self.base_url = "https://frontend-api-v3.pump.fun"
        self.token = token
        self.ttl = ttl_seconds
        self.cache = {}  # {url: {"etag": ..., "data": ..., "timestamp": ...}}

    def get(self, endpoint, force_refresh=False):
        url = f"{self.base_url}{endpoint}"
        headers = {
            "Authorization": f"Bearer {self.token}",
            "Accept": "application/json"
        }

        # Check if cache is valid
        cached = self.cache.get(url)
        if cached and not force_refresh:
            age = time.time() - cached["timestamp"]
            # If cache is too old, don't use If-None-Match
            if age > self.ttl:
                print(f"Cache expired for {endpoint} (age: {age:.1f}s)")
            else:
                headers["If-None-Match"] = cached["etag"]

        response = requests.get(url, headers=headers)

        # Handle 304 Not Modified
        if response.status_code == 304:
            # Update timestamp but keep existing data
            self.cache[url]["timestamp"] = time.time()
            print(f"Cache hit for {endpoint}")
            return cached["data"]

        # Handle 200 OK
        if response.status_code == 200:
            data = response.json()
            etag = response.headers.get("ETag")
            if etag:
                self.cache[url] = {
                    "etag": etag,
                    "data": data,
                    "timestamp": time.time()
                }
                print(f"Updated cache for {endpoint}")
            return data

        response.raise_for_status()

    def invalidate(self, endpoint=None):
        """Invalidate cache for a specific endpoint or all endpoints"""
        if endpoint:
            url = f"{self.base_url}{endpoint}"
            if url in self.cache:
                del self.cache[url]
                print(f"Invalidated cache for {endpoint}")
        else:
            self.cache.clear()
            print("Invalidated all cache entries")

    def cache_stats(self):
        """Get cache statistics"""
        total = len(self.cache)
        now = time.time()
        valid = sum(1 for c in self.cache.values() if now - c["timestamp"] < self.ttl)
        return {"total_entries": total, "valid_entries": valid}

# Usage
cache = AdvancedCache('<your_jwt_token>', ttl_seconds=300)

# Fetch data (will be cached)
data = cache.get('/coins/{mint}')

# Subsequent requests use cache
data = cache.get('/coins/{mint}')

# Force refresh
data = cache.get('/coins/{mint}', force_refresh=True)

# Check cache stats
stats = cache.cache_stats()
print(f"Cache stats: {stats}")

# Invalidate specific endpoint
cache.invalidate('/coins/{mint}')
Cache Best Practices
Always check for 304 responses
When you receive a 304 status code, use your cached data. Never treat 304 as an error.
Store ETags per URL
Different endpoints and parameters have different ETags. Store them separately for each unique URL.
Set a reasonable TTL
Even with ETags, implement a time-to-live (TTL) for cache entries. Stale data older than your TTL should trigger a full refresh.
Handle missing ETags
Not all endpoints support ETags. Design your cache to gracefully handle responses without ETag headers.
Invalidate cache on mutations
When you POST, PUT, or DELETE resources, invalidate related cache entries to ensure consistency.
Implement cache size limits
Prevent unbounded cache growth by implementing an LRU (Least Recently Used) eviction policy.
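A minimal sketch of such a bound, using collections.OrderedDict for LRU ordering (the class name and entry shape are illustrative, not part of the API):

```python
from collections import OrderedDict

class LRUEtagCache:
    """Bounded ETag cache: evicts the least recently used entry when full."""

    def __init__(self, max_entries=1000):
        self.max_entries = max_entries
        self._entries = OrderedDict()  # url -> {"etag": ..., "data": ...}

    def get(self, url):
        entry = self._entries.get(url)
        if entry is not None:
            self._entries.move_to_end(url)  # mark as most recently used
        return entry

    def put(self, url, etag, data):
        self._entries[url] = {"etag": etag, "data": data}
        self._entries.move_to_end(url)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # evict least recently used
```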
Cache Invalidation
Invalidate cache entries when:
After write operations: Clear cache after creating, updating, or deleting resources
On authentication changes: Clear user-specific cache when logging in/out
On explicit refresh: Provide users with a manual refresh option
After TTL expiration: Automatically remove stale cache entries
# Invalidate cache after a mutation
def create_coin(cache, data):
    response = requests.post(
        "https://frontend-api-v3.pump.fun/coins",
        headers={
            "Authorization": f"Bearer {cache.token}",
            "Accept": "application/json"
        },
        json=data
    )
    if response.status_code == 201:
        # Invalidate related cache entries
        cache.invalidate('/coins')
        cache.invalidate('/coins/latest')
    return response.json()
Benefits of Caching
Proper caching implementation provides multiple benefits for your application and API usage.
Reduced Bandwidth
304 responses contain no body data, significantly reducing bandwidth usage especially for large responses.
Lower Rate Limit Usage
Some API implementations don't count 304 responses against rate limits, leaving more of your quota for requests that actually fetch new data.
Faster Response Times
Cached data can be returned immediately without waiting for network requests, improving user experience.
Better Reliability
Cached data can serve as fallback when the API is temporarily unavailable.
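One way to sketch this fallback (the fetch callable and cache shape are illustrative; in practice fetch would wrap requests.get and raise on network failure):

```python
def get_with_fallback(fetch, url, cache):
    """Fetch fresh data, falling back to a cached copy if the API is down."""
    try:
        return fetch(url)  # e.g. a wrapper that returns requests.get(...).json()
    except Exception:
        cached = cache.get(url)
        if cached is not None:
            # Serve stale data rather than failing outright
            return cached["data"]
        raise  # nothing cached: propagate the original error
```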
Track cache effectiveness with metrics:
class CacheMetrics:
    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.total_requests = 0

    def record_hit(self):
        self.hits += 1
        self.total_requests += 1

    def record_miss(self):
        self.misses += 1
        self.total_requests += 1

    def hit_rate(self):
        if self.total_requests == 0:
            return 0
        return self.hits / self.total_requests

    def stats(self):
        return {
            "hits": self.hits,
            "misses": self.misses,
            "total": self.total_requests,
            "hit_rate": f"{self.hit_rate():.2%}"
        }

# Usage
metrics = CacheMetrics()

# In your cache client
if response.status_code == 304:
    metrics.record_hit()
else:
    metrics.record_miss()

print(metrics.stats())
# Example output: {'hits': 75, 'misses': 25, 'total': 100, 'hit_rate': '75.00%'}
Common Pitfalls
Avoid these common caching mistakes that can lead to stale data or poor performance.
Don’t Cache User-Specific Data Globally
Always scope cache entries to the authenticated user:
# Bad: shares cache across users
cache_key = "/users/profile"

# Good: user-specific cache keys
cache_key = f"/users/profile?user={user_id}"
Don’t Ignore Query Parameters
Different query parameters should have separate cache entries:
# Bad: same cache key for different queries
cache_key = "/coins"

# Good: include query parameters in the cache key
cache_key = f"/coins?limit={limit}&offset={offset}"
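To make such keys robust against parameter ordering, one option (an illustrative helper, not part of the API) is to sort the parameters before encoding them:

```python
from urllib.parse import urlencode

def make_cache_key(path, params=None):
    """Build a stable cache key; sorting params keeps ?a=1&b=2 and ?b=2&a=1 identical."""
    if not params:
        return path
    return f"{path}?{urlencode(sorted(params.items()))}"
```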
Don’t Cache Error Responses
Only cache successful responses:
if response.status_code == 200:
    # Cache the successful response
    cache[url] = {"etag": etag, "data": data}
else:
    # Don't cache errors
    pass