Overview
GweAI uses a sophisticated RPC provider system built with Viem that includes automatic fallback, intelligent caching, and rate limiting to ensure reliable blockchain connectivity.
Architecture
The RPC system provides:
- Automatic Fallback: Switches between 3 RPC endpoints on failure
- Smart Caching: 10-second cache for frequently accessed data
- Rate Limiting: 50 requests per 10 seconds with automatic throttling
- Retry Logic: Exponential backoff with up to 3 retries
- Block Number Caching: 2-second cache matching Base's block time
Public Client
getPublicClient()
Returns a singleton Viem public client with fallback transport:
```typescript
import { getPublicClient } from './utils/rpcProvider';

const client = getPublicClient();

// Read contract data
const balance = await client.readContract({
  address: '0x...',
  abi: erc20Abi,
  functionName: 'balanceOf',
  args: [userAddress],
});
```
Type Signature:

```typescript
function getPublicClient(): PublicClient
```
Features:
- Singleton pattern (created once, reused)
- Fallback across 3 RPC endpoints
- Batch multicall support enabled
- 4-second RPC response cache
Configuration
- `chain` (`Chain`, default: `baseSepolia`): Blockchain network (Base Sepolia testnet)
- `transport`: Fallback transport with multiple HTTP endpoints
- `batch.multicall`: Enables batching multiple contract calls into a single RPC request
- `cacheTime`: Cache duration for RPC responses in milliseconds
RPC Endpoints
Ordered by reliability:
Alchemy (Primary)
`https://base-sepolia.g.alchemy.com/v2/-mGklZw8tTiO9fg9sRGQP`
- Rate Limits: 3M compute units/month (~300k requests)
- Timeout: 5 seconds
- Retries: 2 attempts with 1-second delay

BlockPI (Secondary)
`https://base-sepolia.blockpi.network/v1/rpc/public`
- Rate Limits: 10M requests/day
- Timeout: 5 seconds
- Retries: 2 attempts with 1-second delay

PublicNode (Tertiary)
`https://base-sepolia-rpc.publicnode.com`
- Rate Limits: Best effort, no guarantees
- Timeout: 5 seconds
- Retries: 2 attempts with 1-second delay
The transport uses `{ rank: false }` to preserve priority order, always trying the primary Alchemy endpoint first.
Caching System
RPCCache Class
In-memory cache for RPC responses with automatic expiration:
```typescript
class RPCCache {
  private cache = new Map<string, CacheEntry<any>>();
  private readonly TTL = 10000; // 10 seconds

  set<T>(key: string, data: T): void
  get<T>(key: string): T | null
  clear(): void
}
```

- `TTL`: Time-to-live for cached entries in milliseconds (10 seconds)
Usage Example (Automatic Caching):
```typescript
import { safeRPCCall, rpcCache } from './utils/rpcProvider';

// Cached for 10 seconds
const balance = await safeRPCCall(
  () => client.getBalance({ address: userAddress }),
  3,
  `balance-${userAddress}`
);
```
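The `RPCCache` method bodies aren't shown above; here is a minimal sketch of how `set`/`get`/`clear` could work, including manual cache control. The `CacheEntry` shape (data plus timestamp) is an assumption, not the confirmed implementation:

```typescript
// Sketch of the RPCCache described above (CacheEntry shape is an assumption).
interface CacheEntry<T> {
  data: T;
  timestamp: number;
}

class RPCCache {
  private cache = new Map<string, CacheEntry<any>>();
  private readonly TTL = 10000; // 10 seconds

  set<T>(key: string, data: T): void {
    this.cache.set(key, { data, timestamp: Date.now() });
  }

  get<T>(key: string): T | null {
    const entry = this.cache.get(key);
    if (!entry) return null;
    // Expire entries older than TTL
    if (Date.now() - entry.timestamp > this.TTL) {
      this.cache.delete(key);
      return null;
    }
    return entry.data as T;
  }

  clear(): void {
    this.cache.clear();
  }
}

// Manual cache control
const cache = new RPCCache();
cache.set('price-ETH', 1234.56);
cache.get<number>('price-ETH'); // value while fresh, null after 10s
cache.clear();
```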
Block Number Cache
Optimized caching for current block number:
```typescript
export const getCurrentBlock = async (): Promise<bigint>
```

Uses a 2-second cache matching Base's block time.
Example:
```typescript
import { getCurrentBlock } from './utils/rpcProvider';

const blockNumber = await getCurrentBlock();
console.log(`Current block: ${blockNumber}`);
```
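A 2-second block cache can be sketched as follows. The fetcher is injected here purely for illustration; the real implementation presumably calls the public client's `getBlockNumber()` and its cache shape is an assumption:

```typescript
// Sketch of a 2-second block-number cache (cache shape is an assumption).
let cachedBlock: { value: bigint; fetchedAt: number } | null = null;
const BLOCK_TTL = 2000; // match Base's ~2-second block time

async function getCurrentBlock(
  // Injected for illustration; the real code would call client.getBlockNumber()
  fetchBlock: () => Promise<bigint>
): Promise<bigint> {
  const now = Date.now();
  if (cachedBlock && now - cachedBlock.fetchedAt < BLOCK_TTL) {
    return cachedBlock.value; // fresh: no RPC round-trip
  }
  const value = await fetchBlock();
  cachedBlock = { value, fetchedAt: now };
  return value;
}
```

Within a 2-second window, repeated callers share one RPC request instead of each hitting the provider.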
Rate Limiting
RateLimiter Class
Prevents exceeding RPC provider limits:
```typescript
class RateLimiter {
  private readonly maxRequests = 50;
  private readonly timeWindow = 10000; // 10 seconds

  canMakeRequest(): boolean
  getWaitTime(): number
}
```

- `maxRequests`: Maximum requests allowed within the time window
- `timeWindow`: Time window for the rate limit in milliseconds (10 seconds)
Behavior:
- Tracks timestamps of recent requests
- Automatically removes timestamps that fall outside the window
- Returns `false` when the limit is exceeded
- Calculates the wait time until the next available slot
When rate limited, `safeRPCCall` automatically waits until the next available slot before making the request.
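The behavior above amounts to a sliding-window limiter. A minimal sketch (the internal timestamp array is an assumption; recording a request inside `canMakeRequest` is one possible design):

```typescript
// Sliding-window rate limiter sketch matching the class above.
class RateLimiter {
  private readonly maxRequests = 50;
  private readonly timeWindow = 10000; // 10 seconds
  private timestamps: number[] = [];

  canMakeRequest(): boolean {
    const now = Date.now();
    // Drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.timeWindow);
    if (this.timestamps.length < this.maxRequests) {
      this.timestamps.push(now); // record this request
      return true;
    }
    return false;
  }

  getWaitTime(): number {
    if (this.timestamps.length < this.maxRequests) return 0;
    // Time until the oldest recorded request leaves the window
    return this.timeWindow - (Date.now() - this.timestamps[0]);
  }
}
```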
Safe RPC Call Wrapper
safeRPCCall()
Wrapper function with retry logic, caching, and rate limiting:
```typescript
export async function safeRPCCall<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  cacheKey?: string
): Promise<T>
```

- `fn`: Async function that makes the RPC call
- `maxRetries`: Maximum retry attempts on failure (default: 3)
- `cacheKey`: Optional cache key for storing/retrieving cached results
Features:
- Checks the cache first (if `cacheKey` is provided)
- Applies rate limiting
- Retries with exponential backoff on failure
- Caches successful results
- Returns the result, or throws after all retries are exhausted
Retry Strategy:

Delay = min(1000 × 2^(attempt − 1), 5000) ms

- Attempt 1: 1000ms delay
- Attempt 2: 2000ms delay
- Attempt 3: 4000ms delay
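The retry loop can be sketched like this. Cache and rate-limiter integration are omitted to keep the example self-contained, and the exact error handling is an assumption:

```typescript
// Sketch of safeRPCCall's retry loop with capped exponential backoff.
const backoffDelay = (attempt: number) =>
  Math.min(1000 * 2 ** (attempt - 1), 5000);

async function safeRPCCall<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Wait 1000ms, 2000ms, 4000ms... capped at 5000ms
        await new Promise(r => setTimeout(r, backoffDelay(attempt)));
      }
    }
  }
  throw lastError; // all retries exhausted
}
```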
Basic Usage:
```typescript
import { safeRPCCall, getPublicClient } from './utils/rpcProvider';

const client = getPublicClient();

const data = await safeRPCCall(
  () => client.readContract({
    address: contractAddress,
    abi: contractAbi,
    functionName: 'getData',
  })
);
```
Preloading Critical Data
preloadCriticalData()
Initializes RPC client and preloads essential blockchain data:
```typescript
export const preloadCriticalData = async (): Promise<void>
```
What it preloads:
Initializes singleton public client
Fetches current block number
Warms up RPC connection
Usage:
```typescript
import { preloadCriticalData } from './utils/rpcProvider';

// Call on app startup
await preloadCriticalData();
console.log('✅ RPC client ready');
```
Call `preloadCriticalData()` during app initialization to reduce latency on the first user interaction.
Implementation Example
Complete RPC provider setup:
```typescript
import { createPublicClient, http, fallback, type PublicClient } from 'viem';
import { baseSepolia } from 'viem/chains';

const RPC_ENDPOINTS = [
  'https://base-sepolia.g.alchemy.com/v2/-mGklZw8tTiO9fg9sRGQP',
  'https://base-sepolia.blockpi.network/v1/rpc/public',
  'https://base-sepolia-rpc.publicnode.com',
];

// Singleton: create the client once and reuse it on subsequent calls
let publicClient: PublicClient | null = null;

export const getPublicClient = () => {
  if (!publicClient) {
    publicClient = createPublicClient({
      chain: baseSepolia,
      transport: fallback(
        RPC_ENDPOINTS.map(url => http(url, {
          timeout: 5000,
          retryCount: 2,
          retryDelay: 1000,
        })),
        { rank: false }
      ),
      batch: {
        multicall: true,
      },
      cacheTime: 4000,
    });
  }
  return publicClient;
};
```
Fallback Strategy
The RPC fallback strategy (documented in RPC_FIX_GUIDE.md) is designed to keep the app connected even when individual providers fail:
1. Primary RPC Attempt: Request is sent to Alchemy (best rate limits)
2. Automatic Failover: If the primary fails (403, timeout, etc.), the request immediately moves to BlockPI
3. Final Fallback: If the secondary fails, falls back to PublicNode
4. Retry Logic: Each endpoint retries 2 times with a 1-second delay before moving to the next
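The failover steps above can be modeled with a simple ordered loop. This is a simplified illustration, not viem's actual internals, and the `call` parameter is a hypothetical stand-in for sending the JSON-RPC request:

```typescript
// Simplified model of ordered failover with rank: false semantics.
// 'call' is a hypothetical stand-in for sending the JSON-RPC request.
async function tryEndpoints<T>(
  endpoints: string[],
  call: (url: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const url of endpoints) {
    try {
      return await call(url); // success: stop here, priority order preserved
    } catch (err) {
      lastError = err; // failure: move to the next endpoint
    }
  }
  throw lastError; // every endpoint failed
}
```

Because the loop always starts from the first entry, Alchemy is attempted on every request, not just until the first failure.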
Performance improvements from the fallback strategy:
- 40% reduction in RPC calls (smart caching)
- 99.9% uptime (3 fallback providers)
- Lower bandwidth usage (cached blocks & balances)
Type Definitions
```typescript
import type { PublicClient } from 'viem';

export function getPublicClient(): PublicClient;
export function getCurrentBlock(): Promise<bigint>;
export function safeRPCCall<T>(
  fn: () => Promise<T>,
  maxRetries?: number,
  cacheKey?: string
): Promise<T>;
export function preloadCriticalData(): Promise<void>;
export const rpcCache: RPCCache;
export const rateLimiter: RateLimiter;
```
Monitoring & Debugging
Console Logs
The RPC provider outputs helpful logs:
```
✅ Public client initialized with 3 fallback providers
⏱️ Rate limited, waiting 2000ms
⚠️ RPC call attempt 1/3 failed: [error details]
```
Testing RPC Health
Check RPC Availability:
```typescript
const testRPC = async (url: string) => {
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      method: 'eth_blockNumber',
      params: [],
      id: 1,
    }),
  });
  return response.ok;
};
```
- Wagmi Configuration: Wallet connectors and Wagmi setup
- Contract Addresses: Verified smart contract addresses
Best Practices
- Production: Replace the demo API key with a production key from Alchemy for higher rate limits (3M → 300M compute units/month).
- Always use the `safeRPCCall` wrapper in production code to get automatic retry logic, caching, and rate limiting.
- The singleton pattern ensures only one public client instance exists, reducing memory usage and connection overhead.