The Avala API enforces rate limits to ensure fair usage and system stability. The SDK provides tools to monitor rate limit status and handle rate limit errors.
Every API response includes rate limit information in the headers:
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum number of requests allowed in the time window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Timestamp when the rate limit window resets |
Checking rate limit status
Access current rate limit information after any request:
```typescript
const datasets = await avala.datasets.list();

// Get rate limit info from the last request
const rateLimit = avala.rateLimitInfo;

console.log('Limit:', rateLimit.limit);
console.log('Remaining:', rateLimit.remaining);
console.log('Reset:', rateLimit.reset);
```
RateLimitInfo interface
The rate limit information is typed:
```typescript
export interface RateLimitInfo {
  limit: string | null;
  remaining: string | null;
  reset: string | null;
}
```
Rate limit values are returned as strings from the API headers. Convert them to numbers for calculations.
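A small helper can centralize that conversion. This is an illustrative sketch, not part of the SDK: `toNumeric` is a hypothetical name, and the `RateLimitInfo` interface is restated here only so the example is self-contained.

```typescript
// Restated here for a self-contained example; matches the SDK's shape above.
interface RateLimitInfo {
  limit: string | null;
  remaining: string | null;
  reset: string | null;
}

// Parse the string-valued headers into numbers, keeping null for
// missing headers so callers can distinguish "unknown" from zero.
function toNumeric(info: RateLimitInfo) {
  const parse = (v: string | null) => (v === null ? null : parseInt(v, 10));
  return {
    limit: parse(info.limit),
    remaining: parse(info.remaining),
    reset: parse(info.reset),
  };
}
```

Returning `null` rather than defaulting to `0` avoids treating a missing header as an exhausted quota.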
Monitoring rate limits
Implement proactive monitoring to avoid hitting limits:
Basic monitoring
```typescript
async function monitoredRequest() {
  const datasets = await avala.datasets.list();
  const rateLimit = avala.rateLimitInfo;

  const remaining = parseInt(rateLimit.remaining || '0');
  const limit = parseInt(rateLimit.limit || '0');

  if (remaining < limit * 0.1) {
    console.warn('Rate limit nearly exhausted!');
    console.warn(`${remaining}/${limit} requests remaining`);
  }

  return datasets;
}
```
Handling rate limit errors
When you exceed the rate limit, the API returns a 429 status code:
```typescript
import { RateLimitError } from '@avala-ai/sdk';

try {
  const datasets = await avala.datasets.list();
} catch (error) {
  if (error instanceof RateLimitError) {
    console.error('Rate limit exceeded');
    console.error('Retry after:', error.retryAfter, 'seconds');
  }
}
```
RateLimitError properties
The error includes retry timing information:
```typescript
export class RateLimitError extends AvalaError {
  public readonly retryAfter: number | null;

  constructor(message: string, body?: unknown, retryAfter?: number | null) {
    super(message, 429, body);
    this.name = "RateLimitError";
    this.retryAfter = retryAfter ?? null;
  }
}
```
The retryAfter property may be null if the server doesn’t provide a Retry-After header.
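When `retryAfter` is null, a reasonable fallback is the `X-RateLimit-Reset` timestamp (milliseconds since the Unix epoch in this SDK), with a fixed default as a last resort. This is a sketch; `waitTimeMs` and the `defaultMs` parameter are illustrative names, not SDK API.

```typescript
// Choose how long to wait before retrying after a 429.
// Priority: Retry-After header (seconds) > time until the
// rate limit window resets > a fixed default.
function waitTimeMs(
  retryAfter: number | null,
  resetHeader: string | null,
  defaultMs = 5000
): number {
  if (retryAfter !== null) return retryAfter * 1000; // header is in seconds
  if (resetHeader !== null) {
    const untilReset = parseInt(resetHeader, 10) - Date.now();
    if (untilReset > 0) return untilReset;
  }
  return defaultMs;
}
```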
Implementing retry logic
Detect rate limit errors
Check for RateLimitError when making requests:

```typescript
import { RateLimitError } from '@avala-ai/sdk';

try {
  return await avala.datasets.list();
} catch (error) {
  if (error instanceof RateLimitError) {
    // Handle rate limit
  }
  throw error;
}
```
Calculate wait time
Use retryAfter or calculate from the reset timestamp:

```typescript
if (error instanceof RateLimitError) {
  let waitTime: number;

  if (error.retryAfter) {
    // Use the Retry-After header (seconds)
    waitTime = error.retryAfter * 1000;
  } else {
    // Calculate from the rate limit reset timestamp
    const info = avala.rateLimitInfo;
    const reset = parseInt(info.reset || '0');
    waitTime = reset - Date.now();
  }

  console.log(`Waiting ${waitTime}ms before retry`);
}
```
Wait and retry
Implement exponential backoff for reliability:

```typescript
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof RateLimitError) {
        if (i === maxRetries - 1) throw error;

        const waitTime = error.retryAfter
          ? error.retryAfter * 1000
          : Math.pow(2, i) * 1000; // Exponential backoff

        console.log(`Retry ${i + 1}/${maxRetries} after ${waitTime}ms`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}

const datasets = await withRetry(() => avala.datasets.list());
```
The SDK automatically extracts rate limit headers from every response:
```typescript
private extractRateLimitHeaders(response: Response): void {
  if (!response.headers) return;

  this._lastRateLimit = {
    limit: response.headers.get("X-RateLimit-Limit"),
    remaining: response.headers.get("X-RateLimit-Remaining"),
    reset: response.headers.get("X-RateLimit-Reset"),
  };
}
```
This happens automatically during the request lifecycle:
```typescript
this.extractRateLimitHeaders(response);
```
Best practices
Monitor proactively
Check rate limit status before making requests:

```typescript
async function smartRequest() {
  const info = avala.rateLimitInfo;
  const remaining = parseInt(info.remaining || '100');

  if (remaining < 5) {
    console.warn('Rate limit running low, consider delaying');
  }

  return await avala.datasets.list();
}
```
Implement backoff strategies
Use exponential backoff for retries:

```typescript
async function exponentialBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof RateLimitError && attempt < maxRetries - 1) {
        const delay = Math.min(
          Math.pow(2, attempt) * 1000,
          60000 // Max 60 seconds
        );
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('All retries exhausted');
}
```
Batch requests efficiently
Group operations to reduce API calls:

```typescript
// Good: Batch process with a small delay between requests
async function processDatasets(uids: string[]) {
  const results = [];
  for (const uid of uids) {
    const dataset = await avala.datasets.get(uid);
    results.push(dataset);
    // Add a small delay between requests
    await new Promise(resolve => setTimeout(resolve, 100));
  }
  return results;
}

// Better: Use pagination when possible
async function getAllDatasets() {
  let cursor: string | null = null;
  const datasets = [];
  do {
    const page = await avala.datasets.list({ cursor });
    datasets.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return datasets;
}
```
Cache responses
Reduce API calls by caching data:

```typescript
class CachedClient {
  private cache = new Map<string, { data: any; expiry: number }>();
  private avala: Avala;

  constructor(apiKey: string) {
    this.avala = new Avala({ apiKey });
  }

  async getDataset(uid: string) {
    const cached = this.cache.get(uid);
    if (cached && cached.expiry > Date.now()) {
      return cached.data;
    }

    const dataset = await this.avala.datasets.get(uid);
    this.cache.set(uid, {
      data: dataset,
      expiry: Date.now() + 60000 // Cache for 1 minute
    });
    return dataset;
  }
}
```
Combine rate limit monitoring with caching and batching for the most efficient API usage.
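One way to combine these techniques is a single wrapper that serves reads from a TTL cache and retries cache misses with backoff on 429s. This is an illustrative sketch: `cachedWithRetry` is a hypothetical helper, and the local `RateLimitError` class stands in for the SDK's error so the example runs on its own.

```typescript
// Stand-in for the SDK's RateLimitError, so this sketch is self-contained.
class RateLimitError extends Error {
  constructor(public retryAfter: number | null = null) {
    super("rate limited");
  }
}

// Wrap a keyed async fetcher: repeated reads hit the cache (no API
// call), and misses are retried with backoff when rate limited.
function cachedWithRetry<T>(
  fn: (key: string) => Promise<T>,
  ttlMs = 60_000,
  maxRetries = 3
) {
  const cache = new Map<string, { data: T; expiry: number }>();
  return async (key: string): Promise<T> => {
    const hit = cache.get(key);
    if (hit && hit.expiry > Date.now()) return hit.data; // cache hit

    for (let i = 0; ; i++) {
      try {
        const data = await fn(key);
        cache.set(key, { data, expiry: Date.now() + ttlMs });
        return data;
      } catch (err) {
        if (!(err instanceof RateLimitError) || i >= maxRetries - 1) throw err;
        // Prefer the server's hint; otherwise back off exponentially.
        const wait = err.retryAfter ? err.retryAfter * 1000 : 2 ** i * 1000;
        await new Promise((resolve) => setTimeout(resolve, wait));
      }
    }
  };
}
```

Usage might look like `const getDataset = cachedWithRetry((uid) => avala.datasets.get(uid));`, after which repeated `getDataset(uid)` calls within the TTL cost no API requests.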
Understanding reset times
The reset timestamp indicates when the rate limit window resets:
```typescript
const info = avala.rateLimitInfo;
const resetTime = parseInt(info.reset || '0');
const now = Date.now();

if (resetTime > now) {
  const secondsUntilReset = Math.ceil((resetTime - now) / 1000);
  console.log(`Rate limit resets in ${secondsUntilReset} seconds`);
}
```
The reset timestamp is in milliseconds since the Unix epoch. Convert it to a Date object for display: `new Date(parseInt(info.reset || '0'))`.