Fluxer implements rate limiting to prevent abuse and ensure fair resource allocation. The system uses the GCRA (Generic Cell Rate Algorithm) for smooth, accurate rate limiting with minimal memory overhead.
Rate Limit Algorithm
Fluxer uses GCRA, which provides:
Smooth rate limiting - No burst allowances that can be abused
Memory efficient - Requires only 2 values per bucket
Accurate - Precise to the millisecond
Distributed - Works across multiple instances via shared cache
How GCRA Works
GCRA tracks:
Theoretical Arrival Time (TAT) - When the next request should be allowed
Current Time - When the request was made
A request is allowed if: current_time >= TAT - window_ms
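As a concrete illustration, the check above can be sketched in a few lines. This is a simplified model, not Fluxer's actual implementation: the bucket shape, function name, and parameters are assumptions for the example.

```typescript
// Simplified GCRA sketch (illustrative, not Fluxer's actual code).
// One value tracked per bucket: the Theoretical Arrival Time (TAT).
type Bucket = { tat: number };

function gcraCheck(
  bucket: Bucket,
  nowMs: number,
  windowMs: number,   // full window duration
  emissionMs: number  // windowMs / maxAttempts: the "cost" of one request
): { allowed: boolean; retryAfterMs: number } {
  const tat = Math.max(bucket.tat, nowMs);
  // Allowed if: current_time >= TAT - window_ms
  if (nowMs >= tat - windowMs) {
    bucket.tat = tat + emissionMs; // advance TAT by one emission interval
    return { allowed: true, retryAfterMs: 0 };
  }
  return { allowed: false, retryAfterMs: tat - windowMs - nowMs };
}
```

Because each allowed request only advances a single timestamp, the state per bucket stays tiny and requests are spaced smoothly rather than in bursts.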
Rate Limit Buckets
Rate limits are organized into buckets based on the resource being accessed:
Bucket Types
Global Buckets

Applied across all endpoints for an IP or user:

```typescript
'auth:register'  // Registration attempts
'auth:login'     // Login attempts
'auth:forgot'    // Password reset requests
```

Resource Buckets

Scoped to specific resources (channels, guilds):

```typescript
'channel:message:create::channel_id'  // Per-channel message sending
'guild:update::guild_id'              // Per-guild updates
'channel:typing::channel_id'          // Per-channel typing indicators
```

User Buckets

Per-user limits across resources:

```typescript
'user:update'       // User settings updates
'user:connections'  // Connection management
```
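The naming convention above (a static bucket name, optionally suffixed with `::` and a resource ID) can be captured by a small helper. The helper name is hypothetical, shown only to make the convention explicit:

```typescript
// Hypothetical helper illustrating the bucket naming convention:
// '<name>' for global/user buckets, '<name>::<resourceId>' for resource buckets.
function bucketKey(name: string, resourceId?: string): string {
  return resourceId ? `${name}::${resourceId}` : name;
}

bucketKey('auth:login');                          // 'auth:login'
bucketKey('channel:message:create', '123456789'); // 'channel:message:create::123456789'
```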
Common Rate Limits
Authentication
| Endpoint | Limit | Window | Bucket |
| --- | --- | --- | --- |
| Register | 10 | 10 seconds | auth:register |
| Login | 10 | 10 seconds | auth:login |
| Login MFA | 5 | 1 minute | auth:login:mfa |
| Forgot Password | 5 | 1 minute | auth:forgot |
| Reset Password | 10 | 1 minute | auth:reset |
| Verify Email | 10 | 1 minute | auth:verify |
| Logout | 20 | 10 seconds | auth:logout |
Channels
| Operation | Limit | Window | Bucket |
| --- | --- | --- | --- |
| Get Channel | 100 | 10 seconds | channel:read::channel_id |
| Send Message | 20 | 10 seconds | channel:message:create::channel_id |
| Edit Message | 20 | 10 seconds | channel:message:update::channel_id |
| Delete Message | 20 | 10 seconds | channel:message:delete::channel_id |
| Bulk Delete | 10 | 10 seconds | channel:message:bulk_delete::channel_id |
| Add Reaction | 30 | 10 seconds | channel:reactions::channel_id |
| Typing Indicator | 20 | 10 seconds | channel:typing::channel_id |
| Get Messages | 100 | 10 seconds | channel:messages:read::channel_id |
| Pin Message | 20 | 10 seconds | channel:pins::channel_id |
Voice
| Operation | Limit | Window | Bucket |
| --- | --- | --- | --- |
| Get Call | 60 | 10 seconds | channel:call:get::channel_id |
| Update Call | 10 | 10 seconds | channel:call:update::channel_id |
| Ring | 5 | 10 seconds | channel:call:ring::channel_id |
| Stop Ringing | 20 | 10 seconds | channel:call:stop_ringing::channel_id |
Multi-Factor Authentication
| Operation | Limit | Window | Bucket |
| --- | --- | --- | --- |
| Enable SMS MFA | 10 | 1 minute | mfa:sms:enable |
| Disable SMS MFA | 10 | 1 minute | mfa:sms:disable |
| WebAuthn Registration | 10 | 1 minute | mfa:webauthn:register |
| WebAuthn List | 40 | 10 seconds | mfa:webauthn:list |
| WebAuthn Delete | 10 | 1 minute | mfa:webauthn:delete |
Phone Verification
| Operation | Limit | Window | Bucket |
| --- | --- | --- | --- |
| Send Verification | 5 | 1 minute | phone:send_verification |
| Verify Code | 10 | 1 minute | phone:verify_code |
| Add Phone | 10 | 1 minute | phone:add |
| Remove Phone | 10 | 1 minute | phone:remove |
Rate Limit Headers

Fluxer includes rate limit information in HTTP response headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 15
X-RateLimit-Reset: 1640995200000
X-RateLimit-Bucket: channel:message:create::123456789
```
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum number of requests in the window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Timestamp (ms) when the limit resets |
| X-RateLimit-Bucket | Bucket identifier for this rate limit |
| Retry-After | Seconds until the rate limit resets (on 429) |
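Clients can read these headers to throttle themselves before ever hitting a 429. The parser below is an illustrative sketch: the header names come from Fluxer's responses, but the function and its return shape are assumptions for the example.

```typescript
// Illustrative parser for Fluxer's rate limit response headers.
interface RateLimitInfo {
  limit: number;      // X-RateLimit-Limit
  remaining: number;  // X-RateLimit-Remaining
  resetAtMs: number;  // X-RateLimit-Reset (ms timestamp)
  bucket: string | null;
}

function parseRateLimitHeaders(headers: Headers): RateLimitInfo {
  // Headers.get() is case-insensitive per the Fetch spec
  const num = (name: string) => Number(headers.get(name) ?? 0);
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAtMs: num('X-RateLimit-Reset'),
    bucket: headers.get('X-RateLimit-Bucket'),
  };
}
```

A client could pause when `remaining` reaches 0 and resume at `resetAtMs`, avoiding 429 responses entirely.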
Rate Limit Errors
When a rate limit is exceeded, the API returns:
```json
{
  "code": "RATE_LIMITED",
  "message": "You are being rate limited",
  "retry_after": 3.5,
  "global": false
}
```
HTTP Status: 429 Too Many Requests
Error Fields

code - Machine-readable error code (RATE_LIMITED)
message - Human-readable error message
retry_after - Seconds to wait before retrying
global - Whether this is a global rate limit (affects all endpoints)
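For typed clients, the error body can be modeled and narrowed with a small guard. The interface and guard names below are illustrative, not part of Fluxer's SDK:

```typescript
// Illustrative type for the 429 body shown above, with a narrowing guard.
interface RateLimitError {
  code: 'RATE_LIMITED';
  message: string;
  retry_after: number; // seconds
  global: boolean;
}

function isRateLimitError(body: unknown): body is RateLimitError {
  return (
    typeof body === 'object' &&
    body !== null &&
    (body as { code?: unknown }).code === 'RATE_LIMITED'
  );
}
```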
Implementation
Service Architecture
```typescript
import { RateLimitService, InMemoryCacheService } from '@fluxer/rate_limit';

// Create cache service
const cacheService = new InMemoryCacheService();

// Create rate limit service
const rateLimitService = new RateLimitService(cacheService, {
  globalWindowMs: 1000 // Global window: 1 second
});
```
Checking Rate Limits
```typescript
// Check a rate limit
const result = await rateLimitService.checkLimit({
  identifier: 'user:123456',
  maxAttempts: 20,
  windowMs: 10000 // 10 seconds
});

if (!result.allowed) {
  console.log(`Rate limited. Retry after ${result.retryAfterMs}ms`);
}
```
Bucket-Based Limits
```typescript
// Check bucket-specific limit
const result = await rateLimitService.checkBucketLimit(
  'channel:message:create::123456',
  {
    limit: 20,
    windowMs: 10000
  }
);
```
Global Rate Limits
```typescript
// Check global limit (applies across all endpoints)
const result = await rateLimitService.checkGlobalLimit(
  'user:123456',
  50 // 50 requests per second
);
```
Resetting Limits
```typescript
// Reset rate limit for an identifier (admin only)
await rateLimitService.resetLimit('user:123456');
```
Rate Limit Configuration
Route Configuration
Define rate limits for API routes:
```typescript
import { ms } from 'itty-time';

export const ChannelRateLimitConfigs = {
  CHANNEL_MESSAGE_CREATE: {
    bucket: 'channel:message:create::channel_id',
    config: {
      limit: 20,
      windowMs: ms('10 seconds')
    }
  },
  CHANNEL_MESSAGE_DELETE: {
    bucket: 'channel:message:delete::channel_id',
    config: {
      limit: 20,
      windowMs: ms('10 seconds')
    }
  }
} as const;
```
Middleware
Apply rate limiting middleware:
```typescript
import { rateLimitMiddleware } from '@fluxer/api';
import { ChannelRateLimitConfigs } from '@fluxer/api/rate_limit_configs';

app.post(
  '/channels/:channelId/messages',
  rateLimitMiddleware(ChannelRateLimitConfigs.CHANNEL_MESSAGE_CREATE),
  async (c) => {
    // Handle message creation
  }
);
```
Best Practices
Respect Rate Limits - Always check rate limit headers and implement exponential backoff when hitting limits.

Use Bucket Identifiers - Include resource IDs in bucket names to isolate rate limits per channel/guild/user.

Cache Service - Use a distributed cache (Redis) in production for rate limiting across multiple API instances.

Monitor Usage - Track rate limit hits to identify potential abuse or legitimate high-traffic patterns.
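The exponential backoff advice above works best with jitter, so that many clients rate limited at the same moment do not all retry in lockstep. The helper below is a sketch; the base delay and cap are illustrative values, not Fluxer defaults:

```typescript
// Exponential backoff with "full jitter": pick a uniform random delay
// in [0, min(cap, base * 2^attempt)). Values here are illustrative.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp;
}
```

When the server supplies `retry_after`, use the larger of that value and the computed backoff, since `retry_after` is a hard lower bound.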
Handling Rate Limits
Exponential Backoff
```typescript
async function sendMessageWithRetry(
  channelId: string,
  content: string,
  maxRetries = 3
) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      return await sendMessage(channelId, content);
    } catch (error) {
      if (error.code === 'RATE_LIMITED') {
        // Wait at least the server-provided retry_after (seconds)
        const delay = error.retry_after * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
        retries++;
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
```
Queue-Based Approach
```typescript
class RateLimitedQueue {
  private queue: Array<() => Promise<void>> = [];
  private processing = false;

  async add(fn: () => Promise<void>) {
    this.queue.push(fn);
    if (!this.processing) {
      await this.process();
    }
  }

  private async process() {
    this.processing = true;
    try {
      while (this.queue.length > 0) {
        const fn = this.queue.shift()!;
        try {
          await fn();
        } catch (error) {
          if (error.code === 'RATE_LIMITED') {
            // Re-queue the task and wait out the rate limit
            this.queue.unshift(fn);
            await new Promise(resolve =>
              setTimeout(resolve, error.retry_after * 1000)
            );
          } else {
            // Surface non-rate-limit errors instead of swallowing them
            throw error;
          }
        }
      }
    } finally {
      this.processing = false;
    }
  }
}
```
Special Cases
Global Rate Limits
Global rate limits apply across all API endpoints for a user/IP:
```typescript
// Global limit: 50 requests per second across all endpoints
const result = await rateLimitService.checkGlobalLimit(
  'user:123456',
  50
);
```
Slowmode
Channel-specific slowmode is a special rate limit:
```json
{
  "code": "SLOWMODE_RATE_LIMITED",
  "message": "You are sending messages too quickly",
  "retry_after": 5.0
}
```
Users with BYPASS_SLOWMODE permission are exempt.
Configuration Options
```typescript
interface RateLimitServiceOptions {
  globalWindowMs?: number;         // Global window duration (default: 1000ms)
  getCurrentTimeMs?: () => number; // Custom time source
}
```
Cache Implementation
In-Memory Cache
```typescript
import { InMemoryCacheService } from '@fluxer/rate_limit';

const cache = new InMemoryCacheService();
```
In-memory cache is not recommended for production multi-instance deployments. Use Redis or another distributed cache.
Redis Cache (Production)
```typescript
import { RedisCacheService } from '@fluxer/cache';

const cache = new RedisCacheService({
  host: 'localhost',
  port: 6379
});
```