Overview
HTTP Ledger is optimized for production use, but understanding its performance characteristics helps you configure it appropriately for your workload. This guide covers optimization strategies and performance considerations.
The middleware is designed with minimal overhead:
- **Memory Efficient**: Streams large responses without buffering
- **Non-blocking**: Async operations don't block request processing
- **Configurable Logging**: Disable expensive operations when not needed
- **Smart Sampling**: Reduce logging overhead in high-traffic environments
- **Selective Logging**: Skip logging for specific requests
Optimization Strategies
1. Log Sampling
Reduce logging volume in high-traffic environments:
Pick a sampling rate that matches your traffic tier:

```typescript
// Ultra high traffic: log 1% of requests
app.use(logger({ logSampling: 0.01 }));

// High traffic: log 5% of requests
app.use(logger({ logSampling: 0.05 }));

// Medium traffic: log 10% of requests
app.use(logger({ logSampling: 0.1 }));

// Full logging (the default): log every request
app.use(logger({ logSampling: 1 }));
```
Sampling is determined randomly for each request using `Math.random()`, from `src/utils/advancedFeatures.ts:36-46`:

```typescript
export const shouldLogBasedOnSampling = (logSampling?: number): boolean => {
  if (logSampling === undefined || logSampling === 1) {
    return true;
  }
  if (logSampling <= 0) {
    return false;
  }
  return Math.random() < logSampling;
};
```
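Reproduced in isolation, the function's edge cases can be sanity-checked: `undefined` and `1` always log, `0` and negative values never do, and a fractional rate logs roughly that share of requests:

```typescript
// Standalone copy of shouldLogBasedOnSampling for experimentation.
const shouldLogBasedOnSampling = (logSampling?: number): boolean => {
  if (logSampling === undefined || logSampling === 1) {
    return true;
  }
  if (logSampling <= 0) {
    return false;
  }
  return Math.random() < logSampling;
};

console.log(shouldLogBasedOnSampling(undefined)); // true  (log everything)
console.log(shouldLogBasedOnSampling(0));         // false (log nothing)

// Over many trials, a 1% rate logs roughly 1% of requests.
let logged = 0;
for (let i = 0; i < 100_000; i++) {
  if (shouldLogBasedOnSampling(0.01)) logged++;
}
console.log(logged / 100_000); // close to 0.01
```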
2. Disable Body Logging
Disable request/response body logging for large payloads:
```typescript
app.use(
  logger({
    logBody: false,
    logResponse: false,
    logQueryParams: true, // Still log query params
  }),
);
```
When to disable body logging:

- APIs handling file uploads
- Services with large JSON payloads
- High-throughput data processing endpoints
- WebSocket upgrade endpoints
3. Selective Logging
Use `shouldLog` to skip logging for specific requests:
```typescript
app.use(
  logger({
    shouldLog: (req, res) => {
      // Skip health checks
      if (req.path === '/health' || req.path === '/ping') return false;

      // Skip successful GET requests to static files
      if (
        req.method === 'GET' &&
        req.path.startsWith('/static/') &&
        res.statusCode === 200
      ) {
        return false;
      }

      // Skip OPTIONS requests
      if (req.method === 'OPTIONS') return false;

      // Only log errors for read-heavy endpoints
      if (req.path.startsWith('/api/search/') && res.statusCode < 400) {
        return false;
      }

      return true;
    },
  }),
);
```
4. Header Exclusion
Exclude large or unnecessary headers:
```typescript
app.use(
  logger({
    excludedHeaders: [
      'user-agent', // Often very long
      'accept',
      'accept-encoding',
      'accept-language',
      'cache-control',
      'connection',
      'cookie', // Can be very large
    ],
  }),
);
```
5. Conditional Body Logging
Log bodies only for errors or specific conditions:
```typescript
const isDevelopment = process.env.NODE_ENV === 'development';

app.use(
  logger({
    // Capture bodies so they are available to the formatter
    // (note: capture overhead still applies to every request)
    logBody: true,
    logResponse: true,
    customFormatter: (logData) => {
      // Keep bodies in development, and for errors in any environment
      if (isDevelopment || logData.statusCode >= 400) {
        return logData;
      }
      // Strip bodies from successful requests in production
      const { body, responseBody, ...rest } = logData;
      return rest;
    },
  }),
);
```
Overhead Measurements
Typical overhead per request:
| Configuration | Overhead | Use Case |
| --- | --- | --- |
| Minimal (no bodies) | ~0.5ms | High-traffic production |
| Standard (with bodies) | ~1-2ms | Medium-traffic production |
| Full (all features) | ~2-5ms | Development/debugging |
| With external logging | +variable | Depends on external service |
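To turn these per-request figures into a capacity estimate, multiply overhead by sampling rate and traffic (the numbers below are assumed, illustrative values from the table):

```typescript
// Back-of-the-envelope logging budget.
const perRequestOverheadMs = 2;  // "Standard (with bodies)" configuration
const logSampling = 0.1;         // log 10% of requests
const requestsPerSecond = 1000;

const loggingMsPerSecond = perRequestOverheadMs * logSampling * requestsPerSecond;
console.log(loggingMsPerSecond); // ~200 ms of logging work per wall-clock second (~20% of one core)
```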
Memory Usage
The middleware uses minimal memory:
- **Base overhead**: ~50KB per request
- **With body logging**: Depends on payload size
- **No response buffering**: Large responses stream through
Advanced Optimization
Environment-Based Configuration
```typescript
const isProduction = process.env.NODE_ENV === 'production';

app.use(
  logger({
    // Minimal logging in production; full logging elsewhere
    logBody: !isProduction,
    logResponse: !isProduction,
    logQueryParams: true,
    // Sample logs in production
    logSampling: isProduction ? 0.1 : 1,
    // Skip health checks
    shouldLog: (req, res) => req.path !== '/health',
    // Mask sensitive fields
    maskFields: ['password', 'token', 'secret'],
    // Classify log level by status code
    customLogLevel: (logData) => {
      if (logData.statusCode >= 500) return 'error';
      if (logData.statusCode >= 400) return 'warn';
      return 'info';
    },
  }),
);
```
Traffic-Based Sampling
Adjust sampling based on traffic volume:
```typescript
const getLogSampling = () => {
  const requestsPerSecond = getRequestRate(); // Your metrics function
  if (requestsPerSecond > 1000) return 0.01; // 1% at very high traffic
  if (requestsPerSecond > 500) return 0.05;  // 5% at high traffic
  if (requestsPerSecond > 100) return 0.1;   // 10% at medium traffic
  return 1.0; // 100% at low traffic
};

app.use(
  logger({
    // Note: evaluated once, when the middleware is registered
    logSampling: getLogSampling(),
  }),
);
```
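The `getRequestRate()` call above stands in for your own metrics. A minimal sketch of such a function, assuming an in-process sliding-window counter (the `RequestRateTracker` name and API are hypothetical):

```typescript
// Hypothetical in-process request-rate tracker using a sliding window.
class RequestRateTracker {
  private timestamps: number[] = [];

  // Call once per incoming request.
  record(now: number = Date.now()): void {
    this.timestamps.push(now);
  }

  // Number of requests seen in the last `windowMs` milliseconds.
  rate(now: number = Date.now(), windowMs: number = 1000): number {
    const cutoff = now - windowMs;
    // Drop entries that have aged out of the window.
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    return this.timestamps.length;
  }
}

const tracker = new RequestRateTracker();
tracker.record(0);   // request at t=0ms
tracker.record(100); // request at t=100ms
tracker.record(900); // request at t=900ms
console.log(tracker.rate(1000)); // 2 — the t=0ms request has aged out
```

In practice you would call `tracker.record()` from a lightweight middleware on every request and read `tracker.rate()` inside `getRequestRate()`.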
Async Operations
Optimize `onLog` and `getIpInfo` for performance:

```typescript
app.use(
  logger({
    // Use timeouts to prevent hanging
    getIpInfo: async (ip) => {
      const controller = new AbortController();
      const timeoutId = setTimeout(() => controller.abort(), 1000); // 1s timeout
      try {
        const response = await fetch(`https://ipapi.co/${ip}/json/`, {
          signal: controller.signal,
        });
        return await response.json();
      } catch (error) {
        return {}; // Return empty on timeout/error
      } finally {
        clearTimeout(timeoutId);
      }
    },

    // Batch logs to external service
    onLog: (() => {
      const logBuffer: unknown[] = [];
      const BATCH_SIZE = 100;
      const FLUSH_INTERVAL = 5000; // 5 seconds

      const sendBatch = (batch: unknown[]) =>
        fetch(process.env.LOG_ENDPOINT, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(batch),
        }).catch((error) => {
          console.error('Failed to send log batch:', error);
        });

      // Flush logs periodically
      setInterval(() => {
        if (logBuffer.length > 0) {
          sendBatch(logBuffer.splice(0, logBuffer.length));
        }
      }, FLUSH_INTERVAL);

      return (logData) => {
        logBuffer.push(logData);
        // Flush immediately if buffer is full
        if (logBuffer.length >= BATCH_SIZE) {
          sendBatch(logBuffer.splice(0, BATCH_SIZE));
        }
      };
    })(),
  }),
);
```
Track Logging Overhead
```typescript
app.use(
  logger({
    customFormatter: (logData) => ({
      ...logData,
      // Add logging overhead metric
      loggingOverhead: Date.now() - new Date(logData.timestamp.response).getTime(),
    }),
    onLog: (logData) => {
      // Alert if logging is too slow
      if (logData.loggingOverhead > 100) {
        console.warn('Slow logging detected:', logData.loggingOverhead, 'ms');
      }
    },
  }),
);
```
Track Request Timing
```typescript
app.use(
  logger({
    customFormatter: (logData) => ({
      ...logData,
      // Categorize by speed
      performanceCategory:
        logData.timeTaken < 100 ? 'fast' : logData.timeTaken < 1000 ? 'medium' : 'slow',
    }),
    onLog: (logData) => {
      // Send performance metrics to monitoring
      if (logData.performanceCategory === 'slow') {
        sendToMonitoring({
          metric: 'slow_request',
          value: logData.timeTaken,
          tags: {
            method: logData.method,
            path: logData.url,
          },
        });
      }
    },
  }),
);
```
Best Practices
Start conservative
Begin with minimal logging in production:

```typescript
app.use(
  logger({
    logBody: false,
    logResponse: false,
    logSampling: 0.1,
  }),
);
```
Monitor impact
Track the overhead and adjust as needed:
- Monitor request latency
- Watch memory usage
- Check log volume
Optimize incrementally
- Add body logging for errors only
- Increase sampling for specific routes
- Fine-tune `shouldLog` conditions
Test under load
Use load testing to verify performance:

```shell
# Example with Apache Bench
ab -n 10000 -c 100 http://localhost:3000/api/endpoint
```
- ✅ Use log sampling in high-traffic environments
- ✅ Disable body logging for large payloads
- ✅ Exclude unnecessary headers
- ✅ Skip logging for health checks and static files
- ✅ Set timeouts for external calls (`onLog`, `getIpInfo`)
- ✅ Batch external log shipments
- ✅ Monitor logging overhead
- ✅ Test under production-like load
The middleware is designed to have minimal impact on request processing. Most overhead comes from optional features like body logging and external integrations.
Always set timeouts for getIpInfo and onLog callbacks to prevent slow external services from impacting your application performance.
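That timeout discipline can be factored into a small helper. A sketch (`withTimeout` is not part of HTTP Ledger; it is an assumed utility) that races any promise against a deadline and resolves with a fallback value instead of stalling the request path:

```typescript
// Race a promise against a deadline; resolve with `fallback` on timeout
// or error so a slow external service can never block request processing.
const withTimeout = <T>(promise: Promise<T>, ms: number, fallback: T): Promise<T> =>
  new Promise<T>((resolve) => {
    const timer = setTimeout(() => resolve(fallback), ms);
    promise
      .then((value) => {
        clearTimeout(timer);
        resolve(value);
      })
      .catch(() => {
        clearTimeout(timer);
        resolve(fallback);
      });
  });

// Usage sketch: cap a hypothetical IP lookup at 1 second.
// getIpInfo: (ip) =>
//   withTimeout(fetch(`https://ipapi.co/${ip}/json/`).then((res) => res.json()), 1000, {}),
```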