By default, BullMQ keeps completed and failed jobs in Redis indefinitely. Auto-removal strategies help manage Redis memory by automatically cleaning up finalized jobs.
## Why Auto-removal?

Without auto-removal:

- **Memory bloat**: completed and failed jobs accumulate in Redis
- **Performance degradation**: large sets slow down Redis operations
- **Cost increase**: more Redis memory is required

With auto-removal:

- **Controlled memory usage**: keep only recent or important jobs
- **Better performance**: smaller Redis sets
- **Cost efficiency**: optimal Redis sizing
## Configuration Options

Configure auto-removal in the worker options:

```typescript
import { Worker } from 'bullmq';

const worker = new Worker(
  'queueName',
  async (job) => {
    return await processJob(job);
  },
  {
    connection: {
      host: 'localhost',
      port: 6379,
    },
    removeOnComplete: { count: 1000 },
    removeOnFail: { count: 5000 },
  },
);
```
### removeOnComplete

Control how completed jobs are removed:

```typescript
import { WorkerOptions } from 'bullmq';

const options: WorkerOptions = {
  removeOnComplete: {
    count: 1000, // Keep the last 1000 completed jobs
  },
};
```
### removeOnFail

Control how failed jobs are removed:

```typescript
const options: WorkerOptions = {
  removeOnFail: {
    count: 5000, // Keep the last 5000 failed jobs
  },
};
```

A common practice is to keep fewer completed jobs and more failed jobs for debugging purposes.
## Removal Strategies

### Remove All Jobs

Remove jobs immediately after they finish:

```typescript
const worker = new Worker(
  'queueName',
  processorFunction,
  {
    connection,
    removeOnComplete: { count: 0 }, // Remove all completed jobs
    removeOnFail: { count: 0 }, // Remove all failed jobs
  },
);
```

Jobs are removed regardless of their names: every completed or failed job is deleted.

**Use case**: high-volume queues where job results aren't needed after processing.
### Keep a Fixed Number of Jobs

Keep the most recent N jobs:

```typescript
const worker = new Worker(
  'queueName',
  processorFunction,
  {
    connection,
    removeOnComplete: { count: 1000 },
    removeOnFail: { count: 5000 },
  },
);
```

How it works:

- When job #1001 completes, job #1 is removed
- Keeps a rolling window of recent jobs
- Maintains a fixed memory footprint
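This rolling window can be modelled with a plain function. The sketch below is illustrative only (`trimToCount` is a hypothetical helper, not BullMQ code; the real trimming happens inside Redis):

```typescript
// Simplified model of count-based retention: after each completion,
// keep only the most recent `count` finished jobs.
// (Illustrative only -- BullMQ performs this trimming inside Redis.)
function trimToCount<T>(finished: T[], count: number): T[] {
  // Newest jobs are at the end of the array; drop the oldest overflow.
  return count >= finished.length
    ? finished
    : finished.slice(finished.length - count);
}

// Jobs #1..#1001 complete with removeOnComplete: { count: 1000 }
const finished = Array.from({ length: 1001 }, (_, i) => i + 1);
const kept = trimToCount(finished, 1000);

console.log(kept[0]); // → 2  (job #1 was removed, #2 is now the oldest kept)
console.log(kept.length); // → 1000
```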
### Keep Jobs by Age

Keep jobs up to a certain age (in seconds):

```typescript
const worker = new Worker(
  'queueName',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      age: 3600, // Keep jobs up to 1 hour old
    },
    removeOnFail: {
      age: 24 * 3600, // Keep jobs up to 24 hours old
    },
  },
);
```

Time units:

- `3600` = 1 hour
- `24 * 3600` = 24 hours
- `7 * 24 * 3600` = 7 days
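To keep these values readable, a small helper can compute the age in seconds. This is plain arithmetic; `SECONDS` and `ageInSeconds` are illustrative names, not part of BullMQ:

```typescript
// Illustrative helper (not part of BullMQ) for readable age values.
const SECONDS = { hour: 3600, day: 24 * 3600, week: 7 * 24 * 3600 } as const;

function ageInSeconds(amount: number, unit: keyof typeof SECONDS): number {
  return amount * SECONDS[unit];
}

// Usage in worker options:
// removeOnComplete: { age: ageInSeconds(1, 'hour') },  // 3600
// removeOnFail: { age: ageInSeconds(7, 'day') },       // 604800
```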
### Combine Age and Count

Use both strategies together for finer control:

```typescript
const worker = new Worker(
  'queueName',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      age: 3600, // Keep up to 1 hour old
      count: 1000, // But no more than 1000 jobs
    },
    removeOnFail: {
      age: 24 * 3600, // Keep up to 24 hours old
      count: 5000, // But no more than 5000 jobs
    },
  },
);
```

**Behavior**: only jobs that satisfy BOTH conditions are kept.
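The combined rule can be sketched as a pure function. This is a simplified model of the behavior, not BullMQ's internal implementation; `FinishedJob` and `keptJobs` are hypothetical names:

```typescript
// Simplified model of combined age + count retention (illustrative only):
// a job survives cleanup only if it is young enough AND falls within the
// most-recent `count` window.
interface FinishedJob {
  id: number;
  finishedOn: number; // epoch milliseconds
}

function keptJobs(
  jobs: FinishedJob[], // assumed sorted oldest -> newest
  ageSec: number,
  count: number,
  now: number,
): FinishedJob[] {
  const youngEnough = jobs.filter((j) => now - j.finishedOn <= ageSec * 1000);
  // Of the young-enough jobs, keep at most `count` of the most recent.
  return youngEnough.slice(Math.max(0, youngEnough.length - count));
}

const now = Date.now();
const jobs = [
  { id: 1, finishedOn: now - 2 * 3600 * 1000 }, // 2h old -- fails the age check
  { id: 2, finishedOn: now - 30 * 60 * 1000 },  // 30m old
  { id: 3, finishedOn: now - 60 * 1000 },       // 1m old
];

// age: 3600, count: 1 -> only job #3 satisfies both conditions
console.log(keptJobs(jobs, 3600, 1, now).map((j) => j.id)); // → [ 3 ]
```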
### Limit Removals per Cleanup

Limit the number of jobs removed per cleanup iteration:

```typescript
const worker = new Worker(
  'queueName',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      age: 3600,
      count: 1000,
      limit: 100, // Remove at most 100 jobs per cleanup
    },
    removeOnFail: {
      age: 24 * 3600,
      limit: 50, // Remove at most 50 jobs per cleanup
    },
  },
);
```

**Use case**: prevent cleanup operations from blocking Redis for too long.
## KeepJobs Type Reference

The `KeepJobs` type supports two formats:

```typescript
type KeepJobs =
  | {
      /**
       * Maximum count of jobs to be kept.
       */
      count: number;
    }
  | {
      /**
       * Maximum age in seconds for jobs to be kept.
       */
      age: number;
      /**
       * Maximum count of jobs to be kept.
       */
      count?: number;
      /**
       * Maximum quantity of jobs to be removed per cleanup iteration.
       */
      limit?: number;
    };
```
## Auto-removal Behavior

### Lazy Cleanup

Auto-removal works **lazily**: jobs are only removed when a new job completes or fails, which triggers the cleanup process.

```typescript
// Cleanup happens here ⬇
worker.on('completed', (job) => {
  // After this job completes, old jobs are removed
});

// And here ⬇
worker.on('failed', (job) => {
  // After this job fails, old jobs are removed
});
```

Implications:

- If no jobs are being processed, no cleanup happens
- Cleanup work is distributed across job completions
- There is no dedicated background cleanup process
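The lazy trigger can be illustrated with a toy model. `ToyQueue` is purely illustrative (real cleanup runs inside BullMQ's Redis scripts when a job finishes), but it shows that trimming happens only as a side effect of a completion:

```typescript
// Toy simulation of lazy cleanup (not BullMQ code): trimming only runs
// as a side effect of a job finishing -- there is no background timer.
class ToyQueue {
  completed: number[] = [];

  constructor(private keepCount: number) {}

  finish(jobId: number): void {
    this.completed.push(jobId);
    this.cleanup(); // cleanup piggybacks on the completion; nothing else triggers it
  }

  private cleanup(): void {
    while (this.completed.length > this.keepCount) {
      this.completed.shift(); // drop the oldest finished job
    }
  }
}

const q = new ToyQueue(2);
q.finish(1);
q.finish(2);
q.finish(3); // completing #3 triggers removal of #1

console.log(q.completed); // → [ 2, 3 ]
```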
### Per-job Cleanup

Each job completion or failure triggers a cleanup pass:

```typescript
const worker = new Worker('queue', processor, {
  removeOnComplete: { count: 100, limit: 10 },
});

// Job 101 completes
// ➡ Cleanup runs: removes up to 10 old jobs
// Job 102 completes
// ➡ Cleanup runs again: removes up to 10 more old jobs
```
## Production Examples

### High-Volume Queue

Keep minimal history:

```typescript
const worker = new Worker(
  'highVolumeQueue',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      count: 100, // Keep only the last 100 completed jobs
      limit: 50, // Remove 50 at a time
    },
    removeOnFail: {
      count: 500, // Keep more failures for debugging
      limit: 25,
    },
  },
);
```
### Audit Trail Queue

Keep jobs longer for compliance:

```typescript
const worker = new Worker(
  'auditQueue',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      age: 30 * 24 * 3600, // 30 days
      count: 50000, // Up to 50k jobs
      limit: 100,
    },
    removeOnFail: {
      age: 90 * 24 * 3600, // 90 days for failures
      count: 10000,
      limit: 50,
    },
  },
);
```
### Development Queue

Keep everything for debugging:

```typescript
const worker = new Worker(
  'devQueue',
  processorFunction,
  {
    connection,
    // Don't set removeOnComplete/removeOnFail:
    // jobs are kept indefinitely
  },
);
```
### Critical Queue

Balance debugging needs against memory:

```typescript
const worker = new Worker(
  'criticalQueue',
  processorFunction,
  {
    connection,
    removeOnComplete: {
      age: 7 * 24 * 3600, // 7 days
      count: 10000,
      limit: 100,
    },
    // removeOnFail is intentionally omitted:
    // ALL failed jobs are kept, so manual cleanup is required
  },
);
```
## Monitoring Auto-removal

Track job counts to verify that cleanup is working:

```typescript
import { Queue } from 'bullmq';

const queue = new Queue('queueName');

setInterval(async () => {
  const completedCount = await queue.getCompletedCount();
  const failedCount = await queue.getFailedCount();

  console.log(`Completed: ${completedCount}, Failed: ${failedCount}`);

  // Alert if counts exceed thresholds
  if (completedCount > 2000 || failedCount > 10000) {
    console.warn('Job counts exceed expected limits!');
  }
}, 60000); // Check every minute
```
## Manual Cleanup

For one-time or scheduled cleanup, use `Queue.clean()`. Note that its grace period is given in milliseconds, unlike the age option, which uses seconds:

```typescript
import { Queue } from 'bullmq';

const queue = new Queue('queueName');

// Remove up to 100 completed jobs older than 1 hour
await queue.clean(3600 * 1000, 100, 'completed');

// Remove up to 100 failed jobs older than 24 hours
await queue.clean(24 * 3600 * 1000, 100, 'failed');

// Remove all completed jobs
await queue.clean(0, 0, 'completed');
```
## Best Practices

1. **Keep fewer completed jobs.** Completed jobs are usually less interesting than failures:

   ```typescript
   removeOnComplete: { count: 1000 },
   removeOnFail: { count: 5000 },
   ```

2. **Use age-based removal for compliance.** When regulatory requirements dictate retention periods:

   ```typescript
   removeOnComplete: { age: 30 * 24 * 3600 }, // 30 days
   ```

3. **Set cleanup limits for high-volume queues.** Prevent cleanup from blocking Redis:

   ```typescript
   removeOnComplete: { count: 100, limit: 50 },
   ```

4. **Monitor job counts.** Alert when counts exceed expected thresholds.

5. **Test in staging first.** Verify removal settings don't delete jobs you still need.

6. **Consider external archiving.** For long-term storage, export jobs before removal:

   ```typescript
   worker.on('completed', async (job, result) => {
     await archiveToDatabase(job, result);
   });
   ```
## Troubleshooting

### Jobs Not Being Removed

**Cause**: no jobs are completing or failing, so cleanup is never triggered.

**Solution**: ensure workers are actively processing:

```typescript
// Check whether the worker is running
if (!worker.isRunning()) {
  console.log('Worker is not running - no cleanup will occur');
}
```

### Too Many Jobs Accumulating

**Cause**: cleanup settings are too lenient, or no cleanup is configured.

**Solution**: adjust the removal settings:

```typescript
// Before: too lenient
removeOnComplete: { count: 100000 },

// After: more aggressive
removeOnComplete: { count: 1000, limit: 100 },
```

### Redis Memory Still Growing

**Cause**: jobs are accumulating in other states (delayed, waiting, active).

**Solution**: monitor and clean the other job states:

```typescript
const counts = await queue.getJobCounts('waiting', 'delayed', 'active');
console.log('Other job states:', counts);

// Clean stuck jobs if needed
await queue.clean(3600 * 1000, 100, 'active'); // Stalled jobs
```
## Related Topics

- Worker Options: configure worker behavior
- Queue Management: manage jobs in queues
- Job Lifecycle: understand job states
- Redis Optimization: optimize Redis usage
- API Reference