Overview
The platform uses Redis as a message queue to manage concurrent deployments. This ensures builds are processed sequentially, prevents resource exhaustion, and provides reliable job processing with automatic retry capabilities.

Why Redis Queue?
- Sequential Processing: prevents multiple builds from overwhelming system resources
- Reliable Delivery: atomic operations ensure no deployment is lost
- Blocking Operations: efficient waiting without polling or resource waste
- Simple Architecture: no need for complex message brokers like RabbitMQ or Kafka
Queue Architecture
The queue follows a FIFO (First In, First Out) pattern, ensuring deployments are processed in submission order.
Producer: Upload Service
The upload service pushes deployment IDs onto the queue after a successful S3 upload.

Implementation
File: upload-service/src/utils/buildQueue.ts
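The listing itself is not reproduced on this page. A minimal sketch of what such a producer could look like, assuming a node-redis-v4-style `lPush(key, element)` with the client abstracted behind an interface; the key name `build-queue` is a placeholder, not necessarily the real one:

```typescript
// Illustrative sketch of the queue producer, not the actual buildQueue.ts source.
// Assumes a node-redis-v4-style lPush(key, element); "build-queue" is a
// placeholder key name.
export interface QueueClient {
  lPush(key: string, element: string): Promise<number>;
}

// LPUSH is O(1) and atomic: the deployment ID is either enqueued or the call
// throws; there is no partial state. Returns the new queue length.
export async function pushToQueue(
  client: QueueClient,
  deploymentId: string
): Promise<number> {
  return client.lPush("build-queue", deploymentId);
}
```

In the real service the `client` would be the shared Redis connection established at startup.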
Usage in Upload Flow
File: upload-service/src/server.ts:43
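The referenced integration point is not shown here. The essential ordering can be sketched with the two steps injected as functions; `uploadToS3` and `pushToQueue` are hypothetical stand-ins for the real helpers:

```typescript
// Illustrative sketch of the upload flow's ordering; uploadToS3 and pushToQueue
// are injected stand-ins for the real helpers.
type Step = (deploymentId: string) => Promise<void>;

// Enqueue only after the S3 upload succeeds, so the consumer never pops a
// deployment whose source files are missing. The caller responds to the user
// immediately after the push; the build itself happens later.
export async function finishUpload(
  deploymentId: string,
  uploadToS3: Step,
  pushToQueue: Step
): Promise<{ id: string }> {
  await uploadToS3(deploymentId);
  await pushToQueue(deploymentId);
  return { id: deploymentId };
}
```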
Consumer: Deploy Service
The deploy service continuously consumes the queue using blocking pop operations.

Implementation
File: deploy-service/src/server.ts
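The consumer source is not reproduced here. A sketch of the loop's shape, assuming node-redis v4's `blPop(key, timeout)` signature (resolves to `{ key, element }` or `null` on timeout) with the client behind an interface; the key name and `processDeployment` are placeholders, and `maxIterations` exists only so the sketch can terminate in tests:

```typescript
// Illustrative sketch of the consumer loop. Assumes a node-redis-v4-style
// blPop; "build-queue" and processDeployment are placeholders.
export interface BlockingQueueClient {
  blPop(key: string, timeout: number): Promise<{ key: string; element: string } | null>;
}

export async function consumeLoop(
  client: BlockingQueueClient,
  processDeployment: (id: string) => Promise<void>,
  maxIterations = Infinity // bounded only for testing; the real loop never exits
): Promise<void> {
  for (let i = 0; i < maxIterations; i++) {
    // timeout 0 = block forever; no CPU is spent while the queue is empty
    const item = await client.blPop("build-queue", 0);
    if (!item) continue;
    try {
      await processDeployment(item.element);
    } catch (err) {
      // a failed build must not kill the loop
      console.error("deployment failed:", err);
    }
  }
}
```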
Key Components
- Infinite Loop: the service must never stop processing, so the consumer runs in an endless loop
- Blocking Pop (BLPOP): blPop blocks until an item is available, so no CPU is wasted on polling
- Isolated Connection: a blocking command occupies its connection, so the consumer keeps a dedicated Redis client
- Processing: each popped deployment ID is handed to the build pipeline before the next pop
Queue Operations
LPUSH (Left Push)
Producer operation:
- Time Complexity: O(1)
- Atomic: Guaranteed to succeed or fail completely
- Return Value: New queue length
BLPOP (Blocking Left Pop)
Consumer operation:
- Time Complexity: O(1)
- Blocking: Waits until queue has items
- Timeout: 0 = wait forever, N = wait N seconds
- Return Value: { key, element } or null on timeout
LPUSH, BLPOP, and Ordering
LPUSH adds to the left (head) of the list, and BLPOP also removes from the left (head). On its own, that pairing behaves as a stack (LIFO): after LPUSH a then LPUSH b, the list is [b, a], and BLPOP returns b first. First In, First Out ordering requires popping from the opposite end, i.e. LPUSH with BRPOP (or RPUSH with BLPOP), which returns a first in the example above.
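The direction semantics can be checked against an in-memory model of a Redis list (in Redis, BLPOP, like LPOP, removes from the head/left end):

```typescript
// In-memory model of Redis list directions: items[0] plays the head (left end).
class FakeList {
  private items: string[] = [];
  lPush(v: string): number { this.items.unshift(v); return this.items.length; } // LPUSH: insert at head
  lPop(): string | undefined { return this.items.shift(); }  // LPOP / BLPOP: remove from head
  rPop(): string | undefined { return this.items.pop(); }    // RPOP / BRPOP: remove from tail
}

const q = new FakeList();
q.lPush("first");  // list: [first]
q.lPush("second"); // list: [second, first]
// Head pop returns "second" (newest): LPUSH + BLPOP acts as a stack (LIFO).
// Tail pop returns "first" (oldest): LPUSH + BRPOP preserves submission order (FIFO).
```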
Connection Management
Upload Service Connection
- Client created (not connected)
- Error handler registered
- Explicit connection established
- Ready for LPUSH operations
Deploy Service Connection
- Client created with chained event handlers
- Connection event logs successful connection
- await connect() waits for the connection to establish
- Ready for BLPOP operations

The deploy service uses await connect() to ensure Redis is ready before starting the queue consumer loop.

Error Handling
- Connection Errors
- Queue Operation Errors
- Processing Errors
Common Errors (both services):
- Redis server not running
- Network connectivity issues
- Authentication failures
- Connection timeout

Handling:
- Errors are logged
- Connection auto-retry (redis client default)
- Service continues running
Concurrency Model
Single Consumer
Current Implementation:
- One deployment processed at a time
- Predictable resource usage
- Simple to reason about
- May be slow under high load
Multiple Consumers (Scaling)
To scale, run multiple deploy service instances:
- Each instance runs its own blPop loop
- Redis guarantees each queue item goes to only one consumer
- Automatic load balancing
- Parallelism limited by CPU/memory
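The "each item goes to only one consumer" guarantee can be illustrated with two simulated consumers draining one shared queue, where a pop atomically removes the item (real Redis provides this atomicity server-side):

```typescript
// Simulation: two consumers share one queue; a pop removes the item atomically,
// so each deployment is processed by exactly one consumer.
async function runConsumer(name: string, queue: string[], log: string[]): Promise<void> {
  for (;;) {
    const item = queue.shift();       // stands in for an atomic BLPOP
    if (item === undefined) return;   // a real consumer would block here instead
    await new Promise((r) => setTimeout(r, 1)); // simulated build work
    log.push(`${name} built ${item}`);
  }
}

const queue = ["d1", "d2", "d3", "d4"];
const log: string[] = [];
// Two consumer instances drain the same queue concurrently.
const done = Promise.all([runConsumer("A", queue, log), runConsumer("B", queue, log)]);
```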
Scaling Considerations
Benefits:
- Higher throughput
- Better resource utilization
- Fault tolerance (if one consumer crashes, others continue)
Costs:
- More complex deployment
- Higher resource usage
- Potential Docker build contention
- S3 rate limits may apply

Recommendations:
- Start with 1 consumer
- Monitor queue length and processing time
- Add consumers when queue consistently backs up
- Don’t exceed CPU core count for CPU-bound builds
Monitoring Queue Health
Check Queue Length
- 0: No pending deployments (healthy)
- 1-5: Normal operation
- 10+: Queue backing up, consider scaling
- 100+: System overloaded, investigate
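Queue length comes from the Redis `LLEN` command. The thresholds above can be encoded as a small helper; note the 6-9 range is not covered by the table, so treating it as normal operation here is an assumption:

```typescript
// Map a queue length (as returned by LLEN) onto the health thresholds above.
// Lengths 6-9 are not covered by the table and are treated as normal here.
export function queueHealth(length: number): string {
  if (length >= 100) return "overloaded: investigate";
  if (length >= 10) return "backing up: consider scaling consumers";
  if (length >= 1) return "normal operation";
  return "healthy: no pending deployments";
}
```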
View Queue Contents
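List contents can be read non-destructively with the Redis `LRANGE` command. A sketch assuming a node-redis-v4-style `lRange(key, start, stop)`, with the placeholder key `build-queue`:

```typescript
// Sketch: inspect pending deployment IDs without removing them.
// Assumes a node-redis-v4-style lRange; "build-queue" is a placeholder key.
export interface ReadableQueue {
  lRange(key: string, start: number, stop: number): Promise<string[]>;
}

// LRANGE 0 -1 returns the whole list; element 0 is the head, i.e. the item
// most recently added by LPUSH.
export async function viewQueue(client: ReadableQueue): Promise<string[]> {
  return client.lRange("build-queue", 0, -1);
}
```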
Monitor Processing Rate
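One simple way to watch processing rate is for the consumer to record a timestamp after each completed build and count completions inside a sliding window; the window size and the integration point are assumptions, not taken from the codebase:

```typescript
// Sketch: track processing rate by recording completion timestamps and
// counting how many fall inside a sliding window (default: one minute).
export class RateMeter {
  private completions: number[] = [];
  constructor(private windowMs = 60_000) {}

  // Call once per completed deployment.
  record(now = Date.now()): void { this.completions.push(now); }

  // Number of completions within the last windowMs milliseconds.
  perWindow(now = Date.now()): number {
    this.completions = this.completions.filter((t) => now - t <= this.windowMs);
    return this.completions.length;
  }
}
```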
Queue vs. Direct Processing
Why Not Process Immediately?
Problems without a queue:
- User waits 30-60 seconds for a response
- Multiple concurrent requests spawn multiple builds
- Server can run out of memory/CPU
- No way to prioritize or delay builds

Benefits with the queue:
- User gets instant feedback
- Builds processed at a controlled rate
- System resources protected
- Failed builds can be monitored/retried
Key Implementation Files
| File | Lines | Purpose |
|---|---|---|
| upload-service/src/utils/buildQueue.ts | 1-18 | Queue producer (LPUSH) |
| deploy-service/src/server.ts | 1-29 | Queue consumer (BLPOP) |
| upload-service/src/server.ts | 43 | Queue integration in upload flow |
Next Steps
Deployment Process
See how queue fits into overall deployment flow
Build System
Learn what happens after queue pop