Overview

The platform uses Redis as a message queue to manage concurrent deployments. This ensures builds are processed sequentially, prevents resource exhaustion, and provides reliable job processing with automatic retry capabilities.

Why Redis Queue?

Sequential Processing

Prevents multiple builds from overwhelming system resources

Reliable Delivery

Atomic operations ensure no deployment is lost

Blocking Operations

Efficient waiting without polling or resource waste

Simple Architecture

No need for complex message brokers like RabbitMQ or Kafka

Queue Architecture

┌─────────────────┐         ┌─────────┐         ┌──────────────────┐
│ Upload Service  │         │  Redis  │         │ Deploy Service   │
│                 │         │  Queue  │         │                  │
│  LPUSH(id) ────┼────────▶│         │◀────────┼──── BLPOP(id)   │
│                 │         │  FIFO   │         │                  │
└─────────────────┘         └─────────┘         └──────────────────┘
     Producer                  Queue                 Consumer
The queue follows a FIFO (First In, First Out) pattern, ensuring deployments are processed in submission order.

Producer: Upload Service

The upload service pushes deployment IDs onto the queue after successful S3 upload.

Implementation

File: upload-service/src/utils/buildQueue.ts
import { createClient } from 'redis';

const client = createClient()
client.on('error', (error) => {
  console.error('Redis error:', error)
})
client.connect()

export const buildQueue = async (id: string) => {
  try {
    await client.LPUSH(
      'build-queue',
      id
    )
  } catch (error) {
    console.error('Redis error:', error)
  }
}
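As written, `buildQueue` only logs failures, so a failed LPUSH silently drops the deployment. One way to harden it is a retry wrapper. The sketch below is hypothetical, not code from the repository; the `push` parameter stands in for the real `client.LPUSH('build-queue', id)` call:

```typescript
// Hypothetical retry wrapper: `push` stands in for
// client.LPUSH('build-queue', id).
async function pushWithRetry(
  push: (id: string) => Promise<unknown>,
  id: string,
  attempts = 3
): Promise<boolean> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      await push(id);
      return true; // enqueued successfully
    } catch (error) {
      console.error(`LPUSH attempt ${attempt} failed:`, error);
    }
  }
  return false; // caller can return a 500 instead of silently losing the job
}
```

If the final attempt fails, the upload handler could surface the error to the client rather than reporting success for a deployment that never entered the queue.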

Usage in Upload Flow

File: upload-service/src/server.ts:43
app.post('/get/url', async (req, res) => {
  const repoUrl = req.body.url
  const randomId = generateRandomId()
  
  try {
    await git.clone(repoUrl, clonePath)
    const files = getAllFiles(clonePath)
    
    // Upload all files to S3
    await Promise.all(uploadPromises);
    
    // Add to build queue
    buildQueue(randomId)
    
    res.json({ success: true, id: randomId })
  } catch (error) {
    res.status(500).json({ error: 'Failed to clone repository' })
  }
})

1. Queue Initialization

Redis client connects on service startup
const client = createClient()
client.connect()

2. Error Handling

Connection errors are logged but don’t crash the service
client.on('error', (error) => {
  console.error('Redis error:', error)
})

3. Queue Push

Deployment ID is pushed to the left of the queue
await client.LPUSH('build-queue', id)
Redis Queue State:
["xY9kL3mN2", "aB3xK9mP2q", "pQ7rS1tU4v"] → LPUSH "zW8vB5nC6"
["zW8vB5nC6", "xY9kL3mN2", "aB3xK9mP2q", "pQ7rS1tU4v"]

Consumer: Deploy Service

The deploy service continuously consumes the queue using blocking pop operations.

Implementation

File: deploy-service/src/server.ts
import { commandOptions, createClient } from "redis";
import { downloadS3Folder } from "./utils/downloadS3Folder";
import { buildProject, copyFinalDist } from "./utils/buildProject";

const client = await createClient()
  .on('error', err => console.log('Redis Client Error', err))
  .on('connect', () => console.log('Redis Client Connected'))
  .connect();

export async function getIdFromQueue() {
  while (true) {
    const response = await client.blPop(
      commandOptions({ isolated: true }),
      'build-queue',
      0
    )
    
    console.log('Response', response)
    if (response) {
      await downloadS3Folder(`output/${response.element}`)
      await buildProject(response.element)
      await copyFinalDist(response.element);
    }
  }
}

getIdFromQueue()

Key Components

while (true) {
  // Continuously process queue
}
Purpose: Keeps the service always listening for new deployments
Why infinite?
  • The service should never stop processing
  • blPop blocks until items are available
  • No CPU waste from polling
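The loop shape can be exercised without a live Redis by injecting the blocking pop and the processing step. Both parameters are hypothetical stand-ins: `pop` models `client.blPop` (resolving to `{ key, element }`, or `null` on timeout), and `process` models the download/build/copy pipeline:

```typescript
// Generic consumer loop: awaiting `pop` is where blPop would block,
// so the loop only advances when work actually arrives.
async function consumeQueue(
  pop: () => Promise<{ key: string; element: string } | null>,
  process: (id: string) => Promise<void>,
  keepRunning: () => boolean = () => true // real service runs forever
): Promise<void> {
  while (keepRunning()) {
    const response = await pop();
    if (response) {
      await process(response.element); // one deployment at a time
    }
  }
}
```

In the real service `keepRunning` is effectively `() => true`; the parameter exists here only so the loop can terminate in a test.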

Queue Operations

LPUSH (Left Push)

Producer Operation
await client.LPUSH('build-queue', id)
Visualization:
Initial:  ["B", "A"]
LPUSH C:  ["C", "B", "A"]  ← New item on left
Characteristics:
  • Time Complexity: O(1)
  • Atomic: Guaranteed to succeed or fail completely
  • Return Value: New queue length

BLPOP (Blocking Left Pop)

Consumer Operation
const response = await client.blPop(
  commandOptions({ isolated: true }),
  'build-queue',
  0
)
Visualization:
Initial:  ["C", "B", "A"]
BLPOP:    ["B", "A"]       ← Removed from left (head)
Returns:  { key: 'build-queue', element: 'C' }
Characteristics:
  • Time Complexity: O(1)
  • Blocking: Waits until queue has items
  • Timeout: 0 = wait forever, N = wait N seconds
  • Return Value: { key, element } or null on timeout
Note: BLPOP pops from the left (head) of the list, the same end LPUSH pushes to, so pairing LPUSH with BLPOP behaves like a LIFO stack. To guarantee the FIFO ordering described here, pair LPUSH with brPop (or rPush with blPop), which pops from the opposite end.
Example Flow (LPUSH producer, BRPOP consumer):
LPUSH "deploy-1"  →  ["deploy-1"]
LPUSH "deploy-2"  →  ["deploy-2", "deploy-1"]
LPUSH "deploy-3"  →  ["deploy-3", "deploy-2", "deploy-1"]

BRPOP             →  Returns "deploy-1" (oldest)
BRPOP             →  Returns "deploy-2" (second oldest)
BRPOP             →  Returns "deploy-3" (newest)
This ensures First In, First Out ordering.
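The two list ends can be checked with a plain array: `unshift` models LPUSH (head insert), `shift` models a head pop (the LPOP/BLPOP side), and `pop` models a tail pop (the RPOP/BRPOP side):

```typescript
// Index 0 is the head (left) of the list; the last index is the tail (right).
const list: string[] = [];
for (const id of ['deploy-1', 'deploy-2', 'deploy-3']) {
  list.unshift(id); // LPUSH: each new item lands at the head
}
// list: ['deploy-3', 'deploy-2', 'deploy-1']

const tailPop = list.pop();   // BRPOP-style: 'deploy-1', the oldest (FIFO)
const headPop = list.shift(); // BLPOP-style: 'deploy-3', the newest (LIFO)
```

Popping from the tail yields submission order, which is why LPUSH is conventionally paired with a right pop when FIFO behavior is required.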

Connection Management

Upload Service Connection

const client = createClient()
client.on('error', (error) => {
  console.error('Redis error:', error)
})
client.connect()
Connection Lifecycle:
  1. Client created (not connected)
  2. Error handler registered
  3. Explicit connection established
  4. Ready for LPUSH operations

Deploy Service Connection

const client = await createClient()
  .on('error', err => console.log('Redis Client Error', err))
  .on('connect', () => console.log('Redis Client Connected'))
  .connect();
Connection Lifecycle:
  1. Client created with chained event handlers
  2. Connection event logs successful connection
  3. await connect() waits for connection to establish
  4. Ready for BLPOP operations
The deploy service uses await connect() to ensure Redis is ready before starting the queue consumer loop.

Error Handling

Both Services:
client.on('error', (error) => {
  console.error('Redis error:', error)
})
Common Errors:
  • Redis server not running
  • Network connectivity issues
  • Authentication failures
  • Connection timeout
Behavior:
  • Errors are logged
  • Connection auto-retry (redis client default)
  • Service continues running

Concurrency Model

Single Consumer

Current Implementation:
while (1) {
  const response = await client.blPop(...)
  // Sequential processing
  await downloadS3Folder(...)
  await buildProject(...)
  await copyFinalDist(...)
}
Characteristics:
  • One deployment processed at a time
  • Predictable resource usage
  • Simple to reason about
  • May be slow under high load

Multiple Consumers (Scaling)

To scale, run multiple deploy service instances:
# Terminal 1
npx ts-node deploy-service/src/server.ts

# Terminal 2
npx ts-node deploy-service/src/server.ts

# Terminal 3
npx ts-node deploy-service/src/server.ts
How It Works:
  • Each instance runs its own blPop loop
  • Redis guarantees each queue item goes to only one consumer
  • Automatic load balancing
  • Parallelism limited by CPU/memory
Example:
Queue: ["deploy-1", "deploy-2", "deploy-3", "deploy-4"]

Consumer 1: Gets "deploy-1" and "deploy-4"
Consumer 2: Gets "deploy-2"
Consumer 3: Gets "deploy-3"
Benefits:
  • Higher throughput
  • Better resource utilization
  • Fault tolerance (if one consumer crashes, others continue)
Trade-offs:
  • More complex deployment
  • Higher resource usage
  • Potential Docker build contention
  • S3 rate limits may apply
Optimal Setup:
  • Start with 1 consumer
  • Monitor queue length and processing time
  • Add consumers when queue consistently backs up
  • Don’t exceed CPU core count for CPU-bound builds
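The one-item-one-consumer guarantee can be simulated with a shared in-memory list. Here `shift` stands in for the atomic pop, and round-robin assignment is a simplification of "whichever consumer's blPop happens to be waiting":

```typescript
// Distribute queue items across N consumers; each item is delivered exactly once.
function distribute(queue: string[], consumers: number): string[][] {
  const assigned: string[][] = Array.from({ length: consumers }, () => []);
  let turn = 0;
  while (queue.length > 0) {
    const item = queue.shift()!;   // atomic pop: only one consumer receives it
    assigned[turn].push(item);
    turn = (turn + 1) % consumers; // simplified stand-in for the free consumer
  }
  return assigned;
}

const result = distribute(['deploy-1', 'deploy-2', 'deploy-3', 'deploy-4'], 3);
// result[0]: ['deploy-1', 'deploy-4'], result[1]: ['deploy-2'], result[2]: ['deploy-3']
```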

Monitoring Queue Health

Check Queue Length

redis-cli LLEN build-queue
Interpretation:
  • 0: No pending deployments (healthy)
  • 1-5: Normal operation
  • 10+: Queue backing up, consider scaling
  • 100+: System overloaded, investigate
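These thresholds (illustrative guidance, not enforced anywhere in the services) can be folded into a small helper fed by the `LLEN` result:

```typescript
// Classify the LLEN of build-queue using the illustrative thresholds above.
type QueueHealth = 'healthy' | 'normal' | 'backing-up' | 'overloaded';

function queueHealth(length: number): QueueHealth {
  if (length >= 100) return 'overloaded'; // investigate the system
  if (length >= 10) return 'backing-up';  // consider adding consumers
  if (length >= 1) return 'normal';       // builds flowing through
  return 'healthy';                       // nothing pending
}
```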

View Queue Contents

# View all items (non-destructive)
redis-cli LRANGE build-queue 0 -1
Output:
1) "xY9kL3mN2"
2) "aB3xK9mP2q"
3) "pQ7rS1tU4v"

Monitor Processing Rate

# Watch queue length in real-time
watch -n 1 'redis-cli LLEN build-queue'

Queue vs. Direct Processing

Without Queue:
app.post('/get/url', async (req, res) => {
  await git.clone(...)
  await uploadToS3(...)
  await buildProject(...)  // ❌ Blocks response
  res.json({ success: true })
})
Problems:
  • User waits 30-60 seconds for response
  • Multiple concurrent requests spawn multiple builds
  • Server can run out of memory/CPU
  • No way to prioritize or delay builds
With Queue:
app.post('/get/url', async (req, res) => {
  await git.clone(...)
  await uploadToS3(...)
  buildQueue(id)  // ✓ Immediate response
  res.json({ success: true, id })
})
Benefits:
  • User gets instant feedback
  • Builds processed at controlled rate
  • System resources protected
  • Can monitor/retry failed builds

Key Implementation Files

File                                     Lines   Purpose
upload-service/src/utils/buildQueue.ts   1-18    Queue producer (LPUSH)
deploy-service/src/server.ts             1-29    Queue consumer (BLPOP)
upload-service/src/server.ts             43      Queue integration in upload flow

Next Steps

Deployment Process

See how queue fits into overall deployment flow

Build System

Learn what happens after queue pop
