
Overview

The build system uses Docker containers to provide isolated, reproducible builds for every deployment. Each project is built in its own ephemeral container, ensuring build consistency and preventing dependency conflicts.

Architecture

S3 Source Files → Download → Generate Dockerfile → Docker Build → 
Extract Output → Upload to S3 → Cleanup
All builds run in isolated Docker containers with Node.js 20 Alpine Linux, providing a lightweight and consistent build environment.

Build Process

1. Download Source Files

The deploy service downloads the project source from S3.
Implementation: deploy-service/src/utils/donwloadS3Folde.ts:13-64
export async function downloadS3Folder(S3Path: string) {
  const allFiles = await s3.listObjectsV2({
    Bucket: "vercel-frontend",
    Prefix: S3Path
  }).promise()
  
  const allPromise = allFiles.Contents?.map(async ({ Key }) => {
    const finalOutput = path.join(distPath, Key)
    const dirName = path.dirname(finalOutput)
    
    if (!fs.existsSync(dirName)) {
      fs.mkdirSync(dirName, { recursive: true })
    }
    
    // Awaiting .pipe() alone would resolve before the write completes,
    // so wrap the stream in a Promise that settles on 'finish'
    const outputFile = fs.createWriteStream(finalOutput)
    await new Promise<void>((resolve, reject) => {
      s3.getObject({
        Bucket: "vercel-frontend",
        Key
      }).createReadStream()
        .pipe(outputFile)
        .on('finish', () => resolve())
        .on('error', reject)
    })
  })
  
  await Promise.all(allPromise ?? [])
}
The function:
  • Lists all objects with the deployment ID prefix
  • Creates necessary directory structure locally
  • Downloads files in parallel using streams
  • Recreates the original project structure
2. Generate Dockerfile

A Dockerfile is dynamically created for each build.
Implementation: deploy-service/src/utils/buildProject.ts:10-19
const dockerFilePath = path.join(projectPath, 'Dockerfile')
fs.writeFileSync(dockerFilePath, `
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
`)
  • Base Image: node:20-alpine - Lightweight Node.js 20 runtime (~170MB)
  • Stage Name: builder - Multi-stage build preparation
  • Working Directory: /app - Standard application directory
  • Dependency Installation: Copies package files first for layer caching
  • Source Copy: Copies entire project after dependencies
  • Build Command: Runs npm run build from package.json
3. Execute Docker Build

The system runs a series of Docker commands to build the image and extract its output.
Implementation: deploy-service/src/utils/buildProject.ts:21-55
const buildCommand = `
cd ${path.join(projectPath)} && 
docker build -t frontend-build-${lowerCaseId} . && 
docker create --name build-container-${lowerCaseId} frontend-build-${lowerCaseId} && 
(docker cp build-container-${lowerCaseId}:/app/build ${projectPath}/build || 
docker cp build-container-${lowerCaseId}:/app/dist ${projectPath}/build) && 
docker rm build-container-${lowerCaseId} && 
docker rmi frontend-build-${lowerCaseId} && 
rm -rf Dockerfile
`.replace(/\n/g, ' ');

exec(buildCommand, (error, stdout, stderr) => {
  // Handle build completion
});
4. Extract Build Output

The built files are copied from the container to the host filesystem. The system tries two common output directories:
  1. /app/build (Create React App, Next.js)
  2. /app/dist (Vite, Vue, general bundlers)
The || operator ensures fallback: if /app/build doesn’t exist, it tries /app/dist instead.
5. Upload Built Assets

Final build output is uploaded to S3 for hosting.
Implementation: deploy-service/src/utils/buildProject.ts:73-81
export function copyFinalDist(id: string) {
  const projectPath = path.join(process.cwd(), 'dist', `output/${id}/build`);
  const allFiles = getAllFiles(projectPath);
  
  allFiles.forEach(fullFilePath => {
    uploadFiles(
      `dist/${id}/` + fullFilePath.slice(projectPath.length + 1),
      fullFilePath
    );
  })
}
S3 Structure:
dist/aB3xK9mP2q/index.html
dist/aB3xK9mP2q/assets/main.js
dist/aB3xK9mP2q/assets/style.css
6. Cleanup

Docker resources are automatically cleaned up. The build command includes:
  • docker rm build-container-${lowerCaseId} - Remove container
  • docker rmi frontend-build-${lowerCaseId} - Remove image
  • rm -rf Dockerfile - Remove generated Dockerfile

Docker Command Breakdown

Let’s examine the central command, the image build, in detail:
docker build -t frontend-build-${lowerCaseId} .
Purpose: Builds the Docker image from the generated Dockerfile
  • -t frontend-build-${lowerCaseId}: Tags image with deployment ID
  • .: Uses current directory as build context
  • Example tag: frontend-build-ab3xk9mp2q
Output: Docker image containing built application in /app/build or /app/dist

Build Output Handling

The system intelligently handles different build tools and their output directories:

/app/build

Used by:
  • Create React App
  • Some custom webpack setups
Note: Next.js (next export) writes to out/ and Gatsby to public/ by default, so those projects need a build script configured to emit build/ or dist/.

/app/dist

Used by:
  • Vite
  • Vue CLI
  • Parcel
  • Most bundlers
// Fallback mechanism in build command
(docker cp build-container-${lowerCaseId}:/app/build ${projectPath}/build || 
 docker cp build-container-${lowerCaseId}:/app/dist ${projectPath}/build)

Build Monitoring

The build process includes real-time output streaming.
Implementation: deploy-service/src/utils/buildProject.ts:48-54
child.stdout?.on('data', function (data) {
  console.log('stdout: ' + data);
});

child.stderr?.on('data', function (data) {
  console.log('stderr: ' + data);
});
Captured Output:
  • Docker build steps and layer caching
  • npm install progress and warnings
  • Build command output and errors
  • File sizes and optimization stats

Error Handling

1. Docker Build Failures
if (error) {
  console.error(`Execution error: ${error}`);
  reject(error);
  return;
}
Common causes:
  • Missing package.json
  • Invalid dependencies
  • Build script errors
  • Out of memory
2. Missing Build Output
  • If neither /app/build nor /app/dist exists, docker cp fails
  • Usually indicates build script didn’t run or failed silently
3. Docker Daemon Issues
  • Docker not running
  • Insufficient permissions
  • Disk space exhausted
4. S3 Upload Failures
catch (error) {
  console.error('Upload error:', error)
}
Occurs during final asset upload to S3.

Performance Optimizations

1. Layer Caching

Dependencies are copied before source code:
COPY package*.json ./
RUN npm install
COPY . .
This allows Docker to cache the dependency layer when only source code changes.
2. Parallel File Operations

Both S3 downloads and uploads use Promise.all() for parallelization:
await Promise.all(allPromise?.filter(x => x !== undefined));
3. Alpine Linux

Using node:20-alpine reduces image size from ~900MB to ~170MB, speeding up builds.
4. Immediate Cleanup

Resources are cleaned up immediately after use, preventing disk space issues.

Build Environment Variables

AWS Configuration:
const s3 = new S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
})
Required Environment Variables:
  • AWS_ACCESS_KEY_ID: AWS access credentials
  • AWS_SECRET_ACCESS_KEY: AWS secret credentials
  • AWS_REGION: S3 bucket region

Key Implementation Files

File                                           Lines    Purpose
deploy-service/src/utils/buildProject.ts       1-81     Core build logic, Docker orchestration
deploy-service/src/utils/donwloadS3Folde.ts    13-64    S3 source download
deploy-service/src/server.ts                   11-27    Build queue consumer

Next Steps

  • Deployment Process: understand the full deployment pipeline
  • Queue Management: learn how builds are queued and processed
