
Overview

This guide walks you through setting up the Vercel For Frontend platform locally and deploying your first application. By the end, you'll have both services running and a live deployment served from S3 static website hosting.

Prerequisites

Before you begin, ensure you have the following installed and configured:

Bun Runtime

Install from bun.sh - Required for running both services

Docker

Install Docker Desktop or Docker Engine - Required for building projects

Redis

Install Redis locally or use a cloud instance - Required for queue management

AWS Account

Active AWS account with S3 access - Required for storage and hosting
This platform was built using Bun v1.1.26. While newer versions should work, this is the tested version.

Installation

Step 1: Clone the Repository

First, clone the platform repository to your local machine:
git clone <your-repository-url>
cd vercel-for-frontend
Step 2: Install Dependencies

Install dependencies for all services. The platform consists of two separate services that need to be installed independently:
cd upload-service
bun install
cd ../deploy-service
bun install
Both services use the same core dependencies:
  • express - HTTP server
  • aws-sdk - S3 operations
  • redis - Queue management
  • simple-git - Git operations (upload-service only)
  • ignore - Gitignore parsing (upload-service only)
Step 3: Create S3 Bucket

Create an S3 bucket named vercel-frontend in your AWS account:
aws s3 mb s3://vercel-frontend --region us-east-1
Enable static website hosting on the bucket:
aws s3 website s3://vercel-frontend --index-document index.html
The bucket name vercel-frontend is hardcoded in the source:
  • upload-service/src/utils/uploadFiles.ts:20
  • deploy-service/src/utils/donwloadS3Folde.ts:16
If you want to use a different bucket name, you’ll need to update these files.
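Static website hosting also requires the objects to be publicly readable. As a sketch (this policy is not part of the repository, and you will also need to disable Block Public Access on the bucket), a bucket policy granting anonymous read access looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::vercel-frontend/*"
    }
  ]
}
```

Apply it with aws s3api put-bucket-policy --bucket vercel-frontend --policy file://policy.json.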
Step 4: Configure AWS Credentials

Both services require AWS credentials to access S3. Create IAM credentials with S3 full access and configure them.
Required IAM Permissions:
  • s3:PutObject - Upload files
  • s3:GetObject - Download files
  • s3:ListBucket - List bucket contents
Get your AWS credentials:
  1. Go to AWS IAM Console
  2. Create a new user or use existing
  3. Attach AmazonS3FullAccess policy (or create a custom policy with the permissions above)
  4. Generate access keys
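If you'd rather not attach AmazonS3FullAccess, a minimal custom policy covering just the three permissions above might look like this (assuming the default vercel-frontend bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::vercel-frontend/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::vercel-frontend"
    }
  ]
}
```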
Step 5: Set Environment Variables

Configure environment variables for both services. The AWS credentials are read from environment variables as seen in the source code:
// Both services use these environment variables
// upload-service/src/utils/uploadFiles.ts:4-7
export const s3 = new S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region: process.env.AWS_REGION
})
For Upload Service:
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
For Deploy Service:
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
You can also create .env files in each service directory; since the services run under Bun, which loads .env files automatically, no dotenv package is needed.
Step 6: Start Redis

Ensure Redis is running on the default port (6379). The services connect to Redis without authentication by default:
redis-server
Verify Redis is running:
redis-cli ping
# Should return: PONG
The deploy service connects to Redis and listens for jobs on the build-queue:
// deploy-service/src/server.ts:5
import { createClient } from 'redis';

const client = await createClient()
    .on('error', err => console.log('Redis Client Error', err))
    .on('connect', () => console.log('Redis Client Connected'))
    .connect();

Running the Services

Both services need to run simultaneously for the platform to function:
cd upload-service
bun run src/server.ts
In a second terminal, start the deploy service:
cd deploy-service
bun run src/server.ts
Expected output from the upload service:
Server is running on port 3000
Expected output from the deploy service:
Redis Client Connected
The upload service runs on port 3000 by default (upload-service/src/server.ts:52). The deploy service doesn’t expose an HTTP port as it only processes queue jobs.

Deploy Your First Application

Now that both services are running, let’s deploy a frontend application:
Step 1: Prepare a Repository

You’ll need a GitHub repository with a frontend project. The project must have:
  • A package.json file
  • A build script defined (npm run build)
  • Build output to either /build or /dist directory
Example frameworks that work:
  • Create React App (outputs to /build)
  • Vite (outputs to /dist)
  • Next.js static export
  • Vue CLI
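For reference, a minimal package.json that satisfies these requirements could look like the sketch below (a Vite project, which builds to /dist; the version number is illustrative):

```json
{
  "name": "my-frontend",
  "private": true,
  "scripts": {
    "build": "vite build"
  },
  "devDependencies": {
    "vite": "^5.0.0"
  }
}
```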
Step 2: Submit Deployment Request

Send a POST request to the upload service with your repository URL:
curl -X POST http://localhost:3000/get/url \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://github.com/username/your-frontend-repo.git"
  }'
Expected response:
{
  "success": true,
  "id": "aB3xY9kL2m"
}
The id field is your deployment identifier. Save this for accessing your deployed application.
Step 3: Monitor the Build

Watch the terminal outputs to monitor your deployment.
Upload Service logs:
reached1
File uploaded successfully
Deploy Service logs:
Response { key: 'build-queue', element: 'aB3xY9kL2m' }
stdout: Successfully tagged frontend-build-ab3xy9kl2m:latest
stdout: Successfully copied to /app/build
Project uploaded successfully
The build process creates a Docker container, installs dependencies, runs the build, and uploads the output to S3.
Step 4: Access Your Deployment

Once the build completes, your application is available at:
http://vercel-frontend.s3-website-us-east-1.amazonaws.com/dist/{your-id}/index.html
Replace {your-id} with the deployment ID from step 2.
The exact URL format depends on your S3 bucket region and configuration. Check your S3 bucket’s static website hosting settings for the base URL.
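For scripting, the URL can be assembled from the bucket, region, and deployment ID. The deploymentUrl helper below is a hypothetical sketch (not part of the repository) assuming the dash-style website endpoint used by us-east-1; some regions use s3-website.<region> with a dot instead:

```typescript
// Hypothetical helper: build the public URL for a deployment from the
// bucket name, region, and deployment ID. Assumes the dash-style
// website endpoint format (e.g. us-east-1).
function deploymentUrl(bucket: string, region: string, id: string): string {
  return `http://${bucket}.s3-website-${region}.amazonaws.com/dist/${id}/index.html`;
}

// With the defaults from this guide:
const url = deploymentUrl('vercel-frontend', 'us-east-1', 'aB3xY9kL2m');
// → 'http://vercel-frontend.s3-website-us-east-1.amazonaws.com/dist/aB3xY9kL2m/index.html'
```

Check your bucket's static website hosting settings for the actual base URL before relying on this format.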

Understanding the Deployment Flow

Here’s what happens behind the scenes when you deploy:
Step 1: Repository Cloning

The upload service clones your repository to a local directory:
// upload-service/src/server.ts:26
await git.clone(repoUrl, clonePath)
const files = getAllFiles(clonePath)
The getAllFiles function respects your .gitignore file, ensuring node_modules and other ignored files aren’t uploaded:
// upload-service/src/utils/getAllFiles.ts:7
const gitignorePath = path.join(folderPath, '.gitignore');
let ig = ignore();
if (fs.existsSync(gitignorePath)) {
    const gitignoreContent = fs.readFileSync(gitignorePath, 'utf8');
    ig = ig.add(gitignoreContent);
}
Step 2: S3 Upload

All source files are uploaded to S3 under the output/{id} prefix:
// upload-service/src/server.ts:32
const uploadPromises = files.map(async (localFilePath) => {
    const relativePath = path.relative(clonePath, localFilePath)
    const s3Key = path.posix.join('output', randomId, relativePath)
    return uploadFiles(s3Key, localFilePath)
});
await Promise.all(uploadPromises);
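To see what this key computation produces, here is a self-contained sketch with hypothetical paths (the clone directory and file name below are made up for illustration):

```typescript
import path from 'node:path';

// Hypothetical paths for illustration: a clone directory and one file in it.
const clonePath = '/tmp/clones/aB3xY9kL2m';
const localFilePath = '/tmp/clones/aB3xY9kL2m/src/index.html';

// path.relative strips the clone-directory prefix; path.posix.join builds
// the key with forward slashes, matching S3's flat key namespace.
const relativePath = path.relative(clonePath, localFilePath);
const s3Key = path.posix.join('output', 'aB3xY9kL2m', relativePath);
// On POSIX systems: 'output/aB3xY9kL2m/src/index.html'
```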
Step 3: Build Queue

The deployment ID is pushed to Redis for the deploy service to process:
// upload-service/src/utils/buildQueue.ts:9
export const buildQueue = async (id: string) => {
    await client.LPUSH('build-queue', id)
}
Step 4: Docker Build

The deploy service creates a Dockerfile and builds your project in an isolated container:
// deploy-service/src/utils/buildProject.ts:12
fs.writeFileSync(dockerFilePath, `
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
`)
The build command handles both common output directories:
docker cp build-container-{id}:/app/build {projectPath}/build || 
docker cp build-container-{id}:/app/dist {projectPath}/build
Step 5: Production Upload

Built files are uploaded to S3 under the dist/{id} prefix:
// deploy-service/src/utils/buildProject.ts:73
export function copyFinalDist(id: string) {
    const projectPath = path.join(process.cwd(), 'dist', `output/${id}/build`);
    const allFiles = getAllFiles(projectPath);
    allFiles.forEach(fullFilePath => {
        uploadFiles(`dist/${id}/` + fullFilePath.slice(projectPath.length + 1), fullFilePath);
    })
}
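The key computation above can be illustrated standalone with hypothetical paths (the paths below are made up for this example):

```typescript
// Hypothetical paths for illustration only.
const projectPath = '/app/dist/output/aB3xY9kL2m/build';
const fullFilePath = '/app/dist/output/aB3xY9kL2m/build/assets/index.js';

// slice(projectPath.length + 1) removes the prefix plus its trailing '/',
// leaving the path relative to the build folder.
const s3Key = 'dist/aB3xY9kL2m/' + fullFilePath.slice(projectPath.length + 1);
// → 'dist/aB3xY9kL2m/assets/index.js'
```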

Troubleshooting

Upload Service Issues

Error: “Failed to clone repository”
  • Verify the repository URL is correct and accessible
  • Ensure the repository is public or you have access
  • Check your network connection
Error: “Upload error” in logs
  • Verify AWS credentials are set correctly
  • Check S3 bucket exists and is accessible
  • Verify IAM permissions include s3:PutObject

Deploy Service Issues

Error: “Redis Client Error”
  • Ensure Redis is running on port 6379
  • Check Redis is accepting connections: redis-cli ping
Error: Docker build failures
  • Verify Docker daemon is running
  • Ensure your project has a valid package.json
  • Check that npm run build works in your repository
  • Verify the build script outputs to /build or /dist
Error: “Error downloading file from S3”
  • Check AWS credentials are set for deploy service
  • Verify S3 bucket permissions
  • Ensure the upload service completed successfully

Production Considerations

This quickstart uses local development settings. For production deployments, consider:
  • Security: Don’t expose Redis without authentication
  • Scaling: Run multiple deploy service instances for parallel builds
  • Monitoring: Add logging and monitoring for build failures
  • Cleanup: Implement a cleanup strategy for old deployments in S3
  • Error Handling: Add retry logic and better error messages
  • Build Timeout: Set maximum build times to prevent hanging containers
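For the build-timeout point, one simple approach is to race the build promise against a timer. The withTimeout helper below is a sketch, not part of the current codebase; the function name and label parameter are inventions for illustration:

```typescript
// Sketch: reject a promise that takes longer than `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards
  // so a pending timeout does not keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A real build runner would wrap the docker build/cp commands with this and, on timeout, also stop and remove the container.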

Next Steps

API Reference

Explore all API endpoints and parameters

Build System

Learn about the Docker build system

Configuration

Customize bucket names, ports, and more

Deployment Guide

Deploy your frontend applications
