Drift is a monorepo with two main services: a Next.js frontend and an Express API. This guide covers production deployment strategies.

Deployment Architecture

┌─────────────────┐
│   Next.js Web   │  (Vercel, Netlify, etc.)
│   Port 3000     │
└────────┬────────┘
         │ HTTP

┌─────────────────┐
│   Express API   │  (Railway, Render, Fly.io, etc.)
│   Port 3001     │
└────────┬────────┘
         │ HTTPS

┌─────────────────┐
│  External APIs  │
│  - Nessie       │
│  - Gemini       │
│  - ElevenLabs   │
│  - Plaid        │
└─────────────────┘
Option 1: Vercel + Railway (Recommended)

Deploy the frontend to Vercel and the backend to Railway for a simple, well-supported production setup.

Deploy API to Railway

1

Create Railway project

  1. Sign up at railway.app
  2. Click “New Project” → “Deploy from GitHub repo”
  3. Select your Drift repository
2

Configure build settings

In Railway project settings:
  • Root Directory: apps/api
  • Build Command: npm install && npm run build
  • Start Command: npm start
  • Watch Paths: apps/api/**
3

Set environment variables

Add to Railway environment variables:
NESSIE_API_KEY=your_key
GEMINI_API_KEY=your_key
ELEVENLABS_API_KEY=your_key
PLAID_CLIENT_ID=your_id
PLAID_SECRET=your_secret
PLAID_ENV=production
PORT=3001
Use production API keys, not development/sandbox keys.
4

Deploy

Railway auto-deploys on push to your main branch. Note the generated URL:
https://drift-api-production.up.railway.app

Deploy Web to Vercel

1

Connect repository

  1. Sign up at vercel.com
  2. Click “Add New” → “Project”
  3. Import your Drift GitHub repository
2

Configure build settings

  • Framework Preset: Next.js
  • Root Directory: apps/web
  • Build Command: cd ../.. && npm run build -- --filter=web
  • Output Directory: apps/web/.next
  • Install Command: npm install
3

Set environment variables

In Vercel project settings → Environment Variables:
NEXT_PUBLIC_API_URL=https://drift-api-production.up.railway.app
Only variables prefixed with NEXT_PUBLIC_ are accessible in the browser.
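As an illustration (the helper file, fallback URL, and endpoint below are hypothetical), client code typically reads the variable through a small wrapper:

```typescript
// apps/web/lib/api.ts (hypothetical helper)
// NEXT_PUBLIC_ variables are inlined into the client bundle at build time;
// a server-only secret read here would be undefined in the browser.
export const API_URL =
  process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:3001";

// Join a base URL and a path without producing a double slash.
export function joinUrl(base: string, path: string): string {
  return `${base.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`;
}
```

Usage: `fetch(joinUrl(API_URL, "/api/simulate"))`, where `/api/simulate` stands in for whichever endpoint you call.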
4

Deploy

Click “Deploy”. Vercel builds and deploys automatically. Your production URL:
https://drift-production.vercel.app

Option 2: Docker Deployment

Deploy both services using Docker containers.

API Dockerfile

Create apps/api/Dockerfile:
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./
COPY apps/api/package*.json ./apps/api/

# Install dependencies
RUN npm install

# Copy source
COPY apps/api ./apps/api
COPY turbo.json ./

# Build
RUN npm run build --workspace=apps/api

# Production image
FROM node:18-alpine

WORKDIR /app

# Copy built assets
COPY --from=builder /app/apps/api/dist ./dist
COPY --from=builder /app/apps/api/package*.json ./

# Install production dependencies only
RUN npm install --production

EXPOSE 3001

CMD ["node", "dist/index.js"]

Web Dockerfile

Create apps/web/Dockerfile:
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./
COPY apps/web/package*.json ./apps/web/

# Install dependencies
RUN npm install

# Copy source
COPY apps/web ./apps/web
COPY turbo.json ./

# Build
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
RUN npm run build --workspace=apps/web

# Production image
FROM node:18-alpine

WORKDIR /app

# Copy built assets
COPY --from=builder /app/apps/web/.next ./.next
COPY --from=builder /app/apps/web/public ./public
COPY --from=builder /app/apps/web/package*.json ./

# Install production dependencies
RUN npm install --production

EXPOSE 3000

CMD ["npm", "start"]

Docker Compose

Create docker-compose.yml at the repository root:
version: '3.8'

services:
  api:
    build:
      context: .
      dockerfile: apps/api/Dockerfile
    ports:
      - "3001:3001"
    environment:
      - NESSIE_API_KEY=${NESSIE_API_KEY}
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - ELEVENLABS_API_KEY=${ELEVENLABS_API_KEY}
      - PLAID_CLIENT_ID=${PLAID_CLIENT_ID}
      - PLAID_SECRET=${PLAID_SECRET}
      - PLAID_ENV=production
      - PORT=3001
    restart: unless-stopped

  web:
    build:
      context: .
      dockerfile: apps/web/Dockerfile
      args:
        # NEXT_PUBLIC_ values are inlined at build time and read by the
        # browser, which cannot resolve the internal Docker hostname `api`.
        # Use a URL reachable from the user's machine.
        - NEXT_PUBLIC_API_URL=http://localhost:3001
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_URL=http://localhost:3001
    depends_on:
      - api
    restart: unless-stopped
Deploy with:
docker-compose up -d

Option 3: All-in-One Platforms

Render

1

Create Web Service

  • Build Command: cd apps/web && npm install && npm run build
  • Start Command: cd apps/web && npm start
  • Environment: NEXT_PUBLIC_API_URL=<your-api-url>
2

Create API Service

  • Build Command: cd apps/api && npm install && npm run build
  • Start Command: cd apps/api && npm start
  • Environment: Add all API keys

Fly.io

1

Install Fly CLI

curl -L https://fly.io/install.sh | sh
2

Deploy API

cd apps/api
fly launch
fly secrets set NESSIE_API_KEY=your_key GEMINI_API_KEY=your_key
fly deploy
3

Deploy Web

cd apps/web
fly launch
fly deploy --build-arg NEXT_PUBLIC_API_URL=https://drift-api.fly.dev
Because NEXT_PUBLIC_ variables are inlined at build time, pass the API URL as a build argument rather than a runtime secret.

Production Considerations

Performance

Ensure Python 3.11+ is installed in your production environment:
# In Dockerfile
RUN apk add --no-cache python3 py3-pip
RUN pip3 install numpy pandas pydantic openai python-dotenv
The Python engine runs 100k simulations in ~500ms vs 3-5 seconds in TypeScript.
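One hedged way to wire this up (the script path and CLI flags below are assumptions, not the repo's actual interface) is to spawn the Python engine from the Express process and collect its stdout:

```typescript
import { spawn } from "node:child_process";

// Hypothetical argv builder for the Python simulation engine.
export function simulationArgs(iterations: number): string[] {
  return ["engine/simulate.py", "--iterations", String(iterations)];
}

// Run the engine and resolve with whatever it printed (e.g. JSON results).
export function runPythonSimulation(iterations: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn("python3", simulationArgs(iterations));
    let out = "";
    proc.stdout.on("data", (chunk) => (out += chunk));
    proc.on("error", reject);
    proc.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`python exited with ${code}`))
    );
  });
}
```

Keeping the argv builder separate makes the process boundary easy to unit-test without actually launching Python.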
Implement Redis caching for repeated simulations:
// Example caching layer (ioredis; `hash` is any stable object hash, e.g. object-hash)
import Redis from 'ioredis'
import hash from 'object-hash'

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379')

// Inside the simulation route handler:
const cacheKey = `sim:${hash(profile)}:${hash(goal)}`
const cached = await redis.get(cacheKey)
if (cached) return JSON.parse(cached)

const results = await runSimulation(profile, goal)
await redis.set(cacheKey, JSON.stringify(results), 'EX', 300) // 5 min TTL
return results
Vercel automatically serves Next.js static assets via CDN. For other platforms:
  • Configure CloudFront or Cloudflare
  • Set appropriate cache headers
  • Enable compression (gzip/brotli)
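For Express-hosted deployments, a minimal sketch of the cache-header rule (the `/static/` prefix is an assumption about where your hashed assets live):

```typescript
// Choose a Cache-Control value per request path.
export function cacheControlFor(path: string): string {
  if (path.startsWith("/static/")) {
    // Hashed asset filenames never change, so cache them aggressively.
    return "public, max-age=31536000, immutable";
  }
  return "no-store"; // API and HTML responses should not be cached
}

// Express wiring (sketch):
// app.use((req, res, next) => {
//   res.setHeader("Cache-Control", cacheControlFor(req.path));
//   next();
// });
```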

Security

Never expose API keys in client-side code! Only environment variables prefixed with NEXT_PUBLIC_ are sent to the browser. Keep sensitive keys server-side only.
In production, restrict CORS to your frontend domain:
// apps/api/src/index.ts
app.use(cors({
  origin: process.env.ALLOWED_ORIGINS?.split(',') || 'http://localhost:3000',
  credentials: true,
}))
Set ALLOWED_ORIGINS=https://drift-production.vercel.app in Railway.
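Since ALLOWED_ORIGINS is a hand-edited comma-separated list, a small defensive parser (hypothetical helper name) avoids surprises from stray whitespace or trailing commas before the value reaches cors():

```typescript
// Parse ALLOWED_ORIGINS into a clean origin list, with a local-dev fallback.
export function parseAllowedOrigins(raw: string | undefined): string[] {
  if (!raw) return ["http://localhost:3000"];
  return raw
    .split(",")
    .map((origin) => origin.trim())
    .filter(Boolean); // drop empty entries from trailing commas
}

// Usage: cors({ origin: parseAllowedOrigins(process.env.ALLOWED_ORIGINS), credentials: true })
```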
Install and configure rate limiting:
npm install express-rate-limit --workspace=apps/api
import rateLimit from 'express-rate-limit'

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
})

app.use('/api/', limiter)
Add health checks that verify API connectivity:
// Verify Nessie API on startup
const response = await nessieService.getCustomers()
if (!response) {
  console.error('FATAL: Nessie API key invalid')
  process.exit(1)
}
Maintain separate API keys for each environment:
  • Development: Sandbox/test keys with generous rate limits
  • Staging: Production keys with test data
  • Production: Production keys with real data and strict limits
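One way to keep the stages straight is a single mapping from deployment stage to Plaid environment; the function name and stage strings below are assumptions:

```typescript
type PlaidEnv = "sandbox" | "development" | "production";

// Map a deployment stage (e.g. NODE_ENV or a custom STAGE var) to a Plaid env.
export function plaidEnvFor(stage: string | undefined): PlaidEnv {
  switch (stage) {
    case "production":
      return "production";
    case "staging":
      return "development"; // production-like keys against test data
    default:
      return "sandbox"; // local development
  }
}
```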

Monitoring

1

Add logging

Use a structured logging service like LogDNA, Datadog, or Papertrail:
import winston from 'winston'

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
  ],
})
2

Monitor API usage

Track API key usage in each provider's dashboard (Nessie, Gemini, ElevenLabs, Plaid) so you notice quota exhaustion before users do.
3

Set up uptime monitoring

Use services like UptimeRobot or Better Uptime to monitor:
  • API health endpoint: https://your-api.com/health
  • Web app homepage: https://your-web.com
Set alerts for downtime > 1 minute.
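If the API does not yet expose /health, a minimal sketch (the payload shape and Express wiring are assumptions, not the repo's actual endpoint):

```typescript
// Build a health payload from uptime and per-dependency reachability flags.
export function healthPayload(
  uptimeSeconds: number,
  deps: Record<string, boolean>
): { status: "ok" | "degraded"; uptimeSeconds: number; deps: Record<string, boolean> } {
  const ok = Object.values(deps).every(Boolean);
  return { status: ok ? "ok" : "degraded", uptimeSeconds, deps };
}

// Express wiring (sketch):
// app.get("/health", (_req, res) =>
//   res.json(healthPayload(process.uptime(), { nessie: true })))
```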
4

Configure error tracking

Integrate Sentry for error monitoring:
npm install @sentry/node @sentry/nextjs
// apps/api/src/index.ts
import * as Sentry from '@sentry/node'

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
})

Scaling

For high-traffic scenarios, consider:
  • Horizontal scaling: Run multiple API instances behind a load balancer
  • Database: Move from in-memory storage to PostgreSQL for Plaid tokens
  • Queue: Use Bull/BullMQ for asynchronous simulation jobs
  • Compute: Offload heavy simulations to AWS Lambda or Cloud Functions
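The queue idea can be sketched as a deterministic job id plus an enqueue call; the payload fields and BullMQ wiring below are assumptions:

```typescript
// Hypothetical job payload for an asynchronous simulation queue.
export interface SimulationJob {
  profileId: string;
  goalId: string;
  iterations: number;
}

// A deterministic id lets the queue deduplicate identical requests.
export function simulationJobId(job: SimulationJob): string {
  return `sim:${job.profileId}:${job.goalId}:${job.iterations}`;
}

// With BullMQ (assumed):
// await queue.add("simulate", job, { jobId: simulationJobId(job) })
```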

Environment Variables Checklist

Before deploying, ensure all required variables are set:
 ✅ NESSIE_API_KEY (required)
 ✅ GEMINI_API_KEY (required)
 ⚠️ ELEVENLABS_API_KEY (optional)
 ⚠️ PLAID_CLIENT_ID (optional)
 ⚠️ PLAID_SECRET (optional)
 ⚠️ PLAID_ENV (optional)
 ⚠️ OPENAI_API_KEY (optional)
 ✅ PORT (default: 3001)
 ⚠️ ALLOWED_ORIGINS (for CORS)
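A hedged fail-fast sketch for the required entries above (only the two required keys are listed; extend as needed):

```typescript
// Required variables, mirroring the checklist.
const REQUIRED_VARS = ["NESSIE_API_KEY", "GEMINI_API_KEY"];

// Return the names of required variables missing from the given environment.
export function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

const missing = missingEnv(process.env as Record<string, string | undefined>);
if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(", ")}`);
  // process.exit(1) // enable in the production entrypoint
}
```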

Deployment Checklist

1

Pre-deployment

  • All API keys configured
  • Production build successful locally
  • Linting passes: npm run lint
  • API health check responds
  • CORS configured for production domain
2

Deploy

  • API deployed and accessible
  • Web app deployed and accessible
  • Environment variables set in hosting platform
  • API URL updated in web app config
3

Post-deployment

  • Test full user flow in production
  • Verify API endpoints respond correctly
  • Check browser console for errors
  • Confirm voice features work (if enabled)
  • Monitor error logs for 24 hours

Troubleshooting

CORS errors in the browser: ensure ALLOWED_ORIGINS includes your production frontend domain:
ALLOWED_ORIGINS=https://drift.vercel.app,https://www.drift.app
API requests failing with auth errors: verify environment variables are set in your hosting platform:
# Railway
railway variables

# Vercel
vercel env ls
Build failures: check build logs for missing dependencies or TypeScript errors:
# Test build locally
npm run build
Slow simulations: ensure Python is installed in the production environment and NumPy dependencies are met.

Next Steps

Environment Variables

Review all configuration options

API Reference

Explore available endpoints

Performance Tuning

Optimize Monte Carlo simulation performance

API Keys

Configure external service credentials
