Overview

This guide covers deploying Resonance to production environments. The application is designed to run on any Node.js 20+ hosting platform.
Before starting, complete the prerequisites and have all required service accounts ready.
The TTS inference engine must be deployed to Modal before deploying the Next.js application.

1. Install Modal CLI

pip install modal
modal setup  # Authenticate with your Modal account

2. Configure R2 Credentials

Update chatterbox_tts.py with your R2 bucket information:
chatterbox_tts.py
R2_BUCKET_NAME = "your-bucket-name"  # Replace with your R2 bucket
R2_ACCOUNT_ID = "your-account-id"    # Replace with your R2 account ID
These values must match your R2 setup exactly. The Modal container mounts this bucket read-only to access voice reference audio.
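For orientation, the mount is wired up along these lines with Modal's CloudBucketMount API (a sketch, not the full script; the app name and the example function are illustrative, and the cloudflare-r2 secret is created in the next step):

```python
import modal

app = modal.App("chatterbox-tts")

# Mount the R2 bucket read-only at /r2 via its S3-compatible endpoint.
# Replace the bucket name and account ID with your own values.
r2_mount = modal.CloudBucketMount(
    bucket_name="your-bucket-name",
    bucket_endpoint_url="https://your-account-id.r2.cloudflarestorage.com",
    secret=modal.Secret.from_name("cloudflare-r2"),  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    read_only=True,
)

@app.function(volumes={"/r2": r2_mount})
def list_voices() -> list[str]:
    """Illustrative only: list reference audio files inside the mounted bucket."""
    import os
    return os.listdir("/r2/voices")
```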

3. Create Modal Secrets

Create three secrets via the Modal CLI (or in the Modal dashboard):
1. cloudflare-r2

R2 API credentials for bucket mounting.
modal secret create cloudflare-r2 \
  AWS_ACCESS_KEY_ID="your-r2-access-key" \
  AWS_SECRET_ACCESS_KEY="your-r2-secret-key"
R2 exposes an S3-compatible API, so the keys are named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
2. chatterbox-api-key

API key to protect the TTS endpoint (use any strong random string).
modal secret create chatterbox-api-key \
  CHATTERBOX_API_KEY="your-secure-random-string"
Use a cryptographically secure random string. This protects your Modal endpoint from unauthorized access.
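One way to generate a suitable key (a sketch using Python's stdlib secrets module; any equivalent generator works):

```python
import secrets

# 32 bytes of entropy, URL-safe base64 encoded (~43 characters)
api_key = secrets.token_urlsafe(32)
print(api_key)
```

Equivalently, `openssl rand -base64 32` from the shell.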
3. hf-token

Hugging Face token for downloading Chatterbox model weights.
modal secret create hf-token \
  HF_TOKEN="your-huggingface-token"
Get your token from Hugging Face settings.

4. Deploy to Modal

modal deploy chatterbox_tts.py
Expected output:
✓ Created web function Chatterbox.serve
✓ App deployed! 🎉

View your app at https://your-workspace--chatterbox-tts-serve.modal.run
The chatterbox_tts.py script deploys:
  • GPU container: NVIDIA A10G (configured at chatterbox_tts.py:84)
  • Model: Chatterbox TTS v0.1.6 with Turbo architecture
  • Scale-down window: the GPU is released after 5 minutes of inactivity
  • Max concurrent: 10 requests can be processed simultaneously
  • R2 mount: Read-only access to your audio bucket at /r2
The endpoint accepts POST requests to /generate with voice cloning parameters.

5. Test the Endpoint

Verify the deployment works:
curl -X POST "https://your-workspace--chatterbox-tts-serve.modal.run/generate" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: your-chatterbox-api-key" \
  -d '{
    "prompt": "Hello from Chatterbox!",
    "voice_key": "voices/system/default.wav",
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 1000,
    "repetition_penalty": 1.2
  }' \
  --output test.wav
If successful, you’ll have a test.wav file with generated audio.
The first request after deployment or inactivity will take 15-30 seconds due to cold start. See Troubleshooting for details.
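Beyond checking that test.wav exists, you can confirm it parses as a valid RIFF/WAVE file, e.g. with Python's stdlib wave module (the sketch writes a short silent clip first so it runs standalone; point describe_wav at your real test.wav, whose channel count and sample rate will differ):

```python
import wave

def describe_wav(path: str) -> dict:
    """Return basic properties of a WAV file, raising if it isn't valid."""
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "duration_s": w.getnframes() / w.getframerate(),
        }

# Self-contained demo: write one second of 16-bit mono silence, then inspect it.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(24000)
    w.writeframes(b"\x00\x00" * 24000)

print(describe_wav("demo.wav"))  # → {'channels': 1, 'sample_rate': 24000, 'duration_s': 1.0}
```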

6. Generate API Types

After deploying Modal, generate the TypeScript client:
npm run sync-api
This fetches the OpenAPI spec from your Modal endpoint and generates type-safe client code in src/types/.

Railway Deployment

Railway provides one-click deployment with built-in database and automatic HTTPS.

Quick Deploy

1. Fork the Repository

Fork resonance to your GitHub account.
2. Create Railway Project

  1. Go to Railway and create a new project
  2. Select “Deploy from GitHub repo”
  3. Choose your forked repository
  4. Railway will detect Next.js automatically
3. Add PostgreSQL

  1. Click “New” → “Database” → “Add PostgreSQL”
  2. Railway automatically sets DATABASE_URL environment variable
  3. The Prisma adapter connects automatically
4. Configure Environment Variables

Add all required variables in Railway’s Variables tab:
# Automatically set by Railway
DATABASE_URL=${{Postgres.DATABASE_URL}}

# Modal TTS
CHATTERBOX_API_URL=https://your-workspace--chatterbox-tts-serve.modal.run
CHATTERBOX_API_KEY=your-api-key

# Clerk
CLERK_SECRET_KEY=sk_live_...
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_live_...
NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up

# Cloudflare R2
R2_ACCOUNT_ID=your-account-id
R2_ACCESS_KEY_ID=your-access-key
R2_SECRET_ACCESS_KEY=your-secret-key
R2_BUCKET_NAME=resonance-prod

# Polar
POLAR_ACCESS_TOKEN=your-token
POLAR_SERVER=production
POLAR_PRODUCT_ID=your-product-id

# Railway provides this automatically
APP_URL=${{RAILWAY_PUBLIC_DOMAIN}}
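Before the first deploy, it's worth confirming none of these are missing. A small stand-alone check (a sketch; the variable list mirrors the block above, with the NEXT_PUBLIC_CLERK_SIGN_IN/UP URLs and POLAR_SERVER omitted since they have obvious defaults):

```python
import os

REQUIRED_VARS = [
    "DATABASE_URL",
    "CHATTERBOX_API_URL", "CHATTERBOX_API_KEY",
    "CLERK_SECRET_KEY", "NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY",
    "R2_ACCOUNT_ID", "R2_ACCESS_KEY_ID", "R2_SECRET_ACCESS_KEY", "R2_BUCKET_NAME",
    "POLAR_ACCESS_TOKEN", "POLAR_PRODUCT_ID",
    "APP_URL",
]

def missing_vars(env: dict) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Report anything missing from the current environment.
missing = missing_vars(dict(os.environ))
print("missing:", missing or "none")
```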
5. Configure Build Settings

Railway auto-detects Next.js, but verify:
  • Build command: npm run build
  • Start command: npm run start
  • Watch paths: Leave default (triggers rebuild on push)
6. Run Database Migrations

After the first deployment, open Railway’s service shell and run:
npx prisma migrate deploy
npx prisma db seed
This creates tables and seeds the 20 system voices.
Railway provides a free tier with $5 of monthly credit. Production usage will require the Hobby plan ($5/month plus usage).

Docker Deployment

For self-hosted environments, use Docker with a custom Dockerfile.

Create Dockerfile

Create Dockerfile in your project root:
Dockerfile
FROM node:20-slim AS base

# Install OpenSSL for Prisma
RUN apt-get update -y && apt-get install -y openssl

WORKDIR /app

# Dependencies
FROM base AS deps
COPY package*.json ./
RUN npm ci

# Builder
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Environment variables required for build
ARG DATABASE_URL
ARG NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY
ARG NEXT_PUBLIC_CLERK_SIGN_IN_URL=/sign-in
ARG NEXT_PUBLIC_CLERK_SIGN_UP_URL=/sign-up

ENV SKIP_ENV_VALIDATION=true

RUN npm run build

# Runner
FROM base AS runner

ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/node_modules/.prisma ./node_modules/.prisma
COPY --from=builder /app/node_modules/@prisma ./node_modules/@prisma

USER nextjs

EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

Update next.config.ts

Enable standalone output:
next.config.ts
const nextConfig: NextConfig = {
  output: 'standalone',  // Add this line
  devIndicators: false,
  experimental: {
    proxyClientMaxBodySize: "20mb",
  },
};

Build and Run

# Build image
docker build -t resonance:latest .

# Run container
docker run -d \
  -p 3000:3000 \
  --env-file .env.production \
  --name resonance \
  resonance:latest

Docker Compose

For local development with database:
docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    env_file: .env.production
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://postgres:postgres@db:5432/resonance

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: resonance
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  postgres_data:
Run with:
docker-compose up -d

# Run migrations
docker-compose exec app npx prisma migrate deploy
docker-compose exec app npx prisma db seed

Other Platforms

Vercel

Vercel deployment requires the Pro plan for longer serverless function timeouts; TTS generation can exceed the 10s Hobby limit.
  1. Import project from GitHub
  2. Add environment variables
  3. Deploy automatically on push
Note: Database migrations must be run manually in Vercel’s shell or via CI/CD.

Generic Node.js Host

Any platform supporting Node.js 20+ works:
# Install dependencies
npm ci --include=dev

# Run migrations
npx prisma migrate deploy
npx prisma db seed

# Build
npm run build

# Start (requires all env vars)
PORT=3000 npm run start
Supported platforms: Render, Fly.io, DigitalOcean App Platform, AWS ECS, Google Cloud Run, Azure App Service

Post-Deployment Steps

1. Verify Modal Connectivity

Test the connection from your app to Modal:
  1. Sign in to your deployed application
  2. Navigate to Text-to-Speech
  3. Generate a sample with any system voice
  4. Check browser console and server logs for errors
2. Test R2 Storage

Verify audio uploads work:
  1. Clone a voice or record a new one
  2. Check R2 bucket for voices/custom/<id> object
  3. Generate TTS with the custom voice
  4. Verify playback works
3. Configure Webhooks

If using Polar billing:
  1. Set webhook URL: https://yourdomain.com/api/webhooks/polar
  2. Enable events: checkout.completed, subscription.updated
  3. Test with sandbox mode checkout
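Polar signs webhook deliveries following the Standard Webhooks convention (HMAC-SHA256 over `id.timestamp.body`, carried in the webhook-signature header). A minimal verification sketch, assuming that scheme with a placeholder secret and message:

```python
import base64
import hashlib
import hmac

def verify_signature(secret: bytes, msg_id: str, timestamp: str,
                     body: bytes, signature_header: str) -> bool:
    """Verify a Standard Webhooks style signature ("v1,<base64 hmac>")."""
    signed = f"{msg_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(hmac.new(secret, signed, hashlib.sha256).digest()).decode()
    # The header may carry several space-separated signatures; accept any v1 match.
    return any(
        hmac.compare_digest(part.split(",", 1)[1], expected)
        for part in signature_header.split() if part.startswith("v1,")
    )

# Demo with a made-up secret and payload.
secret = b"example-secret"
body = b'{"type": "checkout.completed"}'
sig = "v1," + base64.b64encode(
    hmac.new(secret, b"msg_1.1700000000." + body, hashlib.sha256).digest()
).decode()
print(verify_signature(secret, "msg_1", "1700000000", body, sig))  # → True
```

In production, use Polar's official SDK helpers for verification where available; this sketch only illustrates the mechanism.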
4. Set Up Monitoring

Configure Sentry (already integrated):
  1. Verify SENTRY_DSN in environment
  2. Check error reporting in Sentry dashboard
  3. Set up alerts for critical errors
Sentry integration is configured in next.config.ts:1 and uploads source maps automatically in CI.

Next Steps

Troubleshooting

Encountering issues? Check common problems and solutions