Overview
The backend is a Node.js/Express server that:

- Handles OpenAI streaming responses
- Serves the widget JavaScript bundle
- Provides REST API endpoints for chat
- Manages authentication via API keys
- Stores conversations in Convex
Prerequisites
- Convex deployed, with its production CONVEX_URL
- OpenAI API key
- Platform account (Render/Railway/Fly.io) or Node.js host
Build Commands
Build Command

npm install --include=dev && npm run build:backend

This command:

- Installs all dependencies (including dev dependencies needed for the build)
- Builds the widget bundle
- Compiles the TypeScript backend code to backend/dist/

Start Command

npm run start:backend

This runs node dist/server.js in the backend workspace.
Environment Variables
Required

- CONVEX_URL
- OPENAI_API_KEY
- WIDGET_API_KEY
- CORS_ORIGIN

Optional

- OPENAI_MODEL
- ADMIN_API_KEY
- PORT
- MAX_HISTORY_MESSAGES
- RATE_LIMIT_WINDOW_MS
- RATE_LIMIT_MAX_REQUESTS
- WIDGET_BUNDLE_PATH
Environment Variable Details
CONVEX_URL
Your production Convex deployment URL from npx convex deploy.
OPENAI_API_KEY
Your OpenAI API key. Keep this secret and server-side only.

OPENAI_MODEL
The OpenAI model to use. Recommended: gpt-4.1-mini for fast responses.
WIDGET_API_KEY
Strong random secret used to authenticate widget and headless API requests. Generate with:

ADMIN_API_KEY
Optional. Required only if you want to use /v1/admin/* endpoints to fetch conversations via API. Generate with:
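The generation commands above were elided in this copy of the guide; a common way to produce a strong random secret for either key is (a sketch, not necessarily the project's original snippet):

```shell
# Generate a 64-character hex secret (32 random bytes)
openssl rand -hex 32
```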
CORS_ORIGIN
Comma-separated list of allowed origins. Never use * in production. Examples:
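The example values were elided in this copy; a hedged illustration of the comma-separated format, using placeholder domains:

```
CORS_ORIGIN=https://example.com,https://www.example.com
```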
PORT
Port the server listens on. Default: 4000.
MAX_HISTORY_MESSAGES
Maximum number of conversation messages to include in the OpenAI context window. Default: 30.
This controls how much conversation history is sent to the AI model. Higher values provide more context but increase API costs and latency.
RATE_LIMIT_WINDOW_MS
Time window for rate limiting in milliseconds. Default: 60000 (1 minute).
RATE_LIMIT_MAX_REQUESTS
Maximum number of requests allowed per IP address within the rate limit window. Default: 30.
WIDGET_BUNDLE_PATH
Path to the compiled widget bundle. Default: ../widget/dist/chat-widget.js.
Usually auto-detected when running from the project root.
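Pulling the variables above together, a sample .env might look like the following. All values are placeholders (the Convex URL format and secrets shown are illustrative, not real):

```
# Sample .env -- placeholder values, adjust to your deployment
CONVEX_URL=https://your-deployment.convex.cloud
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4.1-mini
WIDGET_API_KEY=change-me
ADMIN_API_KEY=change-me
CORS_ORIGIN=https://example.com
PORT=4000
MAX_HISTORY_MESSAGES=30
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=30
```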
Platform-Specific Instructions
Render
- Create new Web Service
- Connect your repository
- Settings:
- Root Directory: Leave empty (use repo root)
- Build Command:
npm install --include=dev && npm run build:backend
- Start Command:
npm run start:backend
- Add environment variables listed above
- Deploy
Railway
- Create new project from GitHub repo
- Settings:
- Build Command:
npm install --include=dev && npm run build:backend
- Start Command:
npm run start:backend
- Add environment variables in Variables tab
- Deploy
Fly.io
- Install flyctl CLI
- Run fly launch in repo root
- Configure fly.toml:
- Set secrets:
- Deploy: fly deploy
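The "Set secrets" step above was elided in this copy; with flyctl, secrets are typically set like this (variable names from this guide, values are placeholders):

```shell
fly secrets set \
  CONVEX_URL="https://your-deployment.convex.cloud" \
  OPENAI_API_KEY="sk-..." \
  WIDGET_API_KEY="$(openssl rand -hex 32)" \
  CORS_ORIGIN="https://example.com"
```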
Generic Node.js Host
- SSH into your server
- Clone repository
- Install dependencies:
npm install --include=dev
- Build: npm run build:backend
- Create .env file with environment variables
- Run with process manager (PM2 recommended):
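The PM2 snippet was elided in this copy; a hedged sketch, assuming the build layout described above (the process name is illustrative):

```shell
cd backend
pm2 start dist/server.js --name chat-backend
pm2 save     # persist the process list across restarts
pm2 startup  # generate a boot-time startup script
```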
API Endpoints
After deployment, your backend exposes:

- GET /health - Health check
- GET /widget/chat-widget.js - Widget bundle
- POST /chat - Legacy streaming endpoint
- POST /v1/chat - Headless JSON response
- POST /v1/chat/stream - Headless NDJSON stream
- GET /v1/openapi.json - OpenAPI spec
- GET /v1/admin/conversations - Admin endpoint (requires ADMIN_API_KEY)
- GET /v1/admin/conversations/:id - Admin endpoint (requires ADMIN_API_KEY)
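For example, the health endpoint can be spot-checked with curl (the hostname is a placeholder for your deployment):

```shell
curl -s https://your-backend.example.com/health
```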
Verify Deployment
Test your backend by requesting the endpoints listed above.

Security Features
The backend includes:

- Rate limiting (configurable via environment variables)
- Timing-safe API key comparison
- Security headers (X-Frame-Options, X-Content-Type-Options, etc.)
- HSTS in production
- CORS origin validation
- Input validation using Zod schemas
Next Steps
- Deploy the dashboard (optional)
- Embed the widget on your website