This guide covers deploying NAVAI to production, including environment configuration, security best practices, and scaling considerations.

Production Deployment Checklist

Before deploying to production, ensure you have:
  • Valid OpenAI API key with sufficient credits
  • Environment variables configured securely
  • CORS properly configured for production domains
  • HTTPS enabled for all endpoints
  • Error handling and logging in place
  • Backend API key security configured
  • Rate limiting implemented
  • Monitoring and alerting set up
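The environment-variable items on this checklist can be verified mechanically. A minimal preflight sketch (the variable list below is illustrative; trim it to match your setup):

```shell
#!/usr/bin/env bash
# preflight.sh -- fail fast when required production variables are missing.
REQUIRED_VARS="OPENAI_API_KEY NAVAI_CORS_ORIGIN NODE_ENV"

check_var() {
  # Indirect expansion: look up the variable whose name is in $1
  if [ -z "${!1:-}" ]; then
    echo "MISSING: $1"
    return 1
  fi
  echo "OK: $1"
}

preflight() {
  local status=0
  for var in $REQUIRED_VARS; do
    check_var "$var" || status=1
  done
  return $status
}

# Run before starting the server: preflight || exit 1
```

Running this in your CI/CD pipeline before the deploy step catches misconfiguration before it reaches users.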

Environment Variable Management

Backend Production Variables

Create a production .env file with secure settings:
.env.production
# Required: OpenAI API Configuration
OPENAI_API_KEY=sk-proj-...
OPENAI_REALTIME_MODEL=gpt-realtime
OPENAI_REALTIME_VOICE=marin
OPENAI_REALTIME_INSTRUCTIONS=You are a helpful assistant.

# Optional: Language & Voice Customization
OPENAI_REALTIME_LANGUAGE=English
OPENAI_REALTIME_VOICE_ACCENT=neutral American English
OPENAI_REALTIME_VOICE_TONE=friendly and professional

# Session Configuration
OPENAI_REALTIME_CLIENT_SECRET_TTL=600

# Functions Configuration
NAVAI_FUNCTIONS_FOLDERS=dist/ai/...

# Security: NEVER allow frontend API key in production
NAVAI_ALLOW_FRONTEND_API_KEY=false

# CORS: Specify exact production origins
NAVAI_CORS_ORIGIN=https://yourdomain.com,https://app.yourdomain.com

# Server Configuration
PORT=3000
NODE_ENV=production
Critical Security Settings:
  • Set NAVAI_ALLOW_FRONTEND_API_KEY=false in production
  • Never commit .env files to version control
  • Use specific CORS origins, not wildcards
  • Keep your OpenAI API key secret
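These settings can also be enforced at startup with a fail-fast check. A minimal sketch (`assertProdEnv` is our helper name, not part of NAVAI):

```typescript
// assertProdEnv -- refuse to boot with insecure production settings.
export function assertProdEnv(env: Record<string, string | undefined>): void {
  if (!env.OPENAI_API_KEY?.trim()) {
    throw new Error("OPENAI_API_KEY is required");
  }
  if ((env.NAVAI_ALLOW_FRONTEND_API_KEY ?? "false").toLowerCase() === "true") {
    throw new Error("NAVAI_ALLOW_FRONTEND_API_KEY must be false in production");
  }
  if (!env.NAVAI_CORS_ORIGIN || env.NAVAI_CORS_ORIGIN.includes("*")) {
    throw new Error("NAVAI_CORS_ORIGIN must list exact origins, never a wildcard");
  }
}

// Call once before registering routes:
// assertProdEnv(process.env);
```

A thrown error here crashes the process immediately, which is preferable to serving traffic with an insecure configuration.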

Frontend Production Variables

.env.production
# Backend API URL (production)
NAVAI_API_URL=https://api.yourdomain.com

# Functions and Routes
NAVAI_FUNCTIONS_FOLDERS=src/ai/functions-modules
NAVAI_ROUTES_FILE=src/ai/routes.ts

# Optional: Model override
NAVAI_REALTIME_MODEL=gpt-realtime

Security Best Practices

API Key Protection

1. Server-side API key only

Always keep your OpenAI API key on the backend:
server.ts
import { registerNavaiExpressRoutes } from "@navai/voice-backend";

const backendOptions = {
  openaiApiKey: process.env.OPENAI_API_KEY,
  // CRITICAL: Disable frontend API key in production
  allowApiKeyFromRequest: false
};

registerNavaiExpressRoutes(app, { backendOptions });
2. Verify environment configuration

From packages/voice-backend/src/index.ts:114-132:
// NAVAI automatically sets allowApiKeyFromRequest=false
// when OPENAI_API_KEY is configured
const hasBackendApiKey = Boolean(env.OPENAI_API_KEY?.trim());
const allowFrontendApiKeyFromEnv = 
  (env.NAVAI_ALLOW_FRONTEND_API_KEY ?? "false").toLowerCase() === "true";
const allowFrontendApiKey = allowFrontendApiKeyFromEnv || !hasBackendApiKey;
3. Use environment secrets management

Use your platform’s secrets manager:
Vercel:
vercel env add OPENAI_API_KEY
Heroku:
heroku config:set OPENAI_API_KEY=sk-...
AWS: Use AWS Secrets Manager or Parameter Store
Docker:
docker run -e OPENAI_API_KEY=sk-... your-image

CORS Configuration

Never use NAVAI_CORS_ORIGIN=* in production. This allows any website to use your API.
Proper CORS configuration:
server.ts
import cors from "cors";
import express from "express";

const app = express();

// Production: Specific origins only
const allowedOrigins = [
  "https://yourdomain.com",
  "https://app.yourdomain.com",
  "https://www.yourdomain.com"
];

app.use(cors({
  origin: (origin, callback) => {
    // Allow requests with no origin (mobile apps, Postman, etc.)
    if (!origin) return callback(null, true);
    
    if (allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true
}));
Or use environment variable:
.env.production
NAVAI_CORS_ORIGIN=https://yourdomain.com,https://app.yourdomain.com

HTTPS Requirements

WebRTC requires HTTPS in production for microphone access.
Requirements:
  • Backend API must use HTTPS
  • Frontend must be served over HTTPS
  • Certificate must be valid (not self-signed)
Common deployment platforms with HTTPS:
  • Vercel (automatic)
  • Netlify (automatic)
  • AWS (configure ALB/CloudFront)
  • Heroku (automatic)
  • Railway (automatic)
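When HTTPS terminates at a load balancer, the original protocol arrives in the X-Forwarded-Proto header, and the app should redirect plain-HTTP requests itself. A minimal sketch of that decision as a pure helper (assuming your proxy sets that header; the Express wiring in the comment also needs `app.set("trust proxy", 1)`):

```typescript
// httpsRedirectTarget -- returns the HTTPS URL to redirect to, or null when
// the request already arrived over HTTPS according to X-Forwarded-Proto.
export function httpsRedirectTarget(
  forwardedProto: string | undefined,
  host: string,
  url: string
): string | null {
  if (forwardedProto === "https") return null;
  return `https://${host}${url}`;
}

// Express usage (sketch):
// app.use((req, res, next) => {
//   const target = httpsRedirectTarget(
//     req.get("x-forwarded-proto"), req.headers.host ?? "", req.originalUrl);
//   target ? res.redirect(301, target) : next();
// });
```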

Rate Limiting

Protect your API from abuse:
server.ts
import rateLimit from "express-rate-limit";

// Limit client secret creation
const clientSecretLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per windowMs
  message: "Too many requests, please try again later."
});

app.post("/navai/realtime/client-secret", clientSecretLimiter, ...);

// Limit function execution
const functionLimiter = rateLimit({
  windowMs: 1 * 60 * 1000, // 1 minute
  max: 300, // 300 requests per minute
  message: "Too many function calls, please slow down."
});

app.post("/navai/functions/execute", functionLimiter, ...);

Deployment Platforms

Vercel Deployment

1. Install Vercel CLI

npm i -g vercel
2. Configure vercel.json

vercel.json
{
  "buildCommand": "npm run build",
  "devCommand": "npm run dev",
  "installCommand": "npm install",
  "env": {
    "OPENAI_API_KEY": "@openai-api-key",
    "NAVAI_CORS_ORIGIN": "@navai-cors-origin"
  }
}
3. Deploy

vercel --prod

Docker Deployment

Create a Dockerfile:
Dockerfile
FROM node:20-alpine

WORKDIR /app

# Install all dependencies (the build step below needs devDependencies)
COPY package*.json ./
RUN npm ci

# Copy application and build
COPY . .
RUN npm run build

# Remove devDependencies from the final image
RUN npm prune --omit=dev

# Expose port
EXPOSE 3000

# Start server
CMD ["npm", "start"]
Build and run:
# Build image
docker build -t navai-backend .

# Run container
docker run -d \
  -p 3000:3000 \
  -e OPENAI_API_KEY=sk-... \
  -e NAVAI_CORS_ORIGIN=https://yourdomain.com \
  navai-backend

AWS Deployment

  1. Install EB CLI: pip install awsebcli
  2. Initialize: eb init
  3. Create environment: eb create production
  4. Set environment variables: eb setenv OPENAI_API_KEY=sk-...
  5. Deploy: eb deploy
  1. Build and push Docker image to ECR
  2. Create task definition with environment variables
  3. Create service with ALB for HTTPS
  4. Configure auto-scaling
  1. Package application for Lambda
  2. Configure API Gateway
  3. Set up environment variables in Lambda
  4. Configure custom domain with HTTPS

Heroku Deployment

1. Create Heroku app

heroku create navai-backend
2. Set environment variables

heroku config:set OPENAI_API_KEY=sk-...
heroku config:set NAVAI_CORS_ORIGIN=https://yourdomain.com
heroku config:set NODE_ENV=production
3. Deploy

git push heroku main

Scaling Considerations

Client Secret TTL

Balance security and user experience:
.env
# Short TTL (10 min) - More secure, more frequent renewals
OPENAI_REALTIME_CLIENT_SECRET_TTL=600

# Medium TTL (1 hour) - Balanced
OPENAI_REALTIME_CLIENT_SECRET_TTL=3600

# Long TTL (2 hours) - Max allowed by OpenAI
OPENAI_REALTIME_CLIENT_SECRET_TTL=7200
TTL must be between 10 and 7200 seconds (2 hours).
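A small guard keeps misconfigured values inside that window. A sketch (`clampTtl` is our helper name; the 10–7200 bounds come from the limit above):

```typescript
// clampTtl -- parse a TTL from the environment and clamp it to the
// 10..7200 second window; fall back to a default when unparseable.
export function clampTtl(raw: string | undefined, fallback = 600): number {
  const parsed = Number(raw);
  if (!Number.isFinite(parsed)) return fallback;
  return Math.min(7200, Math.max(10, Math.trunc(parsed)));
}

// Usage: clampTtl(process.env.OPENAI_REALTIME_CLIENT_SECRET_TTL)
```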

Horizontal Scaling

NAVAI backends are stateless and can be scaled horizontally:
docker-compose.yml
version: '3.8'
services:
  navai-backend:
    image: navai-backend:latest
    deploy:
      replicas: 3
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - NAVAI_CORS_ORIGIN=${NAVAI_CORS_ORIGIN}
    ports:
      - "3000"
  
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
    depends_on:
      - navai-backend

Load Balancing

Distribute traffic across multiple instances. Example Nginx configuration:
nginx.conf
upstream navai_backend {
    server backend1:3000;
    server backend2:3000;
    server backend3:3000;
}

server {
    listen 443 ssl;
    server_name api.yourdomain.com;
    
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    
    location / {
        proxy_pass http://navai_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Monitoring and Logging

Implement comprehensive monitoring:
server.ts
import express from "express";
import morgan from "morgan";
import { registerNavaiExpressRoutes } from "@navai/voice-backend";

const app = express();

// Request logging
app.use(morgan('combined'));

// Health check endpoint
app.get('/health', (req, res) => {
  res.json({ 
    ok: true, 
    timestamp: new Date().toISOString(),
    uptime: process.uptime()
  });
});

// Metrics endpoint
app.get('/metrics', (req, res) => {
  res.json({
    memory: process.memoryUsage(),
    uptime: process.uptime(),
    // Add custom metrics
  });
});

registerNavaiExpressRoutes(app);

// Error tracking
app.use((err, req, res, next) => {
  console.error('Error:', {
    message: err.message,
    stack: err.stack,
    url: req.url,
    method: req.method,
    timestamp: new Date().toISOString()
  });
  
  // Send to error tracking service (Sentry, etc.)
  
  res.status(500).json({ error: 'Internal server error' });
});

Cost Optimization

Monitor OpenAI API usage to optimize costs:
  • Set appropriate client secret TTL
  • Implement session timeout on frontend
  • Use rate limiting to prevent abuse
  • Monitor token usage in OpenAI dashboard
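A frontend session timeout can be as simple as an idle timer that tears down the call when the user stops interacting. A sketch (`endSession` stands in for however your app actually closes the voice session):

```typescript
// createIdleTimer -- call endSession after idleMs of inactivity.
// Call reset() on user activity; stop() when the session ends normally.
export function createIdleTimer(endSession: () => void, idleMs = 5 * 60_000) {
  let handle: ReturnType<typeof setTimeout> | null = null;
  const reset = () => {
    if (handle) clearTimeout(handle);
    handle = setTimeout(endSession, idleMs);
  };
  const stop = () => {
    if (handle) clearTimeout(handle);
    handle = null;
  };
  reset();
  return { reset, stop };
}
```

Ending idle sessions promptly avoids paying for realtime audio minutes nobody is using.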

Mobile App Deployment

iOS Deployment

1. Update production API URL

.env.production
NAVAI_API_URL=https://api.yourdomain.com
2. Build for production

eas build --platform ios --profile production
3. Submit to App Store

eas submit --platform ios

Android Deployment

1. Update production configuration

.env.production
NAVAI_API_URL=https://api.yourdomain.com
2. Build production APK/AAB

eas build --platform android --profile production
3. Submit to Play Store

eas submit --platform android

Testing Production Configuration

Before going live:
1. Test environment variables

# Verify all required variables are set
node -e "console.log(process.env.OPENAI_API_KEY ? 'API key set' : 'Missing API key')"
2. Test CORS

curl -H "Origin: https://yourdomain.com" \
     -H "Access-Control-Request-Method: POST" \
     -X OPTIONS \
     https://api.yourdomain.com/navai/realtime/client-secret
3. Test client secret creation

curl -X POST https://api.yourdomain.com/navai/realtime/client-secret \
     -H "Content-Type: application/json" \
     -d '{}'
4. Test from production frontend

Deploy the frontend and run a full end-to-end voice session.

Post-Deployment Checklist

After deployment:
  • All endpoints respond correctly
  • HTTPS is working
  • CORS allows production domain
  • Microphone permissions work
  • Voice sessions connect successfully
  • Functions execute properly
  • Navigation works as expected
  • Monitoring is active
  • Error tracking is configured
  • Logs are being collected
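The endpoint checks on this list can be scripted as a smoke test. A sketch (`api.yourdomain.com` and the `/health` route are the placeholders used earlier in this guide; set `NAVAI_BASE_URL` to your real backend host):

```shell
#!/usr/bin/env bash
# smoke-test.sh -- assert expected HTTP status codes from the deployed backend.

check() {
  # $1 = label, $2 = expected status code, remaining args go to curl
  local label="$1" expected="$2"; shift 2
  local status
  status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$@" || true)
  if [ "$status" = "$expected" ]; then
    echo "PASS: $label ($status)"
  else
    echo "FAIL: $label (got $status, wanted $expected)"
    return 1
  fi
}

# Only run the live checks when a base URL is configured
if [ -n "${NAVAI_BASE_URL:-}" ]; then
  check "health" 200 "$NAVAI_BASE_URL/health"
  check "client secret" 200 -X POST -H "Content-Type: application/json" \
    -d '{}' "$NAVAI_BASE_URL/navai/realtime/client-secret"
fi
```

Run it after every deploy, e.g. `NAVAI_BASE_URL=https://api.yourdomain.com ./smoke-test.sh`.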

Troubleshooting Production Issues

If voice sessions fail to start, check:
  • OpenAI API key is valid
  • API key has sufficient credits
  • Backend can reach OpenAI API
  • No network/firewall issues
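A quick probe from the backend host covers the last two points; GET /v1/models is a lightweight authenticated OpenAI endpoint:

```shell
# Probe OpenAI from the backend host. 200 = key accepted, 401 = key
# rejected, 000 = the request never got through (network/firewall/DNS).
status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 \
  -H "Authorization: Bearer ${OPENAI_API_KEY:-}" \
  https://api.openai.com/v1/models || true)
echo "OpenAI API responded: HTTP $status"
```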
If requests are blocked by CORS, verify:
  • NAVAI_CORS_ORIGIN includes production domain
  • Protocol matches (HTTPS)
  • No trailing slashes in origin
If microphone access fails, ensure:
  • Frontend is served over HTTPS
  • Browser permissions are granted
  • Mobile app has RECORD_AUDIO permission

Next Steps

Debugging

Learn how to debug production issues

Voice Customization

Optimize voice settings for production
