
Overview

Watch N Chill can be deployed to various platforms. This guide covers production considerations, CORS configuration, Redis hosting, and platform-specific instructions.

Pre-Deployment Checklist

1. Environment Variables

Ensure all required production variables are set:
  • NODE_ENV=production - Enables production optimizations
  • REDIS_URL - Connection to production Redis instance
  • ALLOWED_ORIGINS - Comma-separated list of allowed domains
  • PORT - HTTP port (usually auto-configured by platform)
Optional but recommended:
  • RATE_LIMIT_WINDOW_MS - Rate limit window (default: 60000)
  • RATE_LIMIT_MAX_REQUESTS - Max requests per window (default: 360)
  • RATE_LIMIT_SOCKET_MAX_PER_IP - Max concurrent connections (default: 10)
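Reading these variables with their documented defaults might look like the following sketch (envInt and rateLimitConfig are illustrative names, not the shipped config module):

```typescript
// Illustrative helper: read the documented rate-limit variables,
// falling back to their defaults when unset or unparsable.
function envInt(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env[name];
  if (raw === undefined || raw === '') return fallback;
  const parsed = Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

const rateLimitConfig = {
  windowMs: envInt('RATE_LIMIT_WINDOW_MS', 60_000),
  maxRequests: envInt('RATE_LIMIT_MAX_REQUESTS', 360),
  socketMaxPerIp: envInt('RATE_LIMIT_SOCKET_MAX_PER_IP', 10),
};
```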
2. Redis Instance

Set up a production Redis instance. See Redis Hosting Options.
3. CORS Configuration

Configure ALLOWED_ORIGINS with your production domains:
ALLOWED_ORIGINS=https://watchnchill.app,https://www.watchnchill.app
4. Build Test

Verify the application builds successfully:
npm run build

Redis Hosting Options

Upstash

Upstash provides serverless Redis with automatic TLS configuration:
1. Create Database

  1. Sign up at upstash.com
  2. Create a new Redis database
  3. Choose a region close to your application
2. Get Connection URL

Copy the connection URL from the dashboard:
rediss://default:your-password@your-db.upstash.io:6379
3. Configure Application

Set REDIS_URL in your deployment platform:
REDIS_URL=rediss://default:your-password@your-db.upstash.io:6379
Watch N Chill automatically detects Upstash URLs (containing upstash.io) and configures TLS settings.
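The detection logic can be sketched as follows (isUpstashUrl and buildRedisOptions are illustrative names, not the actual exports of src/backend/redis/client.ts):

```typescript
// Sketch: enable TLS options when the Redis URL points at Upstash.
// ioredis turns on TLS when a `tls` object is present in the options
// (or when the URL uses the rediss:// scheme).
function isUpstashUrl(url: string): boolean {
  return url.includes('upstash.io');
}

function buildRedisOptions(url: string): { tls?: Record<string, never> } {
  return isUpstashUrl(url) ? { tls: {} } : {};
}
```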
Upstash Features:
  • Free tier with 10,000 commands/day
  • Global edge replication
  • Built-in metrics and monitoring
  • Automatic backups

Redis Cloud

Redis Cloud (by Redis Labs) offers managed Redis:
  1. Sign up at redis.com/cloud
  2. Create a database
  3. Copy the connection URL
  4. Set REDIS_URL in your environment

Self-Hosted Redis

For self-hosted deployments, ensure:
  • Redis persistence is enabled (appendonly yes)
  • Backups are configured
  • Access is restricted by IP or VPN
  • TLS is enabled if exposed to the internet

CORS Configuration

Incorrect CORS configuration will prevent Socket.IO from connecting in production!

Setting ALLOWED_ORIGINS

The ALLOWED_ORIGINS environment variable controls Socket.IO CORS:
# Single domain
ALLOWED_ORIGINS=https://watchnchill.app

# Multiple domains (comma-separated, no spaces)
ALLOWED_ORIGINS=https://watchnchill.app,https://www.watchnchill.app

# Include staging environment
ALLOWED_ORIGINS=https://watchnchill.app,https://staging.watchnchill.app

CORS Implementation

The Socket.IO CORS configuration in src/backend/socket/index.ts:20-28:
io = new IOServer(httpServer, {
  cors: {
    origin:
      process.env.NODE_ENV === 'production'
        ? process.env.ALLOWED_ORIGINS?.split(',') || []
        : ['http://localhost:3000'],
    methods: ['GET', 'POST'],
    credentials: true,
  },
  path: '/api/socket/io',
});
Key points:
  • Development mode auto-allows http://localhost:3000
  • Production requires explicit ALLOWED_ORIGINS
  • Origins are split by comma
  • Credentials are enabled for cookie support
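Because the list is split on commas with no trimming, a value like "https://a.com, https://b.com" would produce " https://b.com", which never matches. A defensive parser (illustrative, not the shipped code) could trim entries and drop empties:

```typescript
// Illustrative helper: parse ALLOWED_ORIGINS defensively.
// The shipped code splits on ',' only, so stray spaces around commas
// would yield origins that never match; trimming avoids that foot-gun.
function parseAllowedOrigins(raw: string | undefined): string[] {
  if (!raw) return [];
  return raw
    .split(',')
    .map((origin) => origin.trim())
    .filter((origin) => origin.length > 0);
}
```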

Rate Limiting

Rate limiting is automatically enabled in production (NODE_ENV=production):

HTTP Rate Limiting

Implemented in server.ts:56-67:
  • Window: 60 seconds (configurable via RATE_LIMIT_WINDOW_MS)
  • Limit: 360 requests (configurable via RATE_LIMIT_MAX_REQUESTS)
  • Per: IP address
  • Headers: X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After
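The behavior above can be modeled as a fixed-window counter. This in-memory sketch mirrors the documented semantics (the actual implementation in server.ts may store counters differently; class and method names here are illustrative):

```typescript
// In-memory fixed-window rate limiter sketch. Defaults mirror the
// documented RATE_LIMIT_WINDOW_MS / RATE_LIMIT_MAX_REQUESTS values.
type WindowState = { count: number; resetAt: number };

class FixedWindowLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private windowMs = 60_000,
    private maxRequests = 360,
  ) {}

  // Record one request for an IP; returns the decision plus the values
  // needed for X-RateLimit-Remaining and Retry-After headers.
  hit(ip: string, now = Date.now()): { allowed: boolean; remaining: number; retryAfterMs: number } {
    const state = this.windows.get(ip);
    if (!state || now >= state.resetAt) {
      // First request in a fresh window.
      this.windows.set(ip, { count: 1, resetAt: now + this.windowMs });
      return { allowed: true, remaining: this.maxRequests - 1, retryAfterMs: 0 };
    }
    state.count += 1;
    const allowed = state.count <= this.maxRequests;
    return {
      allowed,
      remaining: Math.max(0, this.maxRequests - state.count),
      retryAfterMs: allowed ? 0 : state.resetAt - now,
    };
  }
}
```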

Socket.IO Connection Limiting

Implemented in src/backend/rate-limit.ts:86-102:
  • Limit: 10 concurrent connections per IP (configurable via RATE_LIMIT_SOCKET_MAX_PER_IP)
  • Behavior: New connections rejected if limit exceeded
  • Fallback: Allows connections if Redis is unavailable
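The limit-plus-fallback behavior can be sketched as a synchronous in-memory model (the real code in src/backend/rate-limit.ts awaits Redis; the counter interface and function names here are illustrative):

```typescript
// Per-IP concurrent-connection limit with the documented fail-open
// behavior when the counter backend (Redis) is unavailable.
interface ConnectionCounter {
  increment(ip: string): number; // returns the new count for this IP
  decrement(ip: string): void;
}

function allowSocketConnection(
  counter: ConnectionCounter,
  ip: string,
  maxPerIp = 10, // RATE_LIMIT_SOCKET_MAX_PER_IP default
): boolean {
  try {
    if (counter.increment(ip) > maxPerIp) {
      counter.decrement(ip); // undo the reservation before rejecting
      return false;
    }
    return true;
  } catch {
    // Fail open: if the backend is down, admit the connection
    // rather than locking every user out.
    return true;
  }
}
```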

Adjusting Rate Limits

# More restrictive
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=120
RATE_LIMIT_SOCKET_MAX_PER_IP=5

# More permissive
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=600
RATE_LIMIT_SOCKET_MAX_PER_IP=20

Platform-Specific Guides

Render

Render provides easy deployment with Docker support:
1. Create Web Service

  1. Go to render.com
  2. Click “New +” → “Web Service”
  3. Connect your GitHub repository
2. Configure Service

  • Name: watchnchill
  • Environment: Docker
  • Region: Choose closest to your users
  • Branch: main
  • Dockerfile Path: ./Dockerfile
3. Set Environment Variables

Add in Render dashboard:
NODE_ENV=production
REDIS_URL=rediss://default:your-password@your-db.upstash.io:6379
ALLOWED_ORIGINS=https://your-app.onrender.com
4. Deploy

Click “Create Web Service”; Render will build and deploy automatically.
Render Notes:
  • Free tier available (with cold starts)
  • Automatic SSL certificates
  • Health checks via /health endpoint
  • Auto-deploys on git push

Railway

Railway offers simple deployment with built-in Redis:
1. Create Project

  1. Go to railway.app
  2. Click “New Project” → “Deploy from GitHub repo”
2. Add Redis

  1. Click “New” → “Database” → “Add Redis”
  2. Railway automatically sets REDIS_URL
3. Configure Service

Set environment variables:
NODE_ENV=production
ALLOWED_ORIGINS=https://your-app.railway.app
4. Deploy

Railway auto-detects Dockerfile and deploys.
Railway Features:
  • $5/month free credit
  • Built-in Redis plugin
  • Automatic domains
  • Zero-config deployments

Vercel

Vercel requires external Redis (like Upstash) since it’s a serverless platform. Socket.IO connections work but may have cold start delays.
1. Install Vercel CLI

npm i -g vercel
2. Configure for Vercel

Add to vercel.json:
vercel.json
{
  "version": 2,
  "builds": [
    {
      "src": "package.json",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "/server.ts"
    }
  ]
}
3. Deploy

vercel --prod
Set environment variables in Vercel dashboard:
NODE_ENV=production
REDIS_URL=rediss://default:your-password@your-db.upstash.io:6379
ALLOWED_ORIGINS=https://your-app.vercel.app

DigitalOcean App Platform

1. Create App

  1. Go to cloud.digitalocean.com/apps
  2. Click “Create App” → “GitHub”
2. Configure

  • Resource Type: Web Service
  • Build Command: npm run build
  • Run Command: npm start
3. Add Managed Redis

  1. Click “Add Resource” → “Database”
  2. Select “Redis” → Create
  3. DigitalOcean auto-configures REDIS_URL
4. Set Environment Variables

NODE_ENV=production
ALLOWED_ORIGINS=https://your-app.ondigitalocean.app

Fly.io

Fly.io provides edge deployment with built-in Redis (Upstash integration):
1. Install flyctl

curl -L https://fly.io/install.sh | sh
2. Initialize App

fly launch
This creates fly.toml configuration.
3. Add Redis

fly redis create
This provisions Upstash Redis and sets REDIS_URL.
4. Set Secrets

fly secrets set NODE_ENV=production
fly secrets set ALLOWED_ORIGINS=https://your-app.fly.dev
5. Deploy

fly deploy

Docker on VPS (AWS EC2, Hetzner, etc.)

For self-hosted deployments:
1. Set Up Server

Install Docker and Docker Compose:
# Ubuntu/Debian
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Install Docker Compose
sudo apt-get install docker-compose-plugin
2. Clone Repository

git clone https://github.com/yourusername/watchnchill.git
cd watchnchill
3. Configure Environment

Create .env file:
.env
NODE_ENV=production
REDIS_URL=redis://redis:6379
ALLOWED_ORIGINS=https://your-domain.com
4. Deploy with Docker Compose

docker compose -f docker-compose.prod.yml up -d
5. Set Up Reverse Proxy

Configure nginx for SSL and reverse proxy:
/etc/nginx/sites-available/watchnchill
server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;
    
    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Monitoring and Health Checks

Health Endpoint

The application exposes a health check endpoint at /health (implemented in server.ts:48-54):
curl https://your-app.com/health
# Response: heart beating
Use for:
  • Container health checks
  • Load balancer health checks
  • Uptime monitoring (UptimeRobot, Pingdom)
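The route logic can be sketched as a small pure function (illustrative only; the actual handler lives in server.ts:48-54):

```typescript
// Illustrative health-check routing: return the documented response
// for /health, or null so other routes can handle the request.
function healthCheck(url: string | undefined): { status: number; body: string } | null {
  if (url !== '/health') return null;
  return { status: 200, body: 'heart beating' };
}
```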

Monitoring Redis

Check Redis connection:
# Via Docker
docker compose exec redis redis-cli ping

# Via Upstash Dashboard
# View metrics at console.upstash.com
Key metrics to monitor:
  • Connection count
  • Memory usage
  • Command rate
  • Error rate

Application Logs

The application logs important events:
# View logs in Docker
docker compose logs -f app

# Key log messages:
> Initializing Redis connection...
> Redis connected successfully
> Ready on http://0.0.0.0:3000
> Socket.IO server running on path: /api/socket/io
Configure log aggregation:
  • CloudWatch (AWS)
  • Logtail/BetterStack
  • Datadog
  • New Relic

Performance Optimization

CDN for Static Assets

Use a CDN for Next.js static assets:
next.config.ts
const nextConfig = {
  assetPrefix: process.env.CDN_URL || '',
  // ... other config
};

Redis Connection Pooling

The application uses ioredis with optimized settings (in src/backend/redis/client.ts:4-36):
maxRetriesPerRequest: 3,
enableReadyCheck: false,
lazyConnect: true,
keepAlive: 30000,
connectTimeout: 10000,
commandTimeout: 5000,
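Assembled into a constructor call, the client setup might look like this sketch (option values copied from above; src/backend/redis/client.ts:4-36 is the authoritative source):

// Sketch: ioredis client with the documented options.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379', {
  maxRetriesPerRequest: 3,
  enableReadyCheck: false,
  lazyConnect: true,       // connect on first command, not at construction
  keepAlive: 30000,        // TCP keep-alive interval in ms
  connectTimeout: 10000,
  commandTimeout: 5000,
});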

Socket.IO Optimization

For high-traffic deployments:
  1. Use the Redis adapter for horizontal scaling (pubClient and subClient are two separate Redis connections; typically subClient is created with pubClient.duplicate()):
    import { createAdapter } from '@socket.io/redis-adapter';
    io.adapter(createAdapter(pubClient, subClient));
  2. Enable compression:
    const io = new IOServer(httpServer, {
      perMessageDeflate: true,
    });

Security Best Practices

  • Never expose Redis port to the internet
  • Always use TLS for Redis in production
  • Regularly rotate Redis passwords
  • Keep dependencies updated
  • Use rate limiting (enabled by default)

Security Checklist

  • NODE_ENV=production is set
  • ALLOWED_ORIGINS is configured
  • Redis uses TLS (rediss://)
  • Redis password is strong
  • Rate limiting is enabled
  • HTTPS is enabled (via reverse proxy/platform)
  • Security headers are set (via reverse proxy)
  • Dependencies are up to date

Troubleshooting Production Issues

Socket.IO Won’t Connect

Check CORS:
# Verify ALLOWED_ORIGINS is set
echo $ALLOWED_ORIGINS

# Test from browser console:
fetch('https://your-app.com/health').then(r => r.text()).then(console.log)
Check WebSocket support:
  • Ensure reverse proxy forwards Upgrade header
  • Check firewall allows WebSocket connections
  • Verify platform supports persistent connections

Redis Connection Errors

# Test Redis connection
redis-cli -u $REDIS_URL ping

# Check application logs for:
> Redis connection error: ...
> Redis connected successfully  ← Should see this

High Memory Usage

Monitor Redis memory:
redis-cli info memory
Optimize room TTLs in src/backend/redis/room-handler.ts if needed.

Rate Limit Too Restrictive

Adjust limits:
RATE_LIMIT_WINDOW_MS=120000  # 2 minutes
RATE_LIMIT_MAX_REQUESTS=720   # Double the default

Scaling Considerations

For high-traffic deployments:
  1. Horizontal Scaling: Use Redis adapter for Socket.IO
  2. Redis Scaling: Use Redis Cluster or replicas
  3. Load Balancing: Use sticky sessions for Socket.IO
  4. CDN: Serve static assets from CDN
  5. Monitoring: Set up APM (Application Performance Monitoring)

Next Steps

Environment Variables

Complete environment variable reference

Docker Deployment

Deploy with Docker and Docker Compose

Architecture

Understand the system architecture

Rate Limiting

Configure rate limiting for production
