Deploy the Predictive Maintenance System to production cloud platforms with automatic scaling, monitoring, and high availability.

Architecture Overview

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Frontend      │     │   Backend       │     │   Database      │
│   (Vercel)      │────▶│   (Render)      │────▶│ (InfluxDB Cloud)│
│   React + Vite  │     │   FastAPI       │     │   Time-Series   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
Component | Technology       | Hosting        | Status
Frontend  | React 18 + Vite  | Vercel         | ✅ Live
Backend   | FastAPI + Docker | Render         | ✅ Live
Database  | InfluxDB 2.x     | InfluxDB Cloud | ✅ Live

Live Deployment

InfluxDB Cloud Setup

Set up the time-series database for sensor data storage.
Step 1: Create an InfluxDB Cloud account

  1. Go to InfluxDB Cloud
  2. Sign up for a free account
  3. Select AWS us-east-1 region (recommended)
Step 2: Create a bucket

  1. Navigate to Load Data → Buckets
  2. Click Create Bucket
  3. Name: sensor_data
  4. Retention: 30 days (adjust as needed)
Step 3: Generate an API token

  1. Navigate to Load Data → API Tokens
  2. Click Generate API Token → Read/Write Token
  3. Grant permissions:
    • Read access to sensor_data bucket
    • Write access to sensor_data bucket
  4. Save the token securely (you won’t see it again)
Step 4: Note your credentials

Copy these values for later use:
  • URL: https://us-east-1-1.aws.cloud2.influxdata.com
  • Organization ID: Found under User → About
  • Bucket: sensor_data
  • Token: The token you generated
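With these credentials, sensor readings can be written to the bucket using InfluxDB line protocol. A minimal sketch of building one record; the measurement and field names here are illustrative, not taken from the project:

```python
from datetime import datetime, timezone

def to_line_protocol(machine_id: str, temperature: float, vibration: float) -> str:
    """Build an InfluxDB line-protocol record for one sensor reading.

    Format: measurement,tag=value field=value,field=value timestamp
    Measurement and field names are examples; match them to what your
    backend actually writes.
    """
    # Nanosecond-precision timestamp, InfluxDB's default expectation
    ts_ns = int(datetime.now(timezone.utc).timestamp() * 1e9)
    return (
        f"sensor_reading,machine_id={machine_id} "
        f"temperature={temperature},vibration={vibration} {ts_ns}"
    )
```

A record like this can be POSTed to the /api/v2/write endpoint (with the token in an Authorization header) or passed to the official influxdb-client library.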

Backend Deployment (Render)

Deploy the FastAPI backend as a Docker container on Render.
Step 1: Create a Render account

  1. Go to Render
  2. Sign up or log in with GitHub
Step 2: Create a new Web Service

  1. Click New → Web Service
  2. Connect your GitHub repository
  3. Select the repository: PREDICTIVE-MAINTENANCE
Step 3: Configure service settings

Set the following configuration:
Field           | Value
Name            | predictive-maintenance-backend
Region          | Oregon (US West) or closest to you
Branch          | main
Root Directory  | backend
Environment     | Docker
Dockerfile Path | backend/Dockerfile
Instance Type   | Free (or paid for production)
Step 4: Add environment variables

In the Environment section, add:
ENVIRONMENT=production
PORT=8000
INFLUX_URL=https://us-east-1-1.aws.cloud2.influxdata.com
INFLUX_TOKEN=<your-influxdb-token>
INFLUX_ORG=<your-org-id>
INFLUX_BUCKET=sensor_data
Keep INFLUX_TOKEN secure. Never commit it to version control.
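The backend should fail fast if any of these variables are missing, so a bad deploy surfaces at startup rather than at the first query. A sketch of how such settings might be loaded; the helper name is hypothetical, and the real backend may use Pydantic settings instead:

```python
import os

def load_influx_settings() -> dict:
    """Read the InfluxDB settings the backend expects from the environment.

    Variable names mirror the Render configuration above; raising early
    on a missing value makes the deployment fail loudly.
    """
    required = ["INFLUX_URL", "INFLUX_TOKEN", "INFLUX_ORG", "INFLUX_BUCKET"]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```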
Step 5: Deploy

  1. Click Create Web Service
  2. Render will automatically:
    • Build the Docker image from backend/Dockerfile
    • Deploy the container
    • Assign a URL: https://your-service.onrender.com
  3. Monitor logs for successful startup
Step 6: Verify the deployment

Test the health endpoint:
curl https://your-service.onrender.com/health
Expected response:
{
  "status": "healthy",
  "db_connected": true
}

Render Cold Starts

Render free-tier services spin down after 15 minutes of inactivity, so the first request after an idle period may take 30-60 seconds.
The frontend sends a keep-alive ping to /ping every 10 minutes to prevent cold starts during active browser sessions.
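The dashboard handles this in the browser, but the same keep-alive idea can also run server-side, for example from a small external watchdog. A Python sketch of that logic; the watchdog itself is not part of the project:

```python
import threading
import urllib.request

def start_keepalive(url: str, interval_s: float = 600.0) -> threading.Timer:
    """Ping the backend's /ping endpoint every `interval_s` seconds
    (600 s = the 10-minute cadence the frontend uses) so the Render
    free-tier instance never idles out."""
    def ping():
        try:
            urllib.request.urlopen(f"{url}/ping", timeout=10)
        except OSError:
            pass  # a failed ping is fine; the next cycle retries
        start_keepalive(url, interval_s)  # reschedule the next ping

    timer = threading.Timer(interval_s, ping)
    timer.daemon = True  # don't keep the process alive just for pings
    timer.start()
    return timer
```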

Frontend Deployment (Vercel)

Deploy the React dashboard to Vercel with automatic HTTPS and global CDN.
Step 1: Create a Vercel account

  1. Go to Vercel
  2. Sign up with GitHub
Step 2: Import the repository

  1. Click Add New → Project
  2. Import your GitHub repository: PREDICTIVE-MAINTENANCE
Step 3: Configure build settings

Set the following configuration:
Field            | Value
Framework Preset | Vite
Root Directory   | frontend
Build Command    | npm run build
Output Directory | dist
Install Command  | npm install
Step 4: Add environment variables (optional)

In Settings → Environment Variables, add:
VITE_API_URL=https://your-backend.onrender.com
This is optional if you’re using Vercel rewrites (recommended approach).
Step 5: Configure API rewrites

The frontend/vercel.json file handles API proxying:
{
  "rewrites": [
    {
      "source": "/api/:path*",
      "destination": "https://predictive-maintenance-uhlb.onrender.com/api/:path*"
    },
    {
      "source": "/system/:path*",
      "destination": "https://predictive-maintenance-uhlb.onrender.com/system/:path*"
    },
    {
      "source": "/health",
      "destination": "https://predictive-maintenance-uhlb.onrender.com/health"
    },
    {
      "source": "/ping",
      "destination": "https://predictive-maintenance-uhlb.onrender.com/ping"
    }
  ]
}
Update the destination URLs to match your Render backend URL.
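Updating those destinations by hand across four rules is easy to get wrong; a hedged sketch of doing it programmatically (this helper is illustrative, not part of the repo):

```python
def point_rewrites_at(config: dict, backend_url: str) -> dict:
    """Return a copy of a vercel.json config with every rewrite
    destination re-pointed at `backend_url`, preserving each rule's
    path suffix (e.g. "/api/:path*")."""
    backend_url = backend_url.rstrip("/")
    updated = {**config, "rewrites": []}
    for rule in config.get("rewrites", []):
        # Keep everything after the original host: "https://h/api/:path*" -> "/api/:path*"
        path = "/" + rule["destination"].split("/", 3)[3]
        updated["rewrites"].append({**rule, "destination": backend_url + path})
    return updated
```

Loading frontend/vercel.json with json.load, running it through this helper, and writing it back would apply the change in one pass.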
Step 6: Deploy

  1. Click Deploy
  2. Vercel will:
    • Install dependencies
    • Build the React app
    • Deploy to global CDN
    • Assign a URL: https://your-project.vercel.app
  3. Automatic deployments trigger on every push to main
Step 7: Test the deployment

  1. Open your Vercel URL
  2. Verify the dashboard loads
  3. Check browser DevTools Network tab
  4. Ensure API calls proxy to Render backend

Custom Domain (Optional)

Step 1: Add the domain in Vercel

  1. Go to Settings → Domains
  2. Add your custom domain (e.g., pm.yourcompany.com)
Step 2: Update DNS

Add these DNS records at your domain provider:
Type  | Name | Value
A     | pm   | 76.76.21.21
CNAME | pm   | cname.vercel-dns.com
Step 3: Wait for verification

Vercel will automatically provision SSL certificates via Let’s Encrypt.

Systemd Deployment (Linux Servers)

For bare-metal Linux servers, use the systemd service for process management.
Step 1: Run the setup script

The automated setup script installs dependencies and configures systemd:
sudo ./scripts/setup_linux.sh
This script:
  • Installs Python 3.11+, pip, venv
  • Creates /opt/predictive-maintenance directory
  • Sets up Python virtual environment
  • Installs backend dependencies
  • Configures systemd service
  • Starts the service automatically
Step 2: Verify the service status

sudo systemctl status predictive-maintenance
Expected output:
● predictive-maintenance.service - Predictive Maintenance Backend API
   Loaded: loaded (/etc/systemd/system/predictive-maintenance.service)
   Active: active (running) since Mon 2026-03-02 12:00:00 UTC
Step 3: Manage the service

sudo systemctl start predictive-maintenance
sudo systemctl stop predictive-maintenance
sudo systemctl restart predictive-maintenance

Systemd Service Configuration

The service is defined in /etc/systemd/system/predictive-maintenance.service:
[Unit]
Description=Predictive Maintenance Backend API
After=network.target influxdb.service
Wants=influxdb.service

[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/opt/predictive-maintenance
Environment="PATH=/opt/predictive-maintenance/venv/bin"
Environment="PYTHONPATH=/opt/predictive-maintenance"
ExecStart=/opt/predictive-maintenance/venv/bin/uvicorn backend.api.main:app --host 0.0.0.0 --port 8000
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
Key features:
  • Restart=always - Auto-restart on failure
  • RestartSec=5 - Wait 5 seconds before restart
  • Logs to systemd journal
  • Starts after network is available

Environment Variables Reference

Backend (Render)

Variable      | Required | Description        | Example
ENVIRONMENT   | Yes      | Runtime mode       | production
PORT          | Yes      | Server port        | 8000
INFLUX_URL    | Yes      | InfluxDB Cloud URL | https://us-east-1-1.aws.cloud2.influxdata.com
INFLUX_TOKEN  | Yes      | InfluxDB API token | kg2i8Mq...
INFLUX_ORG    | Yes      | Organization ID    | 67c4314d97304c09
INFLUX_BUCKET | Yes      | Bucket name        | sensor_data

Frontend (Vercel)

Variable     | Required | Description                         | Example
VITE_API_URL | No       | Backend URL (if not using rewrites) | https://your-backend.onrender.com

Monitoring and Observability

Health Checks

curl https://your-backend.onrender.com/health

Logs

Render: Access logs via dashboard → Service → Logs
Vercel: View logs in dashboard → Project → Functions
InfluxDB: Monitor queries at https://cloud2.influxdata.com
Systemd: View logs with:
sudo journalctl -u predictive-maintenance -f --since "1 hour ago"

Metrics

InfluxDB Cloud provides built-in metrics:
  • Data points written per second
  • Query response time
  • Storage usage
  • API request rate

Troubleshooting

Backend 503 Errors

Problem: Service unavailable on Render
Solution: Check the deployment logs for:
  • InfluxDB connection failures
  • Missing environment variables
  • ML model loading errors (lazy imports)
curl https://your-backend.onrender.com/health -v

Frontend API Errors

Problem: Network Error or CORS issues
Solution: Verify that the Vercel rewrites are working:
  1. Check frontend/vercel.json destination URLs
  2. Inspect browser DevTools → Network tab
  3. Ensure requests to /api/* proxy to Render

InfluxDB Token Scope Issues

Problem: unauthorized access errors
Solution: Regenerate the token with the correct permissions:
  1. Go to InfluxDB Cloud → API Tokens
  2. Create new Read/Write Token
  3. Grant access to sensor_data bucket
  4. Update INFLUX_TOKEN in Render environment variables
  5. Restart the service

Degradation Index (DI) State Loss

Problem: Health score resets after a backend restart
Solution: The system automatically hydrates the DI from InfluxDB on startup via |> last(). If this fails:
  1. Check InfluxDB connection
  2. Verify degradation_index measurement exists
  3. Use POST /system/purge to explicitly reset DI to 0.0
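The hydration step boils down to one Flux query that pulls the most recent stored value. A sketch of building that query; the lookback window and the query-builder itself are assumptions, while the measurement name and the |> last() call come from the behavior described above:

```python
def di_hydration_query(bucket: str = "sensor_data", lookback: str = "-30d") -> str:
    """Build the Flux query that recovers the most recent degradation
    index after a restart. The -30d lookback matches the bucket's
    retention; |> last() keeps only the newest point."""
    return (
        f'from(bucket: "{bucket}")\n'
        f'  |> range(start: {lookback})\n'
        f'  |> filter(fn: (r) => r._measurement == "degradation_index")\n'
        f'  |> last()'
    )
```

Running this query against the bucket (e.g. with the InfluxDB query API) and reading the returned value would restore the in-memory DI.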

Windows Binary Errors on Vercel

Problem: Error 126 during the build
Solution: Remove node_modules from Git tracking:
git rm -r --cached frontend/node_modules
echo "node_modules/" >> frontend/.gitignore
git commit -m "fix: remove node_modules from tracking"
git push
Redeploy on Vercel.

Performance Optimization

Backend

  1. Enable Gunicorn for production (multi-worker):
    gunicorn backend.api.main:app -w 4 -k uvicorn.workers.UvicornWorker
    
  2. Cache ML models in memory (already implemented)
  3. Batch InfluxDB writes for high-frequency data
  4. Use connection pooling for database clients
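Point 3 above, batching InfluxDB writes, can be sketched as follows. The BatchedWriter class is illustrative; the official influxdb-client library also ships a batching write API:

```python
class BatchedWriter:
    """Buffer sensor points and flush them in one write call.

    `flush_fn` stands in for the real InfluxDB write; batching cuts
    HTTP round-trips for high-frequency sensor data.
    """

    def __init__(self, flush_fn, batch_size: int = 500):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer = []

    def add(self, point: str) -> None:
        self.buffer.append(point)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Send whatever is buffered; call this on shutdown too."""
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []
```

In production you would also flush on a timer, so a quiet sensor never leaves points stranded in the buffer.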

Frontend

  1. Enable Vercel Edge Caching for static assets
  2. Lazy load components using React.lazy()
  3. Optimize chart rendering with data sampling
  4. Use service workers for offline capability

Database

  1. Downsample old data using InfluxDB tasks:
    from(bucket: "sensor_data")
      |> range(start: -30d)
      |> aggregateWindow(every: 1h, fn: mean)
      |> to(bucket: "sensor_data_downsampled")
    
  2. Set appropriate retention policies (30 days for raw data)
  3. Use continuous queries for pre-aggregation

Security Best Practices

Always follow these security guidelines in production:
  1. Never commit secrets to version control
    • Use .env files locally
    • Use platform environment variables in production
  2. Rotate API tokens regularly
    • InfluxDB tokens every 90 days
    • Backend service credentials quarterly
  3. Enable CORS restrictions in backend/api/main.py:
    CORS_ORIGINS = [
        "https://your-frontend.vercel.app",
    ]
    
  4. Use HTTPS everywhere
    • Vercel provides automatic SSL
    • Render provides automatic SSL
    • InfluxDB Cloud enforces HTTPS
  5. Implement rate limiting using FastAPI middleware
  6. Validate all inputs with Pydantic schemas
  7. Monitor for anomalous API usage in logs
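For point 5, a token bucket is one common rate-limiting approach: the FastAPI middleware would keep one bucket per client and reject with HTTP 429 when the bucket is empty. A minimal sketch, not project code, with illustrative rates:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate`
    tokens per second; allow() returns False when the caller should
    be throttled."""

    def __init__(self, rate: float = 10.0, capacity: float = 20.0):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```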

Scaling Considerations

Horizontal Scaling

Render: Upgrade to paid plan for:
  • Auto-scaling based on CPU/memory
  • Multiple instances with load balancing
  • 99.9% uptime SLA
Vercel: Automatic edge network scaling
InfluxDB Cloud: Auto-scales with usage-based pricing

Vertical Scaling

For high-frequency data (>1000 data points/sec):
  1. Upgrade Render instance type (2GB → 4GB RAM)
  2. Use InfluxDB Dedicated instances
  3. Implement data batching in backend
