
Overview

SeanceAI is a standard Flask application that can be deployed to any platform supporting Python. This guide covers the recommended platforms and configuration.
All deployment platforms require you to set the OPENROUTER_API_KEY environment variable. See the Configuration guide for details.
Railway

Railway is the recommended platform for deploying SeanceAI due to its simplicity and automatic configuration detection.

Quick Deploy

Step 1: Fork the repository

Fork the SeanceAI repository to your GitHub account.
Step 2: Create a Railway project

  1. Visit Railway.app and sign in
  2. Click “New Project”
  3. Select “Deploy from GitHub repo”
  4. Choose your forked SeanceAI repository
Step 3: Configure environment variables

In your Railway project settings:
  1. Navigate to the “Variables” tab
  2. Add a new variable:
    • Key: OPENROUTER_API_KEY
    • Value: Your OpenRouter API key
Step 4: Deploy

Railway will automatically:
  • Detect the Flask application
  • Use the railway.json configuration
  • Install dependencies from requirements.txt
  • Start the app with Gunicorn using the Procfile
Your app will be live at https://your-app.up.railway.app

Railway Configuration

SeanceAI includes a railway.json file with optimized settings:
{
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "gunicorn app:app --config gunicorn_config.py",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 10
  }
}

Keeping Your App Awake (Free Tier)

On Railway’s Hobby (free) tier, services sleep after ~5 minutes of inactivity. Cold starts can take 30 seconds to 2 minutes.
To keep your app responsive for visitors (useful for portfolios):

Option A: Free Uptime Monitoring

Use a free uptime monitoring service to ping your health endpoint every 5-10 minutes. URL to monitor:
https://your-app.up.railway.app/api/health
Step 1: Sign up for UptimeRobot

Create a free account at UptimeRobot.com
Step 2: Add a new monitor

  • Monitor Type: HTTP(S)
  • URL: https://your-app.up.railway.app/api/health
  • Monitoring Interval: 5 minutes
Step 3: Save and activate

The monitor will now keep your app awake by sending requests every 5 minutes.
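If you would rather not rely on a third-party monitor, any always-on machine can do the same job with cron. A minimal crontab entry (using the placeholder URL from above):

```
# Ping the health endpoint every 5 minutes; -f fails silently on HTTP errors
*/5 * * * * curl -fsS https://your-app.up.railway.app/api/health > /dev/null
```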

Option B: Paid Railway Plan

Upgrade to a Railway plan that keeps services running without sleep. Check Railway pricing for always-on options.

Fly.io

Fly.io is another excellent option with global edge deployment.

Deploy to Fly.io

Step 1: Install the Fly CLI

Follow the Fly.io installation guide for your platform. On macOS:
brew install flyctl
Step 2: Authenticate with Fly

fly auth login
Step 3: Launch the app

From your SeanceAI directory:
fly launch
This will:
  • Detect the Flask application
  • Use the existing fly.toml configuration
  • Create a new Fly.io app
Step 4: Set environment variables

Set your OpenRouter API key as a secret:
fly secrets set OPENROUTER_API_KEY=your_key_here
Step 5: Deploy

Deploy your application:
fly deploy
Your app will be live at https://your-app.fly.dev

Fly.io Configuration

The included fly.toml file configures:
app = "seanceai"
primary_region = "iad"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 256
Fly.io automatically scales to zero when idle and wakes on requests, similar to Railway’s free tier.

Other Platforms

SeanceAI can be deployed to any platform that supports Python and Flask:

Heroku

Step 1: Install the Heroku CLI

Download the Heroku CLI from Heroku's website.
Step 2: Create the app and deploy

heroku create your-app-name
heroku config:set OPENROUTER_API_KEY=your_key_here
git push heroku main
The included Procfile will be automatically detected:
web: gunicorn app:app --config gunicorn_config.py

Render

  1. Connect your GitHub repository on Render.com
  2. Create a new Web Service
  3. Set the start command: gunicorn app:app --config gunicorn_config.py
  4. Add environment variable: OPENROUTER_API_KEY
  5. Deploy

DigitalOcean App Platform

  1. Create new app from GitHub on DigitalOcean
  2. DigitalOcean will detect Python and dependencies
  3. Set environment variable: OPENROUTER_API_KEY
  4. Deploy

Google Cloud Run

Step 1: Create a Dockerfile

Add a Dockerfile to your project:
FROM python:3.11-slim
WORKDIR /app

# Copy requirements first so Docker caches the dependency layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Exec form so Gunicorn receives shutdown signals directly
CMD ["gunicorn", "app:app", "--config", "gunicorn_config.py"]
Step 2: Deploy to Cloud Run

gcloud run deploy seanceai \
  --source . \
  --set-env-vars OPENROUTER_API_KEY=your_key_here \
  --allow-unauthenticated

AWS Elastic Beanstalk

  1. Install the EB CLI
  2. Initialize and create environment:
    eb init -p python-3.11 seanceai
    eb create seanceai-env
    eb setenv OPENROUTER_API_KEY=your_key_here
    
  3. Deploy: eb deploy

Production Configuration

Gunicorn Settings

The included gunicorn_config.py is optimized for production:
  • Worker Class: gevent for async streaming support
  • Workers: 2 processes for handling concurrent requests
  • Timeout: 120 seconds for long-running streaming requests
  • Logging: Outputs to stdout/stderr for platform log aggregation
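The settings above correspond to a Gunicorn configuration file roughly like the following sketch. This is an illustration of the documented settings, not the exact contents of the repository's gunicorn_config.py, and the PORT default is assumed:

```python
import os

# Async worker so streamed responses don't block other requests
worker_class = "gevent"

# Two worker processes for concurrent requests
workers = 2

# Generous timeout for long-running streaming responses
timeout = 120

# Bind to the platform-assigned port (8080 assumed as a fallback)
bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"

# "-" sends access and error logs to stdout/stderr for log aggregation
accesslog = "-"
errorlog = "-"
```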

Environment Variables

For production, ensure:
# Required
OPENROUTER_API_KEY=your_key_here

# Auto-configured by most platforms
PORT=8080

# Must be false in production
FLASK_DEBUG=false
Never enable FLASK_DEBUG=true in production. It exposes the interactive debugger, which can reveal sensitive information and allow arbitrary code execution.
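Environment variables arrive as strings, so boolean flags like FLASK_DEBUG must be parsed explicitly. A hypothetical helper (`env_flag` is illustrative, not from the codebase) shows the usual pattern, where anything other than an explicit truthy value stays off:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Parse an environment variable like FLASK_DEBUG into a boolean.

    Only explicit truthy strings enable the flag, so an unset or
    misspelled value safely defaults to off in production.
    """
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}

os.environ["FLASK_DEBUG"] = "false"
print(env_flag("FLASK_DEBUG"))  # False
```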

Monitoring and Maintenance

Health Checks

All platforms should monitor the health endpoint:
GET /api/health
Example response:
{
  "status": "healthy",
  "api_key_configured": true,
  "api_key_length": 64
}
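A monitoring script can go beyond checking the HTTP status and validate the response body itself. The helper below is a hypothetical client-side check against the JSON shape shown above, not part of SeanceAI:

```python
import json

def is_healthy(body: str) -> bool:
    """Return True if a /api/health JSON body reports a healthy app
    with its API key configured."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    return (
        payload.get("status") == "healthy"
        and payload.get("api_key_configured") is True
    )

sample = '{"status": "healthy", "api_key_configured": true, "api_key_length": 64}'
print(is_healthy(sample))  # True
```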

Logging

Gunicorn logs to stdout/stderr, which most platforms automatically collect:
  • Railway: View logs in the deployments tab
  • Fly.io: Use fly logs
  • Heroku: Use heroku logs --tail

Updating Your Deployment

Most platforms auto-deploy on git push:
  1. Make changes locally
  2. Commit and push to your GitHub repository
  3. Platform automatically rebuilds and deploys
For manual deployments:
  • Fly.io: fly deploy
  • Heroku: git push heroku main
  • Railway: no manual command needed (auto-deploys on push)

Troubleshooting

App Won’t Start

  • Verify OPENROUTER_API_KEY is set correctly
  • Check platform logs for errors
  • Ensure Python 3.11+ is specified in platform config

Streaming Not Working

  • Verify gevent is installed (pip list | grep gevent)
  • Check that Gunicorn uses worker_class = "gevent"
  • Ensure platform doesn’t buffer streaming responses

API Rate Limits

  • SeanceAI automatically retries and falls back to alternative models
  • For high traffic, consider upgrading your OpenRouter plan
  • Monitor usage at OpenRouter dashboard
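The retry-and-fallback behavior can be sketched as follows. Both `call_model` and the model names are illustrative stand-ins, not SeanceAI's actual implementation or model list:

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for an OpenRouter request; raises when rate limited."""
    if model == "primary-model":
        raise RuntimeError("429: rate limited")
    return f"{model} answered: {prompt}"

def complete_with_fallback(prompt: str, models: list[str]) -> str:
    """Try each model in order, moving to the next on failure."""
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")

print(complete_with_fallback("hello", ["primary-model", "fallback-model"]))
# fallback-model answered: hello
```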
