Overview
SeanceAI is a standard Flask application that can be deployed to any platform supporting Python. This guide covers the recommended platforms and configuration.

All deployment platforms require you to set the OPENROUTER_API_KEY environment variable. See the Configuration guide for details.

Railway (Recommended)

Railway is the recommended platform for deploying SeanceAI due to its simplicity and automatic configuration detection.

Quick Deploy
Fork the repository
Fork the SeanceAI repository to your GitHub account.
Create a Railway project
- Visit Railway.app and sign in
- Click “New Project”
- Select “Deploy from GitHub repo”
- Choose your forked SeanceAI repository
Configure environment variables
In your Railway project settings:
- Navigate to the “Variables” tab
- Add a new variable:
  - Key: OPENROUTER_API_KEY
  - Value: Your OpenRouter API key
Railway Configuration
SeanceAI includes a railway.json file with optimized settings.
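For illustration, a railway.json along these lines is typical (hypothetical values; check the repository's actual file):

```json
{
  "build": { "builder": "NIXPACKS" },
  "deploy": {
    "startCommand": "gunicorn app:app --config gunicorn_config.py",
    "healthcheckPath": "/api/health",
    "restartPolicyType": "ON_FAILURE"
  }
}
```

Railway reads this file at deploy time, so no dashboard build settings are needed beyond the environment variables.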
Keeping Your App Awake (Free Tier)
To keep your app responsive for visitors (useful for portfolios):

Option A: Free Uptime Monitoring

Use a free uptime monitoring service to ping your health endpoint (https://your-app.up.railway.app/api/health) every 5-10 minutes:

- UptimeRobot - Free, 5-minute checks
- cron-job.org - Free scheduled requests
- Better Uptime - Free tier available
Sign up for UptimeRobot
Create a free account at UptimeRobot.com
Add a new monitor
- Monitor Type: HTTP(S)
- URL: https://your-app.up.railway.app/api/health
- Monitoring Interval: 5 minutes
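You can verify the endpoint responds before wiring up a monitor, for example:

```shell
# Manual check; replace the hostname with your Railway app's URL
curl -fsS https://your-app.up.railway.app/api/health
```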
Option B: Paid Railway Plan
Upgrade to a Railway plan that keeps services running without sleep. Check Railway pricing for always-on options.

Fly.io

Fly.io is another excellent option with global edge deployment.

Deploy to Fly.io
Install Fly CLI
Follow the Fly.io installation guide for your platform.
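For example, on macOS or Linux the official install script can be used:

```shell
# Install the Fly CLI (macOS/Linux)
curl -L https://fly.io/install.sh | sh

# Then authenticate
fly auth login
```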
Launch the app
From your SeanceAI directory, run fly launch. This will:
- Detect the Flask application
- Use the existing fly.toml configuration
- Create a new Fly.io app
Fly.io Configuration
The included fly.toml file preconfigures the deployment.
Fly.io automatically scales to zero when idle and wakes on requests, similar to Railway’s free tier.
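A minimal fly.toml with this scale-to-zero behavior might look like the following (illustrative; the app name, region, and port are assumptions, not the repository's actual values):

```toml
app = "seanceai"        # hypothetical app name
primary_region = "iad"  # hypothetical region

[http_service]
  internal_port = 8000
  force_https = true
  auto_stop_machines = true   # scale to zero when idle
  auto_start_machines = true  # wake on incoming requests
  min_machines_running = 0
```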
Other Platforms
SeanceAI can be deployed to any platform that supports Python and Flask:

Heroku
Install Heroku CLI
Download from Heroku CLI
The included Procfile will be automatically detected.
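With the CLI installed, a deploy typically looks like this (the app name and key are placeholders):

```shell
heroku create my-seanceai           # hypothetical app name
heroku config:set OPENROUTER_API_KEY=your-key-here
git push heroku main
```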
Render
- Connect your GitHub repository on Render.com
- Create a new Web Service
- Set the start command: gunicorn app:app --config gunicorn_config.py
- Add environment variable: OPENROUTER_API_KEY
- Deploy
DigitalOcean App Platform
- Create new app from GitHub on DigitalOcean
- DigitalOcean will detect Python and dependencies
- Set environment variable:
OPENROUTER_API_KEY - Deploy
Google Cloud Run
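Cloud Run can build and deploy directly from source; a typical invocation looks like this (the service name, region, and key are placeholders):

```shell
gcloud run deploy seanceai \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars OPENROUTER_API_KEY=your-key-here
```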
AWS Elastic Beanstalk
- Install the EB CLI
- Initialize and create an environment: eb init, then eb create
- Deploy: eb deploy
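In practice the steps above might look like this (application and environment names are placeholders):

```shell
eb init -p python-3.11 seanceai   # initialize the application
eb create seanceai-env            # create an environment
eb setenv OPENROUTER_API_KEY=your-key-here
eb deploy
```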
Production Configuration
Gunicorn Settings
The included gunicorn_config.py is optimized for production:
- Worker Class: gevent for async streaming support
- Workers: 2 processes for handling concurrent requests
- Timeout: 120 seconds for long-running streaming requests
- Logging: Outputs to stdout/stderr for platform log aggregation
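A configuration matching these settings might look like the following (a sketch; the repository's actual gunicorn_config.py may differ, and the $PORT handling is an assumption):

```python
import os

# Bind to the port the platform provides; most PaaS platforms set $PORT.
bind = f"0.0.0.0:{os.environ.get('PORT', '8000')}"

worker_class = "gevent"  # async workers for streaming responses
workers = 2              # two processes for concurrent requests
timeout = 120            # allow long-running streaming requests

# Log to stdout/stderr so the platform can aggregate logs.
accesslog = "-"
errorlog = "-"
```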
Environment Variables
For production, ensure all required environment variables are set (at minimum, OPENROUTER_API_KEY).

Monitoring and Maintenance
Health Checks
All platforms should monitor the health endpoint at /api/health.

Logging
Gunicorn logs to stdout/stderr, which most platforms automatically collect:

- Railway: View logs in the deployments tab
- Fly.io: Use fly logs
- Heroku: Use heroku logs --tail
Updating Your Deployment
Most platforms auto-deploy on git push:

- Make changes locally
- Commit and push to your GitHub repository
- Platform automatically rebuilds and deploys

To trigger a deploy manually:

- Fly.io: fly deploy
- Heroku: git push heroku main
- Railway: Auto-deploys on push
Troubleshooting
App Won’t Start
- Verify OPENROUTER_API_KEY is set correctly
- Check platform logs for errors
- Ensure Python 3.11+ is specified in platform config
Streaming Not Working
- Verify gevent is installed (pip list | grep gevent)
- Check that Gunicorn uses worker_class = "gevent"
- Ensure the platform doesn’t buffer streaming responses
API Rate Limits
- SeanceAI automatically retries and falls back to alternative models
- For high traffic, consider upgrading your OpenRouter plan
- Monitor usage at the OpenRouter dashboard
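The retry-and-fallback behavior can be sketched as follows (a minimal illustration; the model names and the call_model callback are hypothetical, not SeanceAI's actual code):

```python
import time

# Hypothetical fallback order; the real application's model list may differ.
MODELS = ["primary-model", "fallback-model-a", "fallback-model-b"]

class RateLimitError(Exception):
    """Raised when the upstream API returns HTTP 429."""

def complete(prompt, call_model, retries=2, backoff=1.0):
    """Try each model in order, retrying with backoff before falling back."""
    for model in MODELS:
        for attempt in range(retries):
            try:
                return call_model(model, prompt)
            except RateLimitError:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all models rate-limited")
```

If every model stays rate-limited through its retries, the caller sees a single error rather than a hung request.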
Next Steps
- Review Configuration for advanced settings
- Check out the Installation guide for local development
- View the API Documentation