Quick Start Guide
Get SeanceAI running locally in just a few steps. You’ll be conversing with historical figures in under 5 minutes!

Prerequisites
Before you begin, make sure you have:

- Python 3.11 or higher installed
- An OpenRouter API key (get one free at openrouter.ai)
OpenRouter provides access to multiple AI models. The free tier includes models like Gemma 3 and Llama 3.3 that work great with SeanceAI.
Installation Steps
Create a virtual environment
Set up a Python virtual environment to keep dependencies isolated, then activate it. The activation command differs between Windows and macOS/Linux.
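The step above can be sketched with Python's built-in `venv` module:

```shell
# Create a virtual environment in a folder named "venv"
python -m venv venv

# Activate it -- on Windows:
#   venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
```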
Install dependencies
Install all required Python packages from `requirements.txt`. This installs Flask, requests, python-dotenv, and the other dependencies needed to run SeanceAI.
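From the project root, with the virtual environment active:

```shell
# Install everything listed in requirements.txt
pip install -r requirements.txt
```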
Configure your API key
Create a `.env` file in the project root and add your OpenRouter API key as a single line: `OPENROUTER_API_KEY=your_key_here`. Replace `your_key_here` with your actual OpenRouter API key.

Start the development server
Launch the Flask application. You should see output indicating the server is running.
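A sketch of the launch command, assuming the entry point is named `app.py` (check the repository for the actual filename):

```shell
# Start the Flask development server (the app.py filename is an assumption)
python app.py
```

With Flask's defaults, the server listens on http://127.0.0.1:5000.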
Your First Conversation
Choose a mode
On the home page, select either:
- Start a Seance - for one-on-one conversations
- Host a Dinner Party - for multi-figure discussions
Select a historical figure
Browse through 60+ historical figures organized by era:
- Ancient World
- Renaissance
- 19th Century
- Modern Era
Start conversing
Once you’ve selected a figure, you’ll see:
- Their portrait and biographical information
- Starter questions to help you begin
- A text input for your messages
Configuration Options
SeanceAI supports several environment variables you can set in your `.env` file, including `OPENROUTER_API_KEY`.
Available API Endpoints
Once running, SeanceAI exposes these REST endpoints:

| Method | Endpoint | Description |
|---|---|---|
| GET | `/` | Serve main HTML page |
| GET | `/api/figures` | Return list of all historical figures |
| GET | `/api/figures/<id>` | Return single figure data |
| GET | `/api/models` | List available AI models |
| GET | `/api/health` | Health check endpoint |
| POST | `/api/chat` | Send message, receive AI response |
| POST | `/api/chat/stream` | Streaming chat endpoint (SSE) |
| POST | `/api/dinner-party/chat` | Multi-figure conversation |
| POST | `/api/suggestions` | Get contextual follow-up questions |
Example API Request
You can interact with SeanceAI programmatically over HTTP.

Troubleshooting
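For example, a `curl` sketch against the `/api/chat` endpoint; the JSON field names (`figure_id`, `message`) and the figure id are assumptions for illustration, so check the actual request schema in the code:

```shell
# POST a chat message to the local server (field names are assumed, not confirmed)
curl -X POST http://localhost:5000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"figure_id": "einstein", "message": "What inspired your theory of relativity?"}'
```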
Server won't start
- Check that Python 3.11+ is installed: `python --version`
- Ensure all dependencies are installed: `pip install -r requirements.txt`
- Verify your virtual environment is activated
API key errors
- Verify your `.env` file exists in the project root
- Check that your API key is valid at OpenRouter
- Make sure the key is set correctly: `OPENROUTER_API_KEY=your_key`
Rate limit errors
SeanceAI includes automatic fallback handling:
- If one model is rate-limited, it automatically tries fallback models
- Switch to a different AI model using the dropdown in the UI
- Wait a moment before trying again
Port already in use
If port 5000 is in use, specify a different port:
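How you pick a different port depends on how the app is launched; if it uses the Flask CLI, a sketch would be (5001 is an arbitrary free port):

```shell
# Run on port 5001 instead of the default 5000 (Flask CLI assumed)
flask run --port 5001
```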
Next Steps
- Explore Features: learn about all the features SeanceAI offers
- Try the Live Demo: experience SeanceAI without installing anything