Quick Start Guide

Get SeanceAI running locally in just a few steps. You’ll be conversing with historical figures in under 5 minutes!

Prerequisites

Before you begin, make sure you have:
  • Python 3.11 or later
  • Git
  • An OpenRouter API key (sign up at openrouter.ai)
OpenRouter provides access to multiple AI models. The free tier includes models like Gemma 3 and Llama 3.3 that work great with SeanceAI.

Installation Steps

1. Clone the repository

Clone the SeanceAI repository to your local machine:
git clone https://github.com/ARJUNVARMA2000/Seance_AI.git
cd Seance_AI
2. Create a virtual environment

Set up a Python virtual environment to keep dependencies isolated:
python -m venv venv
Then activate it. On Windows:
venv\Scripts\activate
On macOS/Linux:
source venv/bin/activate
3. Install dependencies

Install all required Python packages:
pip install -r requirements.txt
This installs Flask, requests, python-dotenv, and other dependencies needed to run SeanceAI.
4. Configure your API key

Create a .env file in the project root and add your OpenRouter API key:
echo OPENROUTER_API_KEY=your_key_here > .env
Replace your_key_here with your actual OpenRouter API key.
Keep your API key secure! Never commit the .env file to version control.
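python-dotenv loads this file for the app automatically, so no extra code is needed. If you want to sanity-check the value yourself, here is a minimal sketch using only the standard library (the `read_env_key` helper is ours, not part of SeanceAI):

```python
from pathlib import Path

def read_env_key(path=".env", name="OPENROUTER_API_KEY"):
    """Parse a simple KEY=value .env file and return the named value, or None."""
    for line in Path(path).read_text().splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            if key.strip() == name:
                return value.strip()
    return None
```

If this returns None, the key name is misspelled or the file is in the wrong directory.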
5. Start the development server

Launch the Flask application:
python app.py
You should see output indicating the server is running:
* Running on http://0.0.0.0:5000
6. Open your browser

Navigate to http://localhost:5000 in your web browser. You’ll see the SeanceAI interface with options to start a Seance or host a Dinner Party!

Your First Conversation

1. Choose a mode

On the home page, select either:
  • Start a Seance - for one-on-one conversations
  • Host a Dinner Party - for multi-figure discussions
2. Select a historical figure

Browse through 60+ historical figures organized by era:
  • Ancient World
  • Renaissance
  • 19th Century
  • Modern Era
Use the search bar to quickly find specific figures.
3. Start conversing

Once you’ve selected a figure, you’ll see:
  • Their portrait and biographical information
  • Starter questions to help you begin
  • A text input for your messages
Type your question and press Enter or click Send!
4. Enjoy authentic responses

The AI will respond in character with:
  • Era-appropriate language and knowledge
  • Personality-specific speaking style
  • Genuine curiosity about modern concepts
Use the suggested follow-up questions or ask anything you’d like!

Configuration Options

SeanceAI supports several environment variables you can set in your .env file:
# Required
OPENROUTER_API_KEY=your_key_here

# Optional
FLASK_DEBUG=true          # Enable debug mode (default: false)
PORT=5000                  # Server port (default: 5000)
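In code, settings like these are typically read with sensible defaults. A hypothetical helper mirroring the variables above (the actual app.py may structure this differently):

```python
import os

def load_config(env=os.environ):
    """Collect SeanceAI settings from environment variables, applying defaults."""
    return {
        "api_key": env.get("OPENROUTER_API_KEY"),  # required, no default
        "debug": env.get("FLASK_DEBUG", "false").lower() == "true",
        "port": int(env.get("PORT", "5000")),
    }
```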

Available API Endpoints

Once running, SeanceAI exposes these REST endpoints:
Method  Endpoint                 Description
GET     /                        Serve main HTML page
GET     /api/figures             Return list of all historical figures
GET     /api/figures/<id>        Return single figure data
GET     /api/models              List available AI models
GET     /api/health              Health check endpoint
POST    /api/chat                Send message, receive AI response
POST    /api/chat/stream         Streaming chat endpoint (SSE)
POST    /api/dinner-party/chat   Multi-figure conversation
POST    /api/suggestions         Get contextual follow-up questions
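The streaming endpoint uses Server-Sent Events, where each data line begins with a `data: ` prefix. A sketch of a client-side consumer, assuming that standard SSE framing (the function names are ours, and the exact payload format inside each event is server-defined):

```python
import urllib.request

def parse_sse_line(line):
    """Return the payload of one 'data:' SSE line, or None for other lines."""
    line = line.strip()
    if line.startswith("data: "):
        return line[len("data: "):]
    return None

def stream_chat(body, url="http://localhost:5000/api/chat/stream"):
    """Yield data payloads from the streaming endpoint (requires a running server)."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            payload = parse_sse_line(raw.decode("utf-8"))
            if payload is not None:
                yield payload
```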

Example API Request

You can interact with SeanceAI programmatically:
curl -X POST http://localhost:5000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "figure_id": "einstein",
    "message": "What is your theory of relativity?",
    "history": []
  }'
Response:
{
  "response": "Ah, a fundamental question! My theory of relativity...",
  "figure": {
    "id": "einstein",
    "name": "Albert Einstein",
    "era": "Modern Era"
  }
}
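The same request can be made from Python using only the standard library. A minimal sketch (the `build_chat_payload` and `ask_figure` names are ours, not part of SeanceAI):

```python
import json
import urllib.request

def build_chat_payload(figure_id, message, history=None):
    """Assemble the JSON body expected by POST /api/chat."""
    return json.dumps({
        "figure_id": figure_id,
        "message": message,
        "history": history or [],
    })

def ask_figure(figure_id, message, history=None, base_url="http://localhost:5000"):
    """POST one chat turn and return the parsed response dict (requires a running server)."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=build_chat_payload(figure_id, message, history).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```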

Troubleshooting

If the server won’t start:
  • Check that Python 3.11+ is installed: python --version
  • Ensure all dependencies are installed: pip install -r requirements.txt
  • Verify your virtual environment is activated
If you get API key errors:
  • Verify your .env file exists in the project root
  • Check that your API key is valid at OpenRouter
  • Make sure the key is set correctly: OPENROUTER_API_KEY=your_key
If you hit rate limits, SeanceAI includes automatic fallback handling:
  • If one model is rate-limited, it automatically tries fallback models
  • Switch to a different AI model using the dropdown in the UI
  • Wait a moment before trying again
If port 5000 is in use, specify a different port:
PORT=8080 python app.py

Next Steps

Explore Features

Learn about all the features SeanceAI offers

Try the Live Demo

Experience SeanceAI without installing anything
