
Introduction

The SeanceAI API lets you build applications that hold AI-powered conversations with historical figures. The API provides endpoints for:
  • Retrieving information about available historical figures
  • Having one-on-one conversations with specific figures
  • Creating “dinner party” conversations with multiple figures
  • Getting contextual follow-up question suggestions

Base URL

http://localhost:5000
In production, replace with your deployed application URL.

Authentication

The SeanceAI API uses OpenRouter for AI model access. You need to configure the OPENROUTER_API_KEY environment variable on the server side. No client-side authentication is required for API requests.

Environment Variables

OPENROUTER_API_KEY (string, required)
Your OpenRouter API key for accessing AI models.
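For local development, the variable can be exported before starting the server. A minimal sketch; the key value shown is a placeholder, not a real credential:

```shell
# Make the OpenRouter key available to the server process (placeholder value)
export OPENROUTER_API_KEY="your-openrouter-key-here"
```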

Request Format

All POST endpoints expect JSON payloads with Content-Type: application/json.
curl -X POST http://localhost:5000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"figure_id": "einstein", "message": "Hello!"}'

Response Format

All endpoints return JSON responses. Successful responses have a 200 status code, while errors return appropriate 4xx or 5xx codes.

Success Response

{
  "response": "Ah, greetings! How wonderful to make your acquaintance...",
  "figure": {
    "id": "einstein",
    "name": "Albert Einstein",
    "title": "Theoretical Physicist"
  }
}

Error Response

{
  "error": "Figure not found"
}
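A client can branch on the status code and the single `error` field shown above. A minimal sketch in Python; the helper and exception names are illustrative, not part of the API:

```python
class SeanceAPIError(Exception):
    """Raised when the API returns a non-200 response."""

def parse_response(status_code, body):
    """Return the decoded JSON payload on success, raise SeanceAPIError otherwise.

    `body` is the JSON response already decoded into a dict.
    """
    if status_code == 200:
        return body
    # Error bodies carry a single "error" message field
    raise SeanceAPIError(body.get("error", f"HTTP {status_code}"))

# Success: the payload is returned unchanged
ok = parse_response(200, {"response": "Greetings!", "figure": {"id": "einstein"}})

# Failure: the server-provided message becomes the exception text
try:
    parse_response(404, {"error": "Figure not found"})
except SeanceAPIError as e:
    print(e)  # Figure not found
```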

Error Codes

  • 400 Bad Request: Missing or invalid request parameters
  • 404 Not Found: Requested resource (figure, endpoint) not found
  • 500 Internal Server Error: Server error, often due to AI model issues or rate limiting

Rate Limiting

The API implements automatic retry logic with exponential backoff and fallback models when rate limits are encountered. If all models are rate-limited, you’ll receive an error message:
{
  "error": "The spirits are overwhelmed with visitors right now. Please wait a moment and try again, or select a different AI model from the dropdown."
}
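The server already retries internally, but a client may still want its own backoff when this error comes back. A generic retry sketch, not part of the API; the function name and defaults are illustrative:

```python
import time

def with_backoff(call, max_attempts=4, base_delay=1.0):
    """Retry `call` with exponential backoff; `call` should raise on failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Wrapping the chat request in `with_backoff` smooths over transient rate-limit errors without hammering the server.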

Available Models

The API supports multiple AI models organized by capability tier:

Swift Tier (Free)

  • google/gemma-3-12b-it:free (default)
  • google/gemma-3-27b-it:free
  • google/gemma-3-4b-it:free
  • meta-llama/llama-3.3-70b-instruct:free
  • meta-llama/llama-3.1-405b-instruct:free

Balanced Tier

  • openai/gpt-4o-mini
  • anthropic/claude-3.5-haiku
  • deepseek/deepseek-chat

Advanced Tier

  • anthropic/claude-sonnet-4
  • openai/gpt-4o
  • google/gemini-2.5-pro-preview
  • anthropic/claude-opus-4
You can retrieve the full list via the /api/models endpoint or specify a model in the request body using the model parameter.
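Given the response shape of GET /api/models, a client can group models by tier before presenting them in a picker. A sketch in Python using the example data from this page; the helper name is illustrative:

```python
from collections import defaultdict

def models_by_tier(models):
    """Group model dicts from GET /api/models by their `tier` field."""
    tiers = defaultdict(list)
    for m in models:
        tiers[m["tier"]].append(m["id"])
    return dict(tiers)

sample = [
    {"id": "google/gemma-3-12b-it:free", "name": "Gemma 3 12B", "tier": "swift"},
    {"id": "openai/gpt-4o-mini", "name": "GPT-4o Mini", "tier": "balanced"},
    {"id": "anthropic/claude-sonnet-4", "name": "Claude Sonnet 4", "tier": "advanced"},
]
print(models_by_tier(sample))
# {'swift': ['google/gemma-3-12b-it:free'], 'balanced': ['openai/gpt-4o-mini'], 'advanced': ['anthropic/claude-sonnet-4']}
```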

GET /api/models

Returns the list of available AI models and the default model.

Response

models (array)
Array of model objects, each containing:
  • id (string): Model identifier for use in API requests
  • name (string): Human-readable model name
  • tier (string): Model tier: “swift”, “balanced”, or “advanced”

default (string)
The default model ID used when no model is specified

Example Response

{
  "models": [
    {
      "id": "google/gemma-3-12b-it:free",
      "name": "Gemma 3 12B",
      "tier": "swift"
    },
    {
      "id": "openai/gpt-4o-mini",
      "name": "GPT-4o Mini",
      "tier": "balanced"
    },
    {
      "id": "anthropic/claude-sonnet-4",
      "name": "Claude Sonnet 4",
      "tier": "advanced"
    }
  ],
  "default": "google/gemma-3-12b-it:free"
}

GET /api/health

Health check endpoint for monitoring application status. Useful for uptime monitoring services and deployment health checks.

Response

status (string)
Application health status: “healthy” or “degraded”

api_key_configured (boolean)
Whether the OPENROUTER_API_KEY environment variable is configured

api_key_length (number)
Length of the configured API key (0 if not set)

warning (string)
Warning message when status is “degraded”; present only when the API key is not configured

Example Response (Healthy)

{
  "status": "healthy",
  "api_key_configured": true,
  "api_key_length": 64
}

Example Response (Degraded)

{
  "status": "degraded",
  "api_key_configured": false,
  "api_key_length": 0,
  "warning": "OPENROUTER_API_KEY is not set"
}
Use this endpoint for Railway uptime monitoring to keep your free-tier app awake. See the Deployment Guide for details.
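For monitoring, a probe only needs to look at the documented fields. A minimal sketch; the function name is illustrative:

```python
def is_healthy(health):
    """Return True when a GET /api/health body reports a usable configuration."""
    return health.get("status") == "healthy" and health.get("api_key_configured", False)

print(is_healthy({"status": "healthy", "api_key_configured": True, "api_key_length": 64}))   # True
print(is_healthy({"status": "degraded", "api_key_configured": False, "api_key_length": 0}))  # False
```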

Next Steps

Figures

Learn about retrieving historical figure data

Chat

Start conversations with historical figures

Dinner Party

Host conversations with multiple figures

Suggestions

Generate contextual follow-up questions
