{
  "response": "Ah, what a wonderful question! You know, it all started with a thought experiment when I was just a young patent clerk in Bern. I imagined myself riding alongside a beam of light, traveling at the speed of light itself. What would I see? Would the light appear frozen, stationary beside me?\n\nThis simple question troubled me deeply, and it made me realize that our notions of absolute time and space couldn't be correct. The breakthrough came when I understood that time itself is relative - it flows differently depending on how fast you're moving! The faster you go, the slower time passes for you relative to someone standing still.\n\nBut tell me, what draws you to relativity? Is it the physics that fascinates you, or perhaps the philosophical implications?",
  "figure": {
    "id": "einstein",
    "name": "Albert Einstein",
    "title": "Theoretical Physicist",
    "birth_year": 1879,
    "death_year": 1955,
    "era": "Modern Era",
    "tagline": "Father of Relativity, Nobel Laureate"
  }
}
{
  "error": "The spirits are overwhelmed with visitors right now. Please wait a moment and try again, or select a different AI model from the dropdown.",
  "figure": { /* figure object */ }
}
curl -X POST http://localhost:5000/api/chat/stream \
  -H "Content-Type: application/json" \
  -d '{
    "figure_id": "einstein",
    "message": "Tell me about your theory of relativity"
  }'
data: {"error": "The spirits are overwhelmed with visitors. Please wait a moment and try again, or select a different AI model.", "rate_limited": true}
Conversation History: The API maintains context by accepting a history array. The system automatically limits history to the last 20 messages to manage token usage.
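The same 20-message cap can be applied client-side before each request so payloads never exceed what the server will use. A minimal sketch; the message shape (`{"role", "content"}`) is an assumption, not a documented schema:

```python
# Keep only the most recent messages before sending a request.
# The server trims to 20 regardless; trimming locally just saves bandwidth.
MAX_HISTORY = 20

def trim_history(history):
    """Return the last MAX_HISTORY messages, preserving order."""
    return history[-MAX_HISTORY:]

# Example: a 25-message conversation is trimmed to the last 20.
history = [{"role": "user", "content": f"message {i}"} for i in range(25)]
trimmed = trim_history(history)
print(len(trimmed))           # 20
print(trimmed[0]["content"])  # "message 5"
```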
System Prompts: Each figure has a unique system prompt that defines their personality, beliefs, speaking style, and historical context. The prompt ensures the AI stays in character.
Model Selection: You can override the default model by specifying the model parameter. See the API Overview for available models.
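Overriding the default model only means adding a `model` field to the JSON body. A sketch of assembling such a payload; the model name used here is a placeholder, not necessarily an available model (consult the API Overview for real options):

```python
import json

def build_chat_payload(figure_id, message, history=None, model=None):
    """Assemble the JSON body for a chat request. `model` is optional
    and overrides the server's default model when present."""
    payload = {"figure_id": figure_id, "message": message}
    if history:
        payload["history"] = history
    if model:
        # Placeholder model name; see the API Overview for available models.
        payload["model"] = model
    return json.dumps(payload)

body = build_chat_payload("einstein", "Tell me about light",
                          model="example-model-name")
print(body)
```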
Retry Logic: The API automatically retries failed requests with exponential backoff and falls back to alternative models when rate limits are hit.
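Although the API retries server-side, a client may still want its own retry loop for transport errors. A minimal exponential-backoff sketch; the `flaky` callable below is a stand-in for any request function, not part of this API:

```python
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.5):
    """Call `call()` until it succeeds, sleeping base_delay * 2**attempt
    between failures (0.5s, 1s, 2s, ...). Re-raises on the final attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo with a callable that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```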
Streaming Benefits: Use /api/chat/stream for better user experience with faster perceived response times, especially for longer responses.
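Consuming the stream means reading Server-Sent Events `data:` lines and decoding each JSON chunk. A minimal parser sketch; the `rate_limited` field matches the error example above, while the `content` field name is an assumption:

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON objects from SSE 'data:' lines,
    skipping blank keep-alive lines."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

raw = [
    'data: {"content": "Ah, what a wonderful question!"}',
    '',
    'data: {"error": "The spirits are overwhelmed with visitors.", "rate_limited": true}',
]
events = list(parse_sse_lines(raw))
print(events[1]["rate_limited"])  # True
```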