System Architecture
Echoes of the Past is built on a modern, serverless architecture that enables real-time, AI-powered voice conversations with historical figures. The platform combines Next.js for the frontend, Supabase for backend services, and multiple AI providers for voice and intelligence capabilities.

Tech Stack
Frontend Framework
- Next.js 15: Server-side rendering, API routes, and static generation with React 19. Uses Turbopack for fast development builds.
- TanStack Query: Client-side state management and caching of server data, with real-time updates.
- Smooth UI animations and transitions for an enhanced user experience.
Backend Infrastructure
- Supabase: PostgreSQL database, authentication, and real-time subscriptions via @supabase/ssr and @supabase/supabase-js.
- Serverless Redis: Rate limiting for feedback generation (10 requests per day per user).
- Server Actions: Server-side business logic using Next.js 15 server actions with 'use server' directives.
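As a minimal sketch of the server-action pattern, assuming illustrative names (the function, types, and table are not from the codebase):

```typescript
'use server';

// Minimal sketch of a Next.js 15 server action. The helper below is pure;
// the Supabase write is shown only as a hedged comment, since the real
// client setup lives elsewhere in the app.

type TranscriptMessage = { role: 'user' | 'assistant'; text: string };

// Pure helper: flatten the tracked transcript into one string for storage.
export function serializeTranscript(messages: TranscriptMessage[]): string {
  return messages.map((m) => `${m.role}: ${m.text}`).join('\n');
}

export async function saveTranscript(messages: TranscriptMessage[]) {
  const transcript = serializeTranscript(messages);
  // const supabase = await createSupabaseServerClient(); // assumed helper
  // await supabase.from('transcripts').insert({ transcript });
  return transcript;
}
```

Because the file carries the 'use server' directive, these exports are only callable from the server.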
AI & Voice Services
- Vapi: Real-time voice interaction orchestration via the @vapi-ai/web SDK. Handles speech-to-text, text-to-speech, and conversation flow.
- ElevenLabs: High-quality AI voice generation using @elevenlabs/elevenlabs-js for historical-figure voice cloning.
- OpenAI:
  - GPT-3.5 Turbo for real-time conversation (via Vapi)
  - GPT-4 Turbo for structured feedback generation with JSON mode
Architecture Diagram

The system follows a three-tier architecture.

Data Flow
Voice Conversation Flow
- Initiation: User selects a historical figure and starts a conversation
- Assistant Creation: Frontend creates a Vapi assistant with:
  - Character-specific system prompt from lib/prompt.ts
  - ElevenLabs voice ID from the character's voiceId field
  - OpenAI GPT-3.5 Turbo for language processing
- Real-time Communication:
  - User speech → Vapi transcription → OpenAI processing → Response generation
  - AI response → ElevenLabs synthesis → Audio playback
- Message Tracking: All transcript messages stored in client state for feedback generation
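The assistant-creation step above can be sketched as a plain config object. The field names follow the general shape of Vapi's assistant options, but the exact types are assumptions — check the @vapi-ai/web SDK for the real shapes:

```typescript
// Sketch of the assistant configuration built at call start. Character data
// (systemPrompt from lib/prompt.ts, the voiceId field) comes from the app;
// the rest mirrors the flow described above.

type Character = { name: string; systemPrompt: string; voiceId: string };

export function buildAssistantConfig(character: Character) {
  return {
    model: {
      provider: 'openai',
      model: 'gpt-3.5-turbo', // real-time conversation model
      messages: [{ role: 'system', content: character.systemPrompt }],
    },
    voice: {
      provider: '11labs',         // ElevenLabs synthesis
      voiceId: character.voiceId, // cloned historical-figure voice
    },
  };
}

// Assumed usage via @vapi-ai/web: vapi.start(buildAssistantConfig(character))
```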
Quiz Flow
- Quiz Generation: Server action creates quiz record in Supabase
- Question Delivery: Vapi assistant asks questions from
quizQuestionstable - Scoring: Assistant tracks correct/incorrect answers in real-time
- Results: Final score displayed with feedback summary
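The scoring step can be illustrated with a small pure function (types and names here are illustrative, not from the codebase):

```typescript
// Tally tracked answers into the final score shown at the end of a quiz.

type QuizAnswer = { questionId: string; correct: boolean };

export function scoreQuiz(answers: QuizAnswer[]) {
  const correct = answers.filter((a) => a.correct).length;
  const total = answers.length;
  return {
    correct,
    total,
    percent: total === 0 ? 0 : Math.round((correct / total) * 100),
  };
}
```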
Feedback Generation Flow
- Transcript Collection: Conversation messages accumulated during call
- Rate Limit Check: Redis verifies that the user hasn't exceeded 10 feedback requests per day
- AI Analysis: OpenAI GPT-4 Turbo analyzes transcript with structured prompt
- Structured Output: JSON response parsed with Zod schema validation
- Storage: Feedback saved to the Supabase feedbacks table with category scores
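The rate-limit check (step 2) amounts to a per-user daily counter. In production it lives in serverless Redis (e.g. an INCR with a daily expiry); this in-memory sketch, with illustrative names, just shows the counting logic:

```typescript
// In-memory stand-in for the Redis daily counter: allow up to 10 feedback
// requests per user per calendar day, then reject until the day rolls over.

const DAILY_LIMIT = 10;
const counters = new Map<string, { day: string; count: number }>();

export function checkRateLimit(userId: string, now = new Date()): boolean {
  const day = now.toISOString().slice(0, 10); // e.g. "2024-05-01"
  const entry = counters.get(userId);
  if (!entry || entry.day !== day) {
    counters.set(userId, { day, count: 1 }); // first request today
    return true;
  }
  if (entry.count >= DAILY_LIMIT) return false; // over the 10/day cap
  entry.count += 1;
  return true;
}
```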
Authentication & Authorization
- Provider: Supabase Auth with cookie-based sessions
- Client: Browser client created with createBrowserClient from @supabase/ssr
- Server: Server client created with createServerClient using Next.js cookies
- Row Level Security: Database policies enforce user access control
Environment Configuration
Required environment variables:

Type Safety
- Database Types: Auto-generated TypeScript types from the Supabase schema in database.types.ts
- API Types: Vapi SDK types for assistant configuration and message handling
- Validation: Zod schemas for runtime validation of AI-generated feedback
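The runtime-validation step can be sketched as a type guard. The real code uses a Zod schema; the feedback fields below are illustrative assumptions, but the idea is the same — parse the model's JSON output and reject anything malformed:

```typescript
// Parse and validate AI-generated feedback before storing it. Returns null
// for invalid JSON or a payload that doesn't match the expected shape.

type Feedback = { summary: string; scores: Record<string, number> };

export function parseFeedback(raw: string): Feedback | null {
  try {
    const data = JSON.parse(raw);
    if (typeof data?.summary !== 'string') return null;
    if (typeof data?.scores !== 'object' || data.scores === null) return null;
    for (const v of Object.values(data.scores)) {
      if (typeof v !== 'number') return null; // category scores must be numeric
    }
    return data as Feedback;
  } catch {
    return null; // model returned invalid JSON
  }
}
```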
Performance Optimizations
- Server-Side Rendering: Initial page load with pre-rendered content
- Data Caching: TanStack Query caches frequently accessed data
- Redis Caching: Rate limit counters cached in serverless Redis
- WebRTC: Direct peer-to-peer voice communication via Vapi
- Turbopack: Fast development builds with incremental compilation
Scalability Considerations
- Serverless Architecture: Auto-scaling Next.js API routes on Vercel
- Database Connection Pooling: Supabase handles PostgreSQL connections
- Edge Functions: Potential for edge deployment of API routes
- Rate Limiting: Prevents abuse of expensive AI operations
Related Documentation
- Database Schema - Detailed table structures and relationships
- AI Integration - AI provider implementations and prompt engineering