Overview
OpenTogetherTube uses a hybrid architecture combining TypeScript/JavaScript for the web application with Rust for high-performance load balancing. The system is designed for horizontal scalability and real-time synchronization.

Major Components
There are three major components:

- The Monolith - Node.js/Express server written in TypeScript
- The Client - Vue 3 web application written in TypeScript
- The Balancer - Rust-based load balancer (not currently deployed in production)
Monorepo Structure
The project uses a monorepo architecture managed with Yarn workspaces and Cargo.

Room Management Architecture
To support horizontal scaling, OpenTogetherTube separates client connection management from room state management.

Component Interaction
Key Components
Client Manager
- Manages all WebSocket connections from clients
- Relays messages between clients and rooms
- Broadcasts room events to subscribed clients

Room Manager
- Manages room lifecycle and state
- Coordinates with the Balancer to prevent duplicate room instances across different Monoliths
- Ensures room state consistency

Room
- Individual room instance that maintains its own state
- Emits events for state changes (playback, queue updates, etc.)
- Handles room-specific business logic
Client-Room Communication Flow
- Client connects via WebSocket to the Client Manager
- Client requests to join a room
- Client Manager subscribes the client to room events
- Room emits state change events
- Client Manager receives events and broadcasts to subscribed clients
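The subscribe-and-broadcast relationship in the flow above can be sketched as a small event bus. This is an illustrative sketch only: the class and method names (`RoomEventBus`, `subscribe`, `publish`) are assumptions, not the real Client Manager API.

```typescript
// Hypothetical sketch of the Client Manager's subscription relay.
type RoomEvent = { room: string; type: string; payload?: unknown };

class RoomEventBus {
  private subscribers = new Map<string, Set<(e: RoomEvent) => void>>();

  // The Client Manager subscribes a client's handler to a room's events.
  // Returns an unsubscribe function for use on disconnect.
  subscribe(room: string, onEvent: (e: RoomEvent) => void): () => void {
    const subs = this.subscribers.get(room) ?? new Set<(e: RoomEvent) => void>();
    this.subscribers.set(room, subs);
    subs.add(onEvent);
    return () => subs.delete(onEvent);
  }

  // A Room emits a state change; the bus fans it out to subscribed clients only.
  publish(event: RoomEvent): void {
    for (const onEvent of this.subscribers.get(event.room) ?? []) {
      onEvent(event);
    }
  }
}
```

The key property is that rooms never hold client connections directly; they only emit events, and the relay decides who hears them.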
Info Extractor Pipeline
The Info Extractor is responsible for extracting and caching media metadata from various video services.

High-Level Pipeline
Service Adapters
Each video service has its own adapter that inherits from the `ServiceAdapter` base class. Adapters are located in `server/services/`.
Supported Services:
- YouTube (and Invidious)
- Vimeo
- Google Drive
- PeerTube
- Odysee
- PlutoTV
- Tubi
- Direct video URLs
- HLS/DASH streams
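The adapter pattern described above can be sketched as follows. The method names (`canHandleURL`, `fetchVideoInfo`) and the `VimeoAdapter` body are illustrative assumptions, not the real `ServiceAdapter` interface.

```typescript
// Minimal sketch of the service adapter pattern, under assumed method names.
interface Video {
  service: string;
  id: string;
  title?: string;
  duration?: number;
}

abstract class ServiceAdapter {
  abstract get serviceId(): string;
  // Whether this adapter recognizes the given URL.
  abstract canHandleURL(url: string): boolean;
  // Fetch metadata for a video on this service.
  abstract fetchVideoInfo(videoId: string): Promise<Video>;
}

class VimeoAdapter extends ServiceAdapter {
  get serviceId() {
    return "vimeo";
  }

  canHandleURL(url: string): boolean {
    return /https?:\/\/vimeo\.com\/\d+/.test(url);
  }

  async fetchVideoInfo(videoId: string): Promise<Video> {
    // A real adapter would call the service's API here.
    return { service: this.serviceId, id: videoId };
  }
}

// Dispatch: the pipeline picks the first adapter that claims the URL.
function resolveAdapter(adapters: ServiceAdapter[], url: string): ServiceAdapter | undefined {
  return adapters.find((a) => a.canHandleURL(url));
}
```

Adding a new service then means adding one subclass, with no changes to the rest of the pipeline.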
Metadata Collection
The pipeline collects comprehensive metadata including:

- Title
- Description
- Thumbnail
- Duration
- Service-specific metadata
Caching Strategy
Video Metadata Caching:
- Stored in the `CachedVideos` database table
- Cache lifetime: 30 days
- Automatically refreshed when stale

Search Result Caching:
- Stored in Redis with the query as the key
- Cache lifetime: 24 hours
- Not cached for playlists
- Not cached to prevent storage bloat
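The staleness rule behind these lifetimes can be expressed in a few lines. The TTL constants come from the text above; the function name `isStale` is illustrative, not the real implementation.

```typescript
// Sketch of the TTL check: an entry older than its lifetime is considered
// stale and is refreshed on the next request.
const VIDEO_TTL_MS = 30 * 24 * 60 * 60 * 1000; // 30 days for CachedVideos rows
const SEARCH_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours for Redis search keys

function isStale(updatedAt: Date, ttlMs: number, now = new Date()): boolean {
  return now.getTime() - updatedAt.getTime() > ttlMs;
}
```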
Cache Invalidation
When cache entries expire, the pipeline automatically refreshes the metadata on the next request, ensuring users always receive up-to-date information.

Load Balancer (Rust)
The load balancer is a high-performance Rust application designed to route WebSocket connections and HTTP requests across multiple Monolith instances.

Architecture
Components:
- `ott-balancer` - Core balancer library implementing routing logic
- `ott-balancer-bin` - Executable binary for deployment
- `ott-balancer-protocol` - Shared protocol definitions between Balancer and Monoliths
- `ott-collector` - Metrics collection service for monitoring
- `ott-common` - Shared Rust utilities and types
- `harness` - Integration test harness for load balancer testing
Key Features
Room Affinity:
- Ensures all connections to the same room route to the same Monolith
- Prevents duplicate room state across servers
- Uses consistent hashing for room assignment

WebSocket Proxying:
- Low-latency WebSocket connection forwarding
- Connection state tracking
- Automatic reconnection handling

HTTP Routing:
- Intelligent routing based on request type
- Support for health checks and metrics endpoints
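The room-affinity idea can be sketched with a deterministic hash. This is a simplified illustration (plain modulo assignment rather than a full consistent-hashing ring, and in TypeScript rather than the balancer's Rust); the function names are assumptions.

```typescript
// Sketch of hash-based room affinity: given the same Monolith list, every
// balancer instance maps a room name to the same Monolith.
function hashRoom(room: string): number {
  // FNV-1a 32-bit hash: a simple, deterministic string hash.
  let h = 0x811c9dc5;
  for (let i = 0; i < room.length; i++) {
    h ^= room.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

function pickMonolith(room: string, monoliths: string[]): string {
  return monoliths[hashRoom(room) % monoliths.length];
}
```

A real consistent-hashing scheme additionally minimizes reassignments when a Monolith joins or leaves, which plain modulo does not.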
The load balancer is not currently deployed in production but is fully functional for local development and testing.
Technology Stack
Frontend (Client)
- Framework: Vue 3 with Composition API
- Build Tool: Vite
- UI Library: Vuetify 3
- State Management: Vuex 4
- Video Players: Plyr, HLS.js, dash.js
- Language: TypeScript
Backend (Server)
- Runtime: Node.js 20-22
- Framework: Express.js
- WebSocket: ws library
- Database ORM: Sequelize
- Caching: Redis
- Authentication: Passport.js
- Validation: Zod
- Language: TypeScript
Load Balancer
- Language: Rust (2021 edition)
- HTTP: Hyper + Tokio
- WebSocket: Tokio-Tungstenite
- Configuration: Figment (TOML)
- Metrics: Prometheus
- Error Handling: anyhow + thiserror
Monitoring
- Metrics: Prometheus
- Visualization: Grafana with custom plugins
- Plugins: React + D3.js
Data Flow
Real-Time Synchronization
- User performs action (play, pause, seek)
- Client sends WebSocket message to Monolith
- Monolith validates and processes the action
- Room state updates
- Room emits event to Client Manager
- Client Manager broadcasts to all connected clients in the room
- Clients update their local state and video players
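The validate-and-apply step in the flow above can be sketched as a small reducer over a discriminated union of actions. The message shapes here are illustrative assumptions, not the real OpenTogetherTube protocol.

```typescript
// Sketch of server-side handling of playback actions.
type SyncMessage =
  | { action: "play" }
  | { action: "pause" }
  | { action: "seek"; position: number };

interface PlaybackState {
  isPlaying: boolean;
  position: number;
}

// Validate the action and return the updated room state.
function applySync(state: PlaybackState, msg: SyncMessage): PlaybackState {
  switch (msg.action) {
    case "play":
      return { ...state, isPlaying: true };
    case "pause":
      return { ...state, isPlaying: false };
    case "seek":
      if (msg.position < 0) throw new Error("invalid seek position");
      return { ...state, position: msg.position };
  }
}
```

Because the update is a pure state transition, the resulting state can be broadcast verbatim to every subscribed client.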
Video Addition Flow
- User submits a video URL or search query
- Request sent to Info Extractor
- Info Extractor checks cache (Redis/Database)
- If cache miss, appropriate Service Adapter is invoked
- Metadata retrieved and cached
- Video added to room queue
- Queue update event broadcast to all clients
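The cache-then-adapter lookup in steps 3-5 above follows a standard read-through pattern. The interfaces below are placeholders for illustration, not the real Info Extractor API.

```typescript
// Sketch of a read-through metadata lookup: check the cache first, and only
// call the Service Adapter (external API) on a miss.
interface MetadataCache {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

async function getVideoTitle(
  cache: MetadataCache,
  key: string,
  fetchFromService: () => Promise<string>, // Service Adapter call on cache miss
): Promise<string> {
  const cached = await cache.get(key);
  if (cached !== undefined) {
    return cached; // cache hit: no external API call
  }
  const title = await fetchFromService();
  await cache.set(key, title); // populate the cache for later requests
  return title;
}
```

Repeated requests for the same video therefore cost one external API call at most per cache lifetime.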
Security Considerations
- Authentication: Session-based with Redis storage
- API Rate Limiting: Implemented using rate-limiter-flexible
- Input Validation: Zod schemas for runtime validation
- CORS: Configured for allowed origins
- Password Hashing: Argon2 algorithm
Scalability
Horizontal Scaling
- Multiple Monolith instances behind load balancer
- Room affinity ensures consistent state
- Redis session sharing across instances
Database
- PostgreSQL for production (supports concurrent connections)
- SQLite for development and testing
- Sequelize migrations for schema management
Caching Strategy
- Redis for session storage and search result caching
- Database for persistent video metadata caching
- Two-tier cache minimizes external API calls
Future Architecture Improvements
- Deploy Rust load balancer to production
- Implement distributed room state using Redis pub/sub
- Microservices architecture for info extraction
- GraphQL API for more efficient client-server communication