Introduction
The LangShazam API provides real-time spoken language detection through a WebSocket-based service. Send audio data and receive language identification results powered by OpenAI's Whisper model.

Base URL
The production API is deployed on AWS Kubernetes:

The frontend automatically connects to this endpoint via the ServerDiscovery service. See Server Configuration for alternative deployment endpoints.

Authentication
No authentication is required for the WebSocket endpoints. The service uses an internal OpenAI API key for processing audio data.

Core Endpoints
WebSocket Connection
Real-time audio streaming endpoint at /ws

Metrics
Server health and performance metrics at /metrics

Quick Start
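A minimal client sketch for the /ws endpoint. Assumptions: the host is a placeholder (substitute the production base URL), audio is sent as raw bytes, the reply is the standardized JSON described under Response Format, and the third-party websockets package is used as the client library.

```python
# Quick-start sketch -- host, message framing, and the websockets
# client library are assumptions, not confirmed by this reference.
import json

WS_PATH = "/ws"  # documented WebSocket endpoint


def ws_url(host: str, secure: bool = True) -> str:
    """Build the full WebSocket URL for a given host."""
    scheme = "wss" if secure else "ws"
    return f"{scheme}://{host}{WS_PATH}"


async def detect_language(host: str, audio_chunk: bytes) -> dict:
    """Send one audio chunk and return the parsed JSON reply."""
    import websockets  # third-party client: pip install websockets

    async with websockets.connect(ws_url(host)) as ws:
        await ws.send(audio_chunk)
        reply = await ws.recv()
        return json.loads(reply)
```

No authentication header is needed on the connection, per the Authentication section above.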
Response Format
All responses follow a standardized JSON format:

- Response status: success or error
- Result data when status is success
- Error message when status is error
- ISO 8601 timestamp of the response
- Unique identifier for the connection (8-character UUID)
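For illustration, a success response under this scheme might look like the following. The field names (status, data, error, timestamp, connection_id) are assumptions inferred from the descriptions above, not confirmed by the source.

```json
{
  "status": "success",
  "data": { "language": "en" },
  "error": null,
  "timestamp": "2024-01-15T12:34:56Z",
  "connection_id": "a1b2c3d4"
}
```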
Rate Limits
The service supports a maximum of 3 concurrent OpenAI API calls. Additional requests are queued automatically.

CORS Configuration
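The queueing behavior can be sketched with a semaphore: at most 3 tasks hold a slot at once, and extra requests simply wait in line. The call_openai function is a hypothetical stand-in; the service's real internals are not part of this reference.

```python
# Hedged sketch of the documented limit: a semaphore caps concurrent
# "OpenAI calls" at 3; additional requests queue automatically.
import asyncio

MAX_CONCURRENT_CALLS = 3


async def call_openai(sem: asyncio.Semaphore, request_id: int) -> str:
    async with sem:                # at most 3 tasks hold a slot at once
        await asyncio.sleep(0.01)  # placeholder for the real API call
        return f"result-{request_id}"


async def main() -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT_CALLS)
    # 10 requests arrive together; 3 run immediately, the rest queue.
    return await asyncio.gather(*(call_openai(sem, i) for i in range(10)))


results = asyncio.run(main())
```

gather preserves submission order, so callers see results in the order requests were made even though execution is throttled.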
The following origins are allowed:

- https://www.langshazam.com
- https://langshazam.com
- http://localhost:3000
- http://localhost:5173
- http://127.0.0.1:3000
- http://127.0.0.1:5173
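An exact-match allow-list check mirroring the origins above can be sketched as follows; the function name is illustrative, not part of the service's actual code.

```python
# Allow-list origin check -- a sketch of the documented CORS policy,
# not the service's real implementation.
ALLOWED_ORIGINS = {
    "https://www.langshazam.com",
    "https://langshazam.com",
    "http://localhost:3000",
    "http://localhost:5173",
    "http://127.0.0.1:3000",
    "http://127.0.0.1:5173",
}


def is_allowed_origin(origin: str) -> bool:
    """Exact-match check against the documented allow-list."""
    return origin in ALLOWED_ORIGINS
```

Note the matches are exact: scheme and port both count, so http://localhost:8080 would be rejected.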

