
Overview

The LangShazam frontend is a React single-page application (SPA) that provides a real-time language detection interface. It leverages modern browser APIs for audio capture and WebSocket communication, delivering a responsive and intuitive user experience.

Application Structure

Main Application Component

Location: frontend/language-detector-ui/src/App.js
The root App component orchestrates the entire application, managing state, audio capture, WebSocket communication, and routing.

Component Architecture

The application follows a single-component architecture with modular sub-components for specific UI features.

Component Hierarchy

Core Components

App

Location: App.js
Lines of Code: 311
The primary container that manages:
  • Application state (listening status, language results, errors)
  • Audio recording lifecycle
  • WebSocket connection management
  • Routing between pages
  • Server discovery integration
Reference: App.js:1-311
WaveAnimation

Location: components/WaveAnimation.js
Purpose: Animated wave visualization during recording
const WaveAnimation = ({ isRecording }) => {
  if (!isRecording) return null;
  
  return (
    <div className="wave-animation">
      <div className="wave"></div>
      <div className="wave"></div>
      <div className="wave"></div>
      <div className="wave"></div>
      <div className="wave"></div>
    </div>
  );
};
Reference: WaveAnimation.js:1-15
Toast

Location: components/Toast.js
Purpose: Displays temporary success/error messages
Features:
  • Auto-dismissal after 3 seconds
  • Success and error variants
  • Smooth show/hide animations
const Toast = ({ message, type = 'success', show, onHide }) => {
  useEffect(() => {
    if (show) {
      const timer = setTimeout(() => {
        onHide();
      }, 3000);
      return () => clearTimeout(timer);
    }
  }, [show, onHide]);
  // ...
};
Reference: Toast.js:1-23
MicrophoneLevel

Location: components/MicrophoneLevel.js
Purpose: Real-time visual representation of microphone input level
const MicrophoneLevel = ({ level = 0 }) => {
  return (
    <div className="mic-level">
      <div 
        className="mic-level-fill" 
        style={{ transform: `scaleX(${Math.min(level, 1)})` }}
      />
    </div>
  );
};
Reference: MicrophoneLevel.js:1-12
SupportedLanguages

Location: components/SupportedLanguages.js
Purpose: Expandable panel listing the 57 supported languages with search
Features:
  • 57 supported languages with flag icons
  • Real-time search filtering
  • Grid layout with flags and names
  • Toggle expand/collapse
Reference: SupportedLanguages.js:1-110

State Management

State Architecture

LangShazam uses React hooks for state management without external libraries like Redux. All state is managed within the App component.

State Management Pattern

Approach: Local component state with React hooks (useState, useEffect)
Rationale: Simple, lightweight, sufficient for single-component architecture

State Variables

const [isListening, setIsListening] = useState(false);
const [language, setLanguage] = useState('');
const [error, setError] = useState('');
const [micLevel, setMicLevel] = useState(0);
const [toast, setToast] = useState({ show: false, message: '', type: 'success' });
const [audioBuffer, setAudioBuffer] = useState([]);
const [mediaRecorder, setMediaRecorder] = useState(null);
const [wsConnection, setWsConnection] = useState(null);
const [isRequestingPermission, setIsRequestingPermission] = useState(false);
const [serverUrl, setServerUrl] = useState(null);
const [isConnected, setIsConnected] = useState(false);
const [webSocket, setWebSocket] = useState(null);
Reference: App.js:18-32
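With twelve separate useState calls in one component, related flags could alternatively be grouped with useReducer. The sketch below (not in the source; `connectionReducer` and the action names are hypothetical) shows the idea for the connection-related state, written as a plain reducer so it stays framework-independent:

```javascript
// Hypothetical reducer grouping the connection-related state variables.
// The actual App.js keeps these as separate useState calls.
const initialConnectionState = { serverUrl: null, isConnected: false };

function connectionReducer(state, action) {
  switch (action.type) {
    case 'server_discovered':
      return { ...state, serverUrl: action.url };
    case 'ws_open':
      return { ...state, isConnected: true };
    case 'ws_closed':
    case 'ws_error':
      return { ...state, isConnected: false };
    default:
      return state;
  }
}
```

In a component this would be wired up as `useReducer(connectionReducer, initialConnectionState)`.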

State Categories

Recording State

  • isListening - Recording active status
  • micLevel - Current microphone input level
  • isRequestingPermission - Mic permission request status

Result State

  • language - Detected language result
  • error - Error messages
  • toast - Notification state

Connection State

  • serverUrl - Discovered WebSocket URL
  • isConnected - WebSocket connection status
  • wsConnection / webSocket - WebSocket instances

Audio State

  • audioBuffer - Buffered audio chunks
  • mediaRecorder - MediaRecorder instance

Audio Capture Implementation

MediaRecorder API Integration

Step 1: Request Microphone Permission

const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
Reference: App.js:77
Step 2: Initialize Audio Context

const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(stream);
const analyser = audioContext.createAnalyser();
source.connect(analyser);
Creates Web Audio API nodes for real-time audio level monitoring.
Reference: App.js:81-84
Step 3: Monitor Audio Levels

analyser.fftSize = 256;
const dataArray = new Uint8Array(analyser.frequencyBinCount);

const updateLevel = () => {
  if (!isListening) return;
  analyser.getByteFrequencyData(dataArray);
  const average = dataArray.reduce((a, b) => a + b) / dataArray.length;
  setMicLevel(average / 128);
  requestAnimationFrame(updateLevel);
};
Uses requestAnimationFrame for smooth visual updates.
Reference: App.js:87-96
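The level calculation in the snippet above can be factored into a pure helper. `computeMicLevel` is a hypothetical name for illustration, with clamping added (the source clamps later, in the MicrophoneLevel component):

```javascript
// Hypothetical pure helper mirroring the level calculation above.
// Byte frequency data values range 0-255; an average of 128 maps to full scale.
function computeMicLevel(frequencyData) {
  if (frequencyData.length === 0) return 0;
  const sum = frequencyData.reduce((a, b) => a + b, 0);
  const average = sum / frequencyData.length;
  return Math.min(average / 128, 1); // clamp to [0, 1]
}
```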
Step 4: Create MediaRecorder

const recorder = new MediaRecorder(stream, {
  mimeType: 'audio/mp4',
  audioBitsPerSecond: 16000
});
Uses MP4 format for wide compatibility, including iOS.
Reference: App.js:158-161
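App.js hard-codes 'audio/mp4'; a more defensive approach (not in the source) would probe MediaRecorder.isTypeSupported and fall back through a preference list. Sketched here with the support check injected as a predicate so the selection logic stays testable outside the browser; `pickMimeType` is a hypothetical helper:

```javascript
// Hypothetical fallback picker; the real App.js hard-codes 'audio/mp4'.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return ''; // empty string lets the browser choose its default
}

const preferred = ['audio/mp4', 'audio/webm;codecs=opus', 'audio/webm'];
// In the browser: pickMimeType(preferred, (t) => MediaRecorder.isTypeSupported(t))
```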
Step 5: Start Recording

recorder.start(4000); // Collect 4 seconds before sending
Reference: App.js:189

Audio Configuration

const CHUNK_SIZE = 128 * 1024; // 128KB chunks
const MIN_AUDIO_LENGTH = 4000; // 4 seconds minimum
const MAX_AUDIO_LENGTH = 15000; // 15 seconds maximum
Reference: App.js:24-26
The frontend enforces a 15-second maximum recording time to prevent excessive audio data and ensure timely results.
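The bounds above translate into a simple validity check. `isDurationAcceptable` is a hypothetical helper illustrating how the two constants interact; only the constants themselves come from App.js:

```javascript
const MIN_AUDIO_LENGTH = 4000;  // 4 seconds minimum
const MAX_AUDIO_LENGTH = 15000; // 15 seconds maximum

// Hypothetical helper: is a recording of `ms` milliseconds within bounds?
function isDurationAcceptable(ms) {
  return ms >= MIN_AUDIO_LENGTH && ms <= MAX_AUDIO_LENGTH;
}
```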

WebSocket Client Implementation

Connection Lifecycle

Server Discovery

Before establishing the WebSocket connection, the app discovers the server URL using the ServerDiscovery service (App.js:34-42):
useEffect(() => {
  const discoverServer = async () => {
    const url = await ServerDiscovery.discoverServer();
    console.log("🔍 Discovered server:", url);
    setServerUrl(url);
  };
  
  discoverServer();
}, []);

WebSocket Connection

const ws = new WebSocket(serverUrl);

ws.onopen = () => {
  console.log("WebSocket connection established successfully");
  setIsConnected(true);
};
Reference: App.js:111-116
recorder.ondataavailable = async (event) => {
  if (event.data.size > 0) {
    console.log("Received audio chunk of size:", event.data.size);
    ws.send(event.data);
  }
};
Audio chunks are sent as binary blobs directly over the WebSocket.
Reference: App.js:167-172
ws.onmessage = (event) => {
  const response = JSON.parse(event.data);
  if (response.status === 'success') {
    setLanguage(response.data.language);
    showToast(`Language detected: ${response.data.language}`, 'success');
  } else {
    setError(response.message);
    showToast(response.message, 'error');
  }
  setIsListening(false);
  stream.getTracks().forEach(track => track.stop());
  ws.close();
};
Reference: App.js:174-187
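The onmessage handler mixes response parsing with UI updates and teardown. The parsing half can be expressed as a pure function; `parseDetectionMessage` is a hypothetical helper based only on the response shape visible in the handler above:

```javascript
// Hypothetical pure parser for the server's JSON responses.
// Response shape inferred from the onmessage handler: { status, data: { language } } or { status, message }.
function parseDetectionMessage(raw) {
  const response = JSON.parse(raw);
  if (response.status === 'success') {
    return { ok: true, language: response.data.language };
  }
  return { ok: false, message: response.message };
}
```

Separating parsing from side effects would also make the handler's success/error branches unit-testable.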
ws.onerror = (event) => {
  console.error('WebSocket error details:', {
    readyState: ws.readyState,
    url: ws.url,
    timestamp: new Date().toISOString(),
    protocol: window.location.protocol,
    hostname: window.location.hostname,
    // ... extensive error details
  });
  setIsConnected(false);
  setError('Connection error occurred');
  showToast('Connection error occurred', 'error');
};
Comprehensive error logging for debugging connection issues.
Reference: App.js:131-153
ws.onclose = (event) => {
  console.log("WebSocket connection closed:", {
    code: event.code,
    reason: event.reason,
    wasClean: event.wasClean,
    timestamp: new Date().toISOString(),
  });
  setIsConnected(false);
};
Reference: App.js:118-129
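The app currently does not reconnect after a close; each detection opens a fresh connection. If automatic reconnection were added, a capped exponential backoff is a common pattern. A minimal sketch (`backoffDelay` is not in the source):

```javascript
// Hypothetical capped exponential backoff for reconnection attempts:
// 500ms, 1s, 2s, 4s, 8s, then flat at 8s.
function backoffDelay(attempt, baseMs = 500, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```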

UI/UX Patterns

User Interaction Flow

Visual Feedback Components

Wave Animation

Five animated wave elements displayed during recording.
Trigger: isListening === true

Microphone Level

Real-time visual bar showing audio input level.
Update Rate: up to 60fps via requestAnimationFrame

Toast Notifications

Auto-dismissing notifications for success/error.
Duration: 3 seconds

Status Indicator

Text prompts guiding the user through the recording process.
States: Recording, Processing, Result, Error

Button States

<button 
  onClick={startListening} 
  disabled={isListening || isRequestingPermission}
  className="start-button"
>
  {isListening ? (
    <span className="listening-animation"></span>
  ) : isRequestingPermission ? (
    <>⏳ Requesting microphone access...</>
  ) : (
    <>🎙️ Start Detection</>
  )}
</button>
Reference: App.js:221-236

Routing Architecture

React Router Integration

Location: App.js:297-307
<Router>
  <Routes>
    <Route path="/" element={<MainContent />} />
    <Route path="/privacy" element={<Privacy />} />
    <Route path="/terms" element={<Terms />} />
    <Route path="/about" element={<About />} />
    <Route path="/contact" element={<Contact />} />
  </Routes>
</Router>

Navigation Pattern

The footer contains links to legal and informational pages:
<footer className="footer">
  <div className="footer-links">
    <Link to="/privacy">Privacy Policy</Link>
    <Link to="/terms">Terms of Service</Link>
    <Link to="/about">About</Link>
    <Link to="/contact">Contact</Link>
  </div>
</footer>
Reference: App.js:281-293

Service Layer

ServerDiscovery Service

Location: services/ServerDiscovery.js

Purpose

Abstracts server endpoint discovery, allowing for dynamic backend URL configuration.
const AWS_ENDPOINT = "wss://3.149.10.154.nip.io/ws";

class ServerDiscovery {
  static async discoverServer() {
    console.log('Using AWS Kubernetes server:', AWS_ENDPOINT);
    return AWS_ENDPOINT;
  }
}
Reference: ServerDiscovery.js:1-11
The service currently returns a static AWS Kubernetes endpoint but provides a foundation for implementing dynamic service discovery in the future.
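One low-effort step toward dynamic discovery is reading the endpoint from a build-time environment variable and falling back to the static one. `resolveServerUrl` and `REACT_APP_WS_URL` below are hypothetical names, not part of the current service:

```javascript
const AWS_ENDPOINT = "wss://3.149.10.154.nip.io/ws";

// Hypothetical extension: prefer an env-provided URL, fall back to the static endpoint.
function resolveServerUrl(env) {
  return (env && env.REACT_APP_WS_URL) || AWS_ENDPOINT;
}

// In a CRA build this would be called as: resolveServerUrl(process.env)
```

Because Create React App inlines REACT_APP_* variables at build time, this keeps deployments configurable without code changes.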

Event Handling

Key Event Handlers

startListening

Purpose: Initiates audio recording and the WebSocket connection
Flow:
  1. Validate server URL
  2. Request microphone permission
  3. Initialize audio context and analyser
  4. Establish WebSocket connection
  5. Create and start MediaRecorder
  6. Set up event handlers
Reference: App.js:68-211
stopListening

Purpose: Stops recording and sends the final audio data
const stopListening = () => {
  if (mediaRecorder && mediaRecorder.state === 'recording') {
    mediaRecorder.stop();
    if (audioBuffer.length > 0) {
      const audioBlob = new Blob(audioBuffer, { type: 'audio/webm' });
      wsConnection.send(audioBlob);
    }
    setIsListening(false);
    mediaRecorder.stream.getTracks().forEach(track => track.stop());
    wsConnection.close();
  }
};
Reference: App.js:52-66
showToast

Purpose: Displays a notification to the user
const showToast = (message, type = 'success') => {
  setToast({ show: true, message, type });
};
Reference: App.js:44-46

Browser API Usage

MediaDevices API

navigator.mediaDevices.getUserMedia()
Requests access to the user's microphone

MediaRecorder API

new MediaRecorder(stream, options)
Records the audio stream in MP4 format

Web Audio API

AudioContext, AnalyserNode
Real-time audio analysis and visualization

WebSocket API

new WebSocket(url)
Bidirectional real-time communication

Performance Optimizations

Optimization Strategies

The frontend implements several performance optimizations:
const updateLevel = () => {
  if (!isListening) return;
  analyser.getByteFrequencyData(dataArray);
  const average = dataArray.reduce((a, b) => a + b) / dataArray.length;
  setMicLevel(average / 128);
  requestAnimationFrame(updateLevel);
};
Uses the browser's animation frame for smooth updates (typically 60fps).
Reference: App.js:90-96
recorder.start(4000); // Collect 4 seconds before sending
Reduces network overhead by batching audio data.
Reference: App.js:189
Components only render when needed:
{isListening && (
  <>
    <WaveAnimation isRecording={isListening} />
    <MicrophoneLevel level={micLevel} />
  </>
)}
Minimizes unnecessary DOM updates.
return () => {
  clearTimeout(recordingTimeout);
  stopListening();
};
Prevents memory leaks by cleaning up timers and resources on unmount.
Reference: App.js:199-202

Dependencies

From package.json:5-14:
  • react (18.2.0): Core UI framework
  • react-dom (18.2.0): DOM rendering
  • react-router-dom (6.22.3): Client-side routing
  • country-flag-icons (1.5.18): Flag icons for languages
  • react-scripts (5.0.1): Build tooling (CRA)
  • web-vitals (2.1.4): Performance monitoring

Error Handling Strategy

1. Permission Errors

catch (err) {
  console.error('Error:', err);
  setError(err.message);
  showToast(err.message, 'error');
  setIsListening(false);
  setIsRequestingPermission(false);
}
Reference: App.js:204-210
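The catch block surfaces err.message directly; getUserMedia rejections carry a DOMException name that could map to friendlier copy. `describeMediaError` is a hypothetical helper, not part of App.js:

```javascript
// Hypothetical mapping of getUserMedia rejection names to user-facing messages.
function describeMediaError(err) {
  switch (err.name) {
    case 'NotAllowedError':
      return 'Microphone access was denied. Please allow it and try again.';
    case 'NotFoundError':
      return 'No microphone was found on this device.';
    default:
      return err.message || 'An unexpected error occurred.';
  }
}
```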
2. WebSocket Errors

Comprehensive logging with connection details for debugging.
Reference: App.js:131-153
3. Processing Errors

Server-side errors are displayed via toast notifications.
Reference: App.js:181-183
