React Voice Visualizer uses the MediaRecorder API combined with the MediaStream API to capture audio from the user’s microphone. This page explains the recording lifecycle, state management, and error handling.
When startRecording() is called, the library requests microphone access:
```tsx
// From useVoiceVisualizer.tsx:151-188
const getUserMedia = () => {
  setIsProcessingStartRecording(true);
  navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then((stream) => {
      setIsCleared(false);
      setIsProcessingStartRecording(false);
      setIsRecordingInProgress(true);
      setPrevTime(performance.now());
      setAudioStream(stream);

      // Create Web Audio API nodes
      audioContextRef.current = new window.AudioContext();
      analyserRef.current = audioContextRef.current.createAnalyser();
      dataArrayRef.current = new Uint8Array(
        analyserRef.current.frequencyBinCount
      );
      sourceRef.current =
        audioContextRef.current.createMediaStreamSource(stream);
      sourceRef.current.connect(analyserRef.current);

      // Start MediaRecorder
      mediaRecorderRef.current = new MediaRecorder(stream);
      mediaRecorderRef.current.addEventListener(
        'dataavailable',
        handleDataAvailable
      );
      mediaRecorderRef.current.start();

      if (onStartRecording) onStartRecording();
      recordingFrame();
    })
    .catch((error) => {
      setIsProcessingStartRecording(false);
      setError(
        error instanceof Error
          ? error
          : new Error('Error starting audio recording')
      );
    });
};
```
1. **Request Permission** — `getUserMedia({ audio: true })` prompts the user for microphone access.
2. **Initialize Audio Processing** — creates the `AudioContext` and `AnalyserNode`, then connects the stream for real-time visualization.
3. **Start Recording** — initializes the `MediaRecorder` and begins capturing audio chunks.
4. **Begin Visualization** — calls `recordingFrame()` to start the animation loop.
The isProcessingStartRecording state is true between when the user clicks “Record” and when recording actually begins. This is useful for showing a loading indicator while waiting for permissions.
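Together with the other flags, this lends itself to a single status label in the UI. The helper below is purely illustrative (`recorderStatus` is not part of the library):

```typescript
// Hypothetical helper: derive a UI status label from the hook's flags.
// Not part of react-voice-visualizer; shown only to illustrate the states.
function recorderStatus(
  isProcessingStartRecording: boolean,
  isRecordingInProgress: boolean,
  isPausedRecording: boolean
): string {
  if (isProcessingStartRecording) return "Requesting mic access";
  if (isRecordingInProgress && isPausedRecording) return "Paused";
  if (isRecordingInProgress) return "Recording";
  return "Idle";
}

console.log(recorderStatus(true, false, false)); // while the permission prompt is open
console.log(recorderStatus(false, true, false)); // while actively recording
```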
The recordingFrame loop uses requestAnimationFrame for smooth updates synchronized with the display's refresh rate (typically ~60fps). On each frame, getByteTimeDomainData fills the Uint8Array with time-domain audio samples in the range 0–255, where 128 represents silence.
See Visualization for details on how this data is rendered.
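Because the byte samples are centered on 128, drawing code typically rescales them to a signed range first. A sketch of that normalization (`normalizeSamples` is an illustrative helper, not the library's code):

```typescript
// Illustrative helper: convert getByteTimeDomainData's 0-255 byte samples
// (128 = silence) into floats in the range [-1, 1].
function normalizeSamples(data: Uint8Array): number[] {
  return Array.from(data, (byte) => (byte - 128) / 128);
}

const frame = new Uint8Array([0, 128, 255]);
console.log(normalizeSamples(frame)); // [-1, 0, 0.9921875]
```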
The togglePauseResume() function handles both pause and resume:
```tsx
// From useVoiceVisualizer.tsx:340-356
const togglePauseResume = () => {
  if (isRecordingInProgress) {
    setIsPausedRecording((prevPaused) => !prevPaused);

    if (mediaRecorderRef.current?.state === 'recording') {
      // Pause
      mediaRecorderRef.current?.pause();
      setRecordingTime((prev) => prev + (performance.now() - prevTime));
      if (rafRecordingRef.current) {
        cancelAnimationFrame(rafRecordingRef.current);
      }
      if (onPausedRecording) onPausedRecording();
    } else {
      // Resume
      rafRecordingRef.current = requestAnimationFrame(recordingFrame);
      mediaRecorderRef.current?.resume();
      setPrevTime(performance.now());
      if (onResumedRecording) onResumedRecording();
    }
    return;
  }
  // ... playback logic
};
```
When paused, the recordingFrame animation loop is cancelled to freeze the visualization, and MediaRecorder.pause() suspends capture; the underlying microphone stream stays open, however, so recording can resume instantly.
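The elapsed-time bookkeeping in the pause branch — adding `performance.now() - prevTime` to `recordingTime` on pause and resetting `prevTime` on resume — can be modeled as a small standalone timer (an illustrative sketch, not the hook's actual code):

```typescript
// Illustrative model of the recordingTime / prevTime accumulation pattern:
// time only accrues while running; pausing freezes the total.
class RecordingTimer {
  private accumulatedMs = 0;               // plays the role of recordingTime
  private startedAt: number | null = null; // plays the role of prevTime

  // `now` is injected for testability; a browser would pass performance.now()
  start(now: number): void {
    this.startedAt = now;
  }

  pause(now: number): void {
    if (this.startedAt !== null) {
      this.accumulatedMs += now - this.startedAt; // setRecordingTime(prev + ...)
      this.startedAt = null;
    }
  }

  elapsed(now: number): number {
    return this.startedAt === null
      ? this.accumulatedMs
      : this.accumulatedMs + (now - this.startedAt);
  }
}

const timer = new RecordingTimer();
timer.start(0);      // recording begins at t=0
timer.pause(1500);   // paused after 1.5 s
timer.start(5000);   // resumed at t=5 s
console.log(timer.elapsed(6000)); // 2500
```

Note that the paused interval (1.5 s to 5 s) contributes nothing to the total, matching how the hook only advances recordingTime while the MediaRecorder is in the `'recording'` state.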
The hook exposes several state values for UI integration:
```tsx
const {
  isRecordingInProgress,       // true when actively recording
  isPausedRecording,           // true when recording is paused
  isProcessingStartRecording,  // true while waiting for permissions
  recordingTime,               // milliseconds elapsed
  formattedRecordingTime,      // formatted as "MM:SS"
  recordedBlob,                // the recorded audio Blob
  error,                       // any error that occurred
} = useVoiceVisualizer();
```
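formattedRecordingTime presumably derives from recordingTime; a minimal MM:SS formatter along those lines might look like the following (`formatTime` is a hypothetical helper, not the library's exact implementation):

```typescript
// Hypothetical MM:SS formatter for a millisecond duration.
function formatTime(ms: number): string {
  const totalSeconds = Math.floor(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${String(minutes).padStart(2, "0")}:${String(seconds).padStart(2, "0")}`;
}

console.log(formatTime(0));      // "00:00"
console.log(formatTime(65_000)); // "01:05"
```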