
Overview

React Voice Visualizer uses the MediaRecorder API combined with the MediaStream API to capture audio from the user’s microphone. This page explains the recording lifecycle, state management, and error handling.

Recording Lifecycle

1. Starting a Recording

When startRecording() is called, the library requests microphone access:
// From useVoiceVisualizer.tsx:151-188
const getUserMedia = () => {
  setIsProcessingStartRecording(true);

  navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then((stream) => {
      setIsCleared(false);
      setIsProcessingStartRecording(false);
      setIsRecordingInProgress(true);
      setPrevTime(performance.now());
      setAudioStream(stream);
      
      // Create Web Audio API nodes
      audioContextRef.current = new window.AudioContext();
      analyserRef.current = audioContextRef.current.createAnalyser();
      dataArrayRef.current = new Uint8Array(
        analyserRef.current.frequencyBinCount
      );
      sourceRef.current = audioContextRef.current.createMediaStreamSource(stream);
      sourceRef.current.connect(analyserRef.current);
      
      // Start MediaRecorder
      mediaRecorderRef.current = new MediaRecorder(stream);
      mediaRecorderRef.current.addEventListener('dataavailable', handleDataAvailable);
      mediaRecorderRef.current.start();
      
      if (onStartRecording) onStartRecording();
      recordingFrame();
    })
    .catch((error) => {
      setIsProcessingStartRecording(false);
      setError(error instanceof Error ? error : new Error('Error starting audio recording'));
    });
};
This sequence breaks down into four steps:

1. Request Permission: getUserMedia({ audio: true }) prompts the user for microphone access.
2. Initialize Audio Processing: creates an AudioContext and AnalyserNode, and connects the stream for real-time visualization.
3. Start Recording: initializes MediaRecorder and begins capturing audio chunks.
4. Begin Visualization: calls recordingFrame() to start the animation loop.

The isProcessingStartRecording state is true between when the user clicks “Record” and when recording actually begins. This is useful for showing a loading indicator while waiting for permissions.

2. Recording States

The hook manages three distinct recording states:
| State | Description | Hook Properties |
| --- | --- | --- |
| Idle | Not recording, no active session | isRecordingInProgress: false |
| Recording | Actively capturing audio | isRecordingInProgress: true, isPausedRecording: false |
| Paused | Recording paused, can be resumed | isRecordingInProgress: true, isPausedRecording: true |
// State declarations (useVoiceVisualizer.tsx:24-25)
const [isRecordingInProgress, setIsRecordingInProgress] = useState(false);
const [isPausedRecording, setIsPausedRecording] = useState(false);
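Because the three states are derived from just these two booleans, a small helper can map them to a display label. The function and type names below are illustrative, not part of the library:

```typescript
// Hypothetical helper (not a library API): derive a display status
// from the two state flags exposed by useVoiceVisualizer.
type RecordingStatus = "idle" | "recording" | "paused";

function getRecordingStatus(
  isRecordingInProgress: boolean,
  isPausedRecording: boolean
): RecordingStatus {
  if (!isRecordingInProgress) return "idle"; // no active session
  return isPausedRecording ? "paused" : "recording";
}

console.log(getRecordingStatus(false, false)); // "idle"
console.log(getRecordingStatus(true, false)); // "recording"
console.log(getRecordingStatus(true, true)); // "paused"
```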

3. Real-time Data Capture

While recording, the recordingFrame() function continuously extracts audio data:
// From useVoiceVisualizer.tsx:190-194
const recordingFrame = () => {
  analyserRef.current!.getByteTimeDomainData(dataArrayRef.current!);
  setAudioData(new Uint8Array(dataArrayRef.current!));
  rafRecordingRef.current = requestAnimationFrame(recordingFrame);
};
This uses requestAnimationFrame for smooth, synchronized updates at ~60fps. The getByteTimeDomainData method fills the Uint8Array with time-domain audio samples ranging from 0-255.
See Visualization for details on how this data is rendered.
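If you want to process the raw samples yourself, note that getByteTimeDomainData centers silence at 128, so each byte can be normalized to a signed amplitude in [-1, 1]. A minimal sketch (helper names are illustrative, not library APIs):

```typescript
// Convert the analyser's 0-255 time-domain bytes to signed
// amplitudes in [-1, 1]; 128 represents silence.
function normalizeTimeDomain(data: Uint8Array): number[] {
  return Array.from(data, (v) => (v - 128) / 128);
}

// Peak absolute amplitude of a frame, useful for a simple level meter.
function peakLevel(data: Uint8Array): number {
  return Math.max(...normalizeTimeDomain(data).map(Math.abs));
}

const silent = new Uint8Array([128, 128, 128]);
console.log(peakLevel(silent)); // 0

const loud = new Uint8Array([0, 128, 255]);
console.log(peakLevel(loud)); // 1
```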

Pause and Resume

The togglePauseResume() function handles both pause and resume:
// From useVoiceVisualizer.tsx:340-356
const togglePauseResume = () => {
  if (isRecordingInProgress) {
    setIsPausedRecording((prevPaused) => !prevPaused);
    
    if (mediaRecorderRef.current?.state === 'recording') {
      // Pause
      mediaRecorderRef.current?.pause();
      setRecordingTime((prev) => prev + (performance.now() - prevTime));
      if (rafRecordingRef.current) {
        cancelAnimationFrame(rafRecordingRef.current);
      }
      if (onPausedRecording) onPausedRecording();
    } else {
      // Resume
      rafRecordingRef.current = requestAnimationFrame(recordingFrame);
      mediaRecorderRef.current?.resume();
      setPrevTime(performance.now());
      if (onResumedRecording) onResumedRecording();
    }
    return;
  }
  // ... playback logic
};
When paused, the recordingFrame animation loop is cancelled to freeze the visualization; the MediaRecorder suspends capture but keeps the stream open and retains the audio captured so far, so recording can resume seamlessly.

Recording Time Tracking

The hook stores elapsed recording time in milliseconds, updating the value once per second via an interval:
// From useVoiceVisualizer.tsx:65-77
useEffect(() => {
  if (!isRecordingInProgress || isPausedRecording) return;

  const updateTimer = () => {
    const timeNow = performance.now();
    setRecordingTime((prev) => prev + (timeNow - prevTime));
    setPrevTime(timeNow);
  };

  const interval = setInterval(updateTimer, 1000);

  return () => clearInterval(interval);
}, [prevTime, isPausedRecording, isRecordingInProgress]);
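The effect accumulates time as deltas between timestamps rather than counting ticks, which is why pauses simply stop contributing. The pattern can be sketched as a pure function (names here are illustrative, not library APIs):

```typescript
// Sketch of the delta-accumulation pattern used by the timer effect:
// each tick adds (now - prevTime) to the running total and records
// the new timestamp. While paused, no ticks fire, so no time is added.
function accumulate(
  recordingTime: number,
  prevTime: number,
  now: number
): { recordingTime: number; prevTime: number } {
  return { recordingTime: recordingTime + (now - prevTime), prevTime: now };
}

let state = { recordingTime: 0, prevTime: 0 };
state = accumulate(state.recordingTime, state.prevTime, 1000); // +1000 ms
state = accumulate(state.recordingTime, state.prevTime, 2500); // +1500 ms
console.log(state.recordingTime); // 2500
```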
The recordingTime state (in milliseconds) is automatically formatted:
// From useVoiceVisualizer.tsx:59
const formattedRecordingTime = formatRecordingTime(recordingTime); // e.g., "02:34"
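The internals of formatRecordingTime are not shown above; a minimal "MM:SS" formatter that matches the documented output shape could look like this (a sketch, not the library's actual implementation):

```typescript
// Minimal "MM:SS" formatter sketch: floor milliseconds to whole
// seconds, split into minutes and seconds, and zero-pad both parts.
function formatMMSS(ms: number): string {
  const totalSeconds = Math.floor(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${pad(minutes)}:${pad(seconds)}`;
}

console.log(formatMMSS(154000)); // "02:34"
console.log(formatMMSS(0)); // "00:00"
```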

Stopping a Recording

When stopRecording() is called, the library performs cleanup and processes the audio:
// From useVoiceVisualizer.tsx:220-241
const stopRecording = () => {
  if (!isRecordingInProgress) return;

  setIsRecordingInProgress(false);
  
  // Stop MediaRecorder - triggers 'dataavailable' event
  if (mediaRecorderRef.current) {
    mediaRecorderRef.current.stop();
    mediaRecorderRef.current.removeEventListener('dataavailable', handleDataAvailable);
  }
  
  // Stop all audio tracks
  audioStream?.getTracks().forEach((track) => track.stop());
  
  // Cancel animation frame
  if (rafRecordingRef.current) cancelAnimationFrame(rafRecordingRef.current);
  
  // Disconnect Web Audio nodes
  if (sourceRef.current) sourceRef.current.disconnect();
  if (audioContextRef.current && audioContextRef.current.state !== 'closed') {
    void audioContextRef.current.close();
  }
  
  _setIsProcessingAudioOnComplete(true);
  setRecordingTime(0);
  setIsPausedRecording(false);
  if (onStopRecording) onStopRecording();
};

Blob Generation

When mediaRecorder.stop() is called, it triggers the dataavailable event:
// From useVoiceVisualizer.tsx:196-203
const handleDataAvailable = (event: BlobEvent) => {
  if (!mediaRecorderRef.current) return;

  mediaRecorderRef.current = null;
  audioRef.current = new Audio();
  setRecordedBlob(event.data);
  void processBlob(event.data);
};
The processBlob function converts the Blob to an AudioBuffer:
// From useVoiceVisualizer.tsx:109-134
const processBlob = async (blob: Blob) => {
  if (!blob) return;

  try {
    if (blob.size === 0) {
      throw new Error('Error: The audio blob is empty');
    }
    
    // Create URL for <audio> playback
    const audioSrcFromBlob = URL.createObjectURL(blob);
    setAudioSrc(audioSrcFromBlob);

    // Decode to AudioBuffer for waveform generation
    const audioBuffer = await blob.arrayBuffer();
    const audioContext = new AudioContext();
    const buffer = await audioContext.decodeAudioData(audioBuffer);
    setBufferFromRecordedBlob(buffer);
    setDuration(buffer.duration - 0.06);

    setError(null);
  } catch (error) {
    console.error('Error processing the audio blob:', error);
    setError(
      error instanceof Error
        ? error
        : new Error('Error processing the audio blob')
    );
  }
};
The duration is adjusted by -0.06 seconds to account for a small padding that appears in decoded audio buffers (useVoiceVisualizer.tsx:123).

Error Handling

The library handles several error scenarios:

Permission Denied

If the user denies microphone access:
.catch((error) => {
  setIsProcessingStartRecording(false);
  setError(error instanceof Error ? error : new Error('Error starting audio recording'));
});
Access the error via:
const { error } = useVoiceVisualizer();

useEffect(() => {
  if (error) {
    console.error('Recording error:', error);
    // Show user-friendly message
  }
}, [error]);
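getUserMedia rejects with a DOMException whose name field identifies the failure (for example NotAllowedError when permission is denied, or NotFoundError when no input device exists). One common pattern is to map those names to friendly messages; the helper and message strings below are illustrative, not part of the library:

```typescript
// Hypothetical helper: translate getUserMedia rejection names into
// user-facing messages. The error names are standard; the strings
// are illustrative.
function describeRecordingError(error: Error): string {
  switch (error.name) {
    case "NotAllowedError":
      return "Microphone access was denied. Please allow it in your browser settings.";
    case "NotFoundError":
      return "No microphone was found on this device.";
    default:
      return "An unexpected recording error occurred.";
  }
}

const denied = new Error("Permission denied");
denied.name = "NotAllowedError";
console.log(describeRecordingError(denied));
```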

Empty Blob

If the recording produces no audio data:
if (blob.size === 0) {
  throw new Error('Error: The audio blob is empty');
}

Audio Processing Errors

If decoding the audio buffer fails (invalid format, corruption, etc.), the error is caught and stored in state.
When an error occurs, the clearCanvas() function is automatically called to reset the UI (useVoiceVisualizer.tsx:79-85).

State Management Patterns

The hook exposes several state values for UI integration:
const {
  isRecordingInProgress,     // true when actively recording
  isPausedRecording,          // true when recording is paused
  isProcessingStartRecording, // true while waiting for permissions
  recordingTime,              // milliseconds elapsed
  formattedRecordingTime,     // formatted as "MM:SS"
  recordedBlob,               // the recorded audio Blob
  error,                      // any error that occurred
} = useVoiceVisualizer();
Example UI implementation:
{isProcessingStartRecording && <Spinner />}
{isRecordingInProgress && (
  <div>
    <span>{formattedRecordingTime}</span>
    <button onClick={togglePauseResume}>
      {isPausedRecording ? 'Resume' : 'Pause'}
    </button>
    <button onClick={stopRecording}>Stop</button>
  </div>
)}

Preloaded Audio Blobs (v2.x.x)

Starting in version 2.x.x, you can load audio from external sources using setPreloadedAudioBlob().
// From useVoiceVisualizer.tsx:136-149
const setPreloadedAudioBlob = (blob: Blob) => {
  if (blob instanceof Blob) {
    clearCanvas();
    setIsPreloadedBlob(true);
    setIsCleared(false);
    _setIsProcessingAudioOnComplete(true);
    setIsRecordingInProgress(false);
    setRecordingTime(0);
    setIsPausedRecording(false);
    audioRef.current = new Audio();
    setRecordedBlob(blob);
    void processBlob(blob);
  }
};
Usage example:
const controls = useVoiceVisualizer();

const handleFileUpload = (event: ChangeEvent<HTMLInputElement>) => {
  const file = event.target.files?.[0];
  if (file) {
    controls.setPreloadedAudioBlob(file);
  }
};

return (
  <div>
    <input type="file" accept="audio/*" onChange={handleFileUpload} />
    <VoiceVisualizer controls={controls} />
  </div>
);

Next Steps

- Visualization: learn how audio data is rendered on canvas
- Playback: understand audio playback controls
- Customization: customize recording UI and behavior
- Hook API: complete useVoiceVisualizer API reference
