When recording stops, the library processes the AudioBuffer to generate bar data:
```tsx
// From VoiceVisualizer.tsx:262-270 and getBarsData.ts:3-45
const bufferData = bufferFromRecordedBlob.getChannelData(0);
run({
  bufferData, // Float32Array of PCM samples
  height: canvasCurrentHeight,
  width: canvasWidth,
  barWidth: formattedBarWidth,
  gap: formattedGap,
});
```
The getBarsData function processes the raw PCM data:
```typescript
// From getBarsData.ts:3-45
export const getBarsData = ({
  bufferData,
  height,
  width,
  barWidth,
  gap,
}: GetBarsDataParams): BarsData[] => {
  // Calculate how many bars fit in the canvas
  const units = width / (barWidth + gap * barWidth);
  // Samples per bar
  const step = Math.floor(bufferData.length / units);
  const halfHeight = height / 2;

  let barsData: BarsData[] = [];
  let maxDataPoint = 0;

  // For each bar position
  for (let i = 0; i < units; i++) {
    const maximums: number[] = [];
    let maxCount = 0;

    // Find average of positive samples in this segment
    for (let j = 0; j < step && i * step + j < bufferData.length; j++) {
      const result = bufferData[i * step + j];
      if (result > 0) {
        maximums.push(result);
        maxCount++;
      }
    }

    const maxAvg = maximums.reduce((a, c) => a + c, 0) / maxCount;
    if (maxAvg > maxDataPoint) {
      maxDataPoint = maxAvg;
    }

    barsData.push({ max: maxAvg });
  }

  // Normalize to use 95% of canvas height
  if (halfHeight * 0.95 > maxDataPoint * halfHeight) {
    const adjustmentFactor = (halfHeight * 0.95) / maxDataPoint;
    barsData = barsData.map((bar) => ({
      max: bar.max > 0.01 ? bar.max * adjustmentFactor : 1,
    }));
  }

  return barsData;
};
```
1. **Segmentation** — The AudioBuffer is divided into segments, one per bar.
2. **Averaging** — Each segment's positive samples are averaged to get the bar height.
3. **Normalization** — Heights are scaled to use 95% of the available canvas height.
4. **Minimum Height** — Bars below 0.01 are set to 1 pixel so they remain visible.
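The steps above can be exercised on synthetic PCM data. The following is a condensed, standalone sketch of the same pipeline (the name `computeBars` and the synthetic buffer are illustrative, not part of the library's API; unlike the library code, it also guards against segments with no positive samples):

```typescript
interface Bar { max: number }

// Condensed sketch of the segmentation / averaging / normalization steps
function computeBars(
  bufferData: Float32Array,
  height: number,
  width: number,
  barWidth: number,
  gap: number
): Bar[] {
  const units = width / (barWidth + gap * barWidth); // bars that fit
  const step = Math.floor(bufferData.length / units); // samples per bar
  const halfHeight = height / 2;

  let bars: Bar[] = [];
  let maxDataPoint = 0;

  for (let i = 0; i < units; i++) {
    // Average the positive samples in this segment
    let sum = 0;
    let count = 0;
    for (let j = 0; j < step && i * step + j < bufferData.length; j++) {
      const sample = bufferData[i * step + j];
      if (sample > 0) {
        sum += sample;
        count++;
      }
    }
    const avg = count > 0 ? sum / count : 0;
    if (avg > maxDataPoint) maxDataPoint = avg;
    bars.push({ max: avg });
  }

  // Scale the tallest bar up to 95% of half the canvas height
  if (halfHeight * 0.95 > maxDataPoint * halfHeight) {
    const factor = (halfHeight * 0.95) / maxDataPoint;
    bars = bars.map((b) => ({ max: b.max > 0.01 ? b.max * factor : 1 }));
  }
  return bars;
}

// Synthetic buffer: 100 samples at a constant amplitude of 0.5
const buffer = new Float32Array(100).fill(0.5);
const bars = computeBars(buffer, 100, 40, 2, 1); // → 10 bars of height 47.5
```

With `height = 100`, half the canvas is 50px, so the constant 0.5 amplitude is scaled by `47.5 / 0.5 = 95` to fill 95% of the half-height.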
This processing happens in a Web Worker to avoid blocking the main thread (VoiceVisualizer.tsx:152-160). The useWebWorker hook manages the worker lifecycle.
```tsx
<VoiceVisualizer
  controls={controls}
  barWidth={3} // Width of each bar in pixels
  gap={2}      // Gap multiplier (gap = barWidth * gap)
/>
```
The effective gap is calculated as:
```typescript
// From VoiceVisualizer.tsx:143
const unit = formattedBarWidth + formattedGap * formattedBarWidth;
```
Examples:

- `barWidth={2}`, `gap={1}` → 2px bars with 2px gaps (4px total per unit)
- `barWidth={4}`, `gap={0.5}` → 4px bars with 2px gaps (6px total per unit)
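These figures can be checked with a small helper (the function names here are hypothetical, for illustration only):

```typescript
// Pixels occupied by one bar plus its trailing gap
function unitWidth(barWidth: number, gap: number): number {
  return barWidth + gap * barWidth;
}

// How many whole bars fit in a canvas of the given width
function barCount(canvasWidth: number, barWidth: number, gap: number): number {
  return Math.floor(canvasWidth / unitWidth(barWidth, gap));
}

unitWidth(2, 1);     // → 4 (2px bar + 2px gap)
unitWidth(4, 0.5);   // → 6 (4px bar + 2px gap)
barCount(400, 3, 2); // → 44 bars on a 400px-wide canvas
```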
On mobile devices (screenWidth < 768), barWidth is automatically increased by 1 pixel when gap > 0 for better visibility (VoiceVisualizer.tsx:140-142).
The canvas uses devicePixelRatio for crisp rendering on high-DPI displays:
```typescript
// From VoiceVisualizer.tsx:338-356
function onResize() {
  if (!canvasContainerRef.current || !canvasRef.current) return;

  indexSpeedRef.current = formattedSpeed;

  // Round to even number for symmetry
  const roundedHeight =
    Math.trunc(
      (canvasContainerRef.current.clientHeight * window.devicePixelRatio) / 2
    ) * 2;

  setCanvasCurrentWidth(canvasContainerRef.current.clientWidth);
  setCanvasCurrentHeight(roundedHeight);
  setCanvasWidth(
    Math.round(
      canvasContainerRef.current.clientWidth * window.devicePixelRatio
    )
  );

  setIsResizing(false);
}
```
On a Retina display (devicePixelRatio = 2), a 400px wide canvas will have an internal resolution of 800px, but be styled to display at 400px. This prevents blurry visualization.
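A minimal sketch of that sizing arithmetic (function names are illustrative, not the library's exports):

```typescript
// Internal canvas width in device pixels for a given CSS width
function deviceWidth(cssWidth: number, dpr: number): number {
  return Math.round(cssWidth * dpr);
}

// Height rounded down to an even number of device pixels, so the
// waveform's horizontal center line sits on a whole pixel
function evenDeviceHeight(cssHeight: number, dpr: number): number {
  return Math.trunc((cssHeight * dpr) / 2) * 2;
}

deviceWidth(400, 2);        // → 800 internal pixels on a Retina display
evenDeviceHeight(150.5, 2); // → 300 (301 rounded down to even)
```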
When recorded audio is present, resize is debounced to avoid excessive re-rendering. The isProcessingOnResize state indicates when the waveform is being regenerated (VoiceVisualizer.tsx:171-173).
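The debounce itself is a standard pattern; a generic sketch (not the library's implementation) of how a burst of resize events collapses into a single waveform regeneration:

```typescript
// Generic debounce: only the last call within `ms` actually runs
function debounce<T extends (...args: any[]) => void>(
  fn: T,
  ms: number
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

let regenerations = 0;
const onResizeDebounced = debounce(() => { regenerations++; }, 100);

// A burst of resize events results in a single regeneration
onResizeDebounced();
onResizeDebounced();
onResizeDebounced();
// regenerations is still 0 here; it becomes 1 after 100ms of quiet
```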