Overview
The sound engine provides low-level functions to play sounds in vanilla JavaScript or non-React environments. It’s built on the Web Audio API and includes automatic audio context management and buffer caching.
When to Use the Sound Engine
Use the sound engine directly when:
Building vanilla JavaScript applications
Working with non-React frameworks (Vue, Svelte, Angular)
Needing fine-grained control over audio nodes
Implementing custom audio processing
Handling server-side rendering, where audio code must run only on the client
For React applications, use the useSound hook instead; it provides better ergonomics and automatic cleanup.
Installation
The sound engine is available in your project after adding sounds:
import {
  getAudioContext,
  decodeAudioData,
  playSound
} from "@/lib/sound-engine";
Core Functions
getAudioContext
Returns the shared AudioContext instance:
function getAudioContext(): AudioContext
import { getAudioContext } from "@/lib/sound-engine";

const ctx = getAudioContext();
console.log(ctx.state); // "running", "suspended", or "closed"
console.log(ctx.sampleRate); // e.g., 44100
let audioContext: AudioContext | null = null;

export function getAudioContext(): AudioContext {
  if (!audioContext) {
    audioContext = new AudioContext();
  }
  return audioContext;
}
Singleton AudioContext instance shared across all sounds
The AudioContext is created lazily on first call. Browsers may initially suspend the context until user interaction occurs.
AudioContext States
suspended: Context is paused, usually due to the browser's autoplay policy. Call ctx.resume() in response to user interaction.
running: Context is active and can play audio. This is the normal operating state.
closed: Context has been shut down and cannot be used. Create a new context instead.
decodeAudioData
Decodes a base64 data URI into an AudioBuffer:
function decodeAudioData(dataUri: string): Promise<AudioBuffer>
Basic Usage
import { decodeAudioData } from "@/lib/sound-engine";

const dataUri = "data:audio/mpeg;base64,//uQx...";
const buffer = await decodeAudioData(dataUri);

console.log(buffer.duration); // Duration in seconds
console.log(buffer.numberOfChannels); // 1 (mono) or 2 (stereo)
console.log(buffer.sampleRate); // e.g., 44100 Hz
With Sound Asset
import clickSound from "@/sounds/click-elegant";
import { decodeAudioData } from "@/lib/sound-engine";

const buffer = await decodeAudioData(clickSound.dataUri);
// buffer is now ready to play
Error Handling
try {
  const buffer = await decodeAudioData(dataUri);
  console.log("Decoded successfully", buffer.duration);
} catch (error) {
  console.error("Failed to decode audio:", error);
  // Handle invalid audio data or unsupported format
}
dataUri: Base64-encoded data URI in the format data:audio/mpeg;base64,...
Returns: Decoded audio buffer ready for playback. Resolves from cache if previously decoded.
Automatic Caching
Buffers are cached to avoid redundant decoding:
const bufferCache = new Map<string, AudioBuffer>();

export async function decodeAudioData(dataUri: string): Promise<AudioBuffer> {
  const cached = bufferCache.get(dataUri);
  if (cached) return cached; // Return immediately

  // Decode and cache for next time
  // (base64-to-bytes conversion elided here; see "Decoding Process" below)
  const audioBuffer = await ctx.decodeAudioData(bytes.buffer);
  bufferCache.set(dataUri, audioBuffer);
  return audioBuffer;
}
The first call decodes the audio (a few milliseconds for typical UI sounds); subsequent calls return instantly from the cache.
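The same cache-then-decode pattern can be written as a generic async memoizer. This is a sketch, not part of the engine's API (the name memoizeAsync is hypothetical):

```typescript
// Generic async memoizer using the same Map-based pattern as the
// engine's buffer cache (a sketch; memoizeAsync is not part of the API).
function memoizeAsync<K, V>(fn: (key: K) => Promise<V>): (key: K) => Promise<V> {
  const cache = new Map<K, V>();
  return async (key: K) => {
    const cached = cache.get(key);
    if (cached !== undefined) return cached; // Cache hit: skip the work
    const value = await fn(key);
    cache.set(key, value); // Cache after the promise resolves
    return value;
  };
}
```

Note that this caches the resolved value, so two concurrent first calls for the same key both do the work; caching the Promise itself instead would avoid that duplicate decode.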
Decoding Process
The function converts base64 to binary:
Extract Base64
Split data URI and extract the base64 portion after the comma
Decode to Binary
Use atob() to convert base64 string to binary string
Create Byte Array
Copy binary string into Uint8Array for typed binary data
Decode Audio
Pass ArrayBuffer to AudioContext.decodeAudioData()
const base64 = dataUri.split(",")[1];
const binaryString = atob(base64);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
  bytes[i] = binaryString.charCodeAt(i);
}
const audioBuffer = await ctx.decodeAudioData(bytes.buffer.slice(0));
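The base64 steps above can be factored into a standalone helper that needs no AudioContext, which makes the conversion easy to verify in isolation. A sketch; the name dataUriToBytes is not part of the engine's API:

```typescript
// Convert a base64 data URI into a Uint8Array of raw bytes
// (dataUriToBytes is a hypothetical name, not part of the sound engine).
function dataUriToBytes(dataUri: string): Uint8Array {
  // Extract the base64 payload after the comma
  const base64 = dataUri.split(",")[1];
  if (base64 === undefined) {
    throw new Error("Not a data URI");
  }
  // Decode base64 to a binary string, then copy into a typed array
  const binaryString = atob(base64);
  const bytes = new Uint8Array(binaryString.length);
  for (let i = 0; i < binaryString.length; i++) {
    bytes[i] = binaryString.charCodeAt(i);
  }
  return bytes;
}
```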
playSound
High-level function to play a sound with options:
function playSound(
  dataUri: string,
  options?: PlaySoundOptions
): Promise<SoundPlayback>
Basic
import { playSound } from "@/lib/sound-engine";
import clickSound from "@/sounds/click-elegant";

// Play at default volume and speed
const playback = await playSound(clickSound.dataUri);

// Stop playback manually
playback.stop();
With Options
const playback = await playSound(sound.dataUri, {
  volume: 0.5,
  playbackRate: 1.2,
  onEnd: () => console.log("Finished")
});
Multiple Sounds
// Play multiple sounds simultaneously
const p1 = await playSound(sound1.dataUri);
const p2 = await playSound(sound2.dataUri, { volume: 0.3 });
const p3 = await playSound(sound3.dataUri, { volume: 0.6 });

// Stop all
p1.stop();
p2.stop();
p3.stop();
dataUri: Base64-encoded audio data URI from a SoundAsset
options: Optional configuration object
options.volume: Volume level from 0 (silent) to 1 (full volume)
options.playbackRate: Playback speed multiplier. Values < 1 slow down, > 1 speed up
options.onEnd: Callback invoked when the sound finishes playing naturally
Returns: Promise resolving to a playback control object with a stop() method
PlaySoundOptions Interface
interface PlaySoundOptions {
  volume?: number; // 0 to 1
  playbackRate?: number; // Speed multiplier
  onEnd?: () => void; // Completion callback
}
SoundPlayback Interface
interface SoundPlayback {
  stop: () => void; // Stop playback immediately
}
Event Handler
document.getElementById("btn")?.addEventListener("click", async () => {
  await playSound(clickSound.dataUri, { volume: 0.5 });
});
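Sounds can also be chained sequentially by wrapping each play in a Promise resolved from onEnd. A sketch, with the playSound-compatible function injected so the chaining logic stays engine-agnostic (playSequence and PlayFn are hypothetical names, not part of the API):

```typescript
// Play sounds one after another by resolving a Promise in onEnd.
// playOne stands in for playSound; injecting it keeps this helper
// independent of the engine (a sketch, not part of the API).
type PlayFn = (
  dataUri: string,
  options?: { onEnd?: () => void }
) => Promise<{ stop: () => void }>;

async function playSequence(playOne: PlayFn, dataUris: string[]): Promise<void> {
  for (const uri of dataUris) {
    // Wait for the current sound to finish before starting the next
    await new Promise<void>((resolve) => {
      void playOne(uri, { onEnd: () => resolve() });
    });
  }
}
```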
Advanced Usage
Custom Audio Graph
Build custom audio processing with direct AudioContext access:
import { getAudioContext, decodeAudioData } from "@/lib/sound-engine";
import sound from "@/sounds/laser-zap";

async function playWithReverb() {
  const ctx = getAudioContext();
  if (ctx.state === "suspended") {
    await ctx.resume();
  }

  // Decode audio
  const buffer = await decodeAudioData(sound.dataUri);

  // Create nodes
  const source = ctx.createBufferSource();
  const convolver = ctx.createConvolver();
  const dry = ctx.createGain();
  const wet = ctx.createGain();

  // Configure
  source.buffer = buffer;
  convolver.buffer = await loadImpulseResponse();
  dry.gain.value = 0.7;
  wet.gain.value = 0.3;

  // Connect graph
  source.connect(dry);
  source.connect(convolver);
  convolver.connect(wet);
  dry.connect(ctx.destination);
  wet.connect(ctx.destination);

  // Play
  source.start(0);
}
Scheduling Multiple Sounds
Use AudioContext time to schedule precise playback:
import { getAudioContext, decodeAudioData } from "@/lib/sound-engine";

async function playBeat(sounds: SoundAsset[], bpm: number) {
  const ctx = getAudioContext();
  const interval = 60 / bpm; // Time between beats, in seconds

  // Decode all buffers first; awaiting inside the scheduling loop would
  // shift ctx.currentTime between iterations and make beats drift
  const buffers = await Promise.all(
    sounds.map((s) => decodeAudioData(s.dataUri))
  );

  const startBase = ctx.currentTime;
  buffers.forEach((buffer, i) => {
    const source = ctx.createBufferSource();
    const gain = ctx.createGain();
    source.buffer = buffer;
    gain.gain.value = 0.5;
    source.connect(gain);
    gain.connect(ctx.destination);

    // Schedule exact start time relative to a fixed base
    source.start(startBase + i * interval);
  });
}

playBeat([kick, snare, hihat, snare], 120); // 120 BPM
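The scheduling arithmetic can be checked in isolation by factoring it into a pure helper (beatStartTimes is a hypothetical name, not part of the engine):

```typescript
// Compute absolute start times for `count` beats at the given tempo,
// offset from a base time such as ctx.currentTime.
function beatStartTimes(bpm: number, count: number, base: number): number[] {
  const interval = 60 / bpm; // seconds per beat
  return Array.from({ length: count }, (_, i) => base + i * interval);
}
```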
Audio Analysis
Analyze audio in real-time:
import { getAudioContext, decodeAudioData } from "@/lib/sound-engine";

class AudioVisualizer {
  ctx: AudioContext;
  analyser: AnalyserNode;
  dataArray: Uint8Array;

  constructor() {
    this.ctx = getAudioContext();
    this.analyser = this.ctx.createAnalyser();
    this.analyser.fftSize = 256;
    this.dataArray = new Uint8Array(this.analyser.frequencyBinCount);

    // Connect analyser to destination
    this.analyser.connect(this.ctx.destination);
  }

  async play(dataUri: string) {
    const buffer = await decodeAudioData(dataUri);
    const source = this.ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(this.analyser); // Tap into the audio stream
    source.start(0);
    this.visualize();
  }

  visualize() {
    requestAnimationFrame(() => this.visualize());
    this.analyser.getByteFrequencyData(this.dataArray);
    // Draw visualization using dataArray
    console.log("Frequency data:", this.dataArray);
  }
}
Dynamic Volume Control
Adjust volume during playback:
import { getAudioContext, decodeAudioData } from "@/lib/sound-engine";

async function playWithFade(dataUri: string) {
  const ctx = getAudioContext();
  const buffer = await decodeAudioData(dataUri);

  const source = ctx.createBufferSource();
  const gain = ctx.createGain();

  source.buffer = buffer;
  source.connect(gain);
  gain.connect(ctx.destination);

  // Start silent
  gain.gain.value = 0;

  // Fade in over 1 second
  gain.gain.linearRampToValueAtTime(1, ctx.currentTime + 1);

  // Fade out during the last second
  const fadeOutStart = ctx.currentTime + buffer.duration - 1;
  gain.gain.setValueAtTime(1, fadeOutStart);
  gain.gain.linearRampToValueAtTime(0, fadeOutStart + 1);

  source.start(0);
}
Implementation Reference
Full source code of the sound engine:
getAudioContext
let audioContext: AudioContext | null = null;

export function getAudioContext(): AudioContext {
  if (!audioContext) {
    audioContext = new AudioContext();
  }
  return audioContext;
}
Browser Compatibility
The sound engine uses the Web Audio API, which is supported in:
Chrome 14+
Firefox 25+
Safari 6+
Edge 12+
Opera 15+
iOS Safari 6+
Android Chrome
All modern browsers implement autoplay policies. AudioContext may start in “suspended” state until user interaction occurs.
Handling Autoplay Policy
import { getAudioContext } from "@/lib/sound-engine";

// Resume the context on first user interaction
document.addEventListener("click", async () => {
  const ctx = getAudioContext();
  if (ctx.state === "suspended") {
    await ctx.resume();
    console.log("Audio context resumed");
  }
}, { once: true });
Performance Considerations
Decoded buffers are cached indefinitely. A 100KB MP3 becomes ~4MB AudioBuffer in memory. Monitor memory usage for large sound libraries.
Each playSound() call creates new audio nodes. Modern browsers handle dozens of simultaneous sounds, but hundreds may cause performance issues.
First playback of a sound includes decoding time (1-10ms for typical UI sounds). Preload critical sounds by calling decodeAudioData() early.
Check and resume AudioContext state before playing. Suspended contexts prevent all audio output.
Best Practices
Preload Important Sounds : Call decodeAudioData() during app initialization for frequently used sounds.
Reuse AudioContext : Always use getAudioContext() instead of creating new contexts. Multiple contexts can cause audio glitches.
Clean Up Sources : Call stop() on playback objects when sounds should end. Completed sounds clean up automatically.
Handle Suspended State : Always check and resume AudioContext in response to user interaction. Never assume it starts in “running” state.
Avoid Blocking : decodeAudioData() is async. Don’t await in tight loops - batch decode or preload instead.
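The preload advice above can be sketched as a warm-up routine run once at startup. The decode function is injected here only to keep the sketch self-contained (in the app you would pass decodeAudioData); preloadSounds is a hypothetical name, not part of the API:

```typescript
// Warm the decode cache for a list of sounds at startup. Decodes run
// concurrently via Promise.all; failures are collected rather than
// thrown so one bad asset doesn't block the rest (a sketch).
async function preloadSounds<T>(
  decode: (dataUri: string) => Promise<T>,
  dataUris: string[]
): Promise<{ ok: number; failed: string[] }> {
  const failed: string[] = [];
  await Promise.all(
    dataUris.map(async (uri) => {
      try {
        await decode(uri);
      } catch {
        failed.push(uri);
      }
    })
  );
  return { ok: dataUris.length - failed.length, failed };
}
```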
Troubleshooting
If no sound plays:
Check the AudioContext state with getAudioContext().state
If suspended, call await ctx.resume() after a user interaction
Check the browser console for Web Audio API errors
Test with headphones to rule out speaker issues
If sound is distorted:
Reduce volume; values greater than 1 cause clipping
Check for too many simultaneous sounds
Verify audio files aren't corrupted
Test on a different device to rule out hardware issues
If memory usage keeps growing:
Call stop() on playback objects when done
Remove event listeners that reference audio objects
Completed sounds clean up automatically
The buffer cache persists; this is intentional for performance
Next Steps