The AudioManager class provides centralized control over audio session configuration, device management, and audio processing for LiveKit.
Singleton Access
Audio Device Management (macOS)
List of available output devices (macOS only).
List of available input devices (macOS only).
Currently selected output device. On macOS, you can set this to change the output device.
Currently selected input device. On macOS, you can set this to change the input device.
The default output device for the system.
The default input device for the system.
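As a sketch, macOS device selection might look like the following. The property names (outputDevices, outputDevice) follow the descriptions above, but verify them against the SDK version you are using:

```swift
import LiveKit

let manager = AudioManager.shared

// List available output devices (macOS only).
for device in manager.outputDevices {
    print("Output:", device)
}

// Switch playback to another output device, if one is available.
// Assumes `outputDevice` is settable on macOS as described above.
if let preferred = manager.outputDevices.first {
    manager.outputDevice = preferred
}
```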
Audio Session Configuration (iOS/tvOS/visionOS)
Determines whether the device’s built-in speaker or receiver is preferred for audio output.
- true: Speaker is preferred
- false: Receiver is preferred
Only applies when audio output is routed to the built-in speaker or receiver. Ignored if customConfigureAudioSessionFunc is set.
Specifies a fixed configuration for the audio session, overriding dynamic adjustments. When set, takes precedence over dynamic configuration logic including isSpeakerOutputPreferred. Ignored if customConfigureAudioSessionFunc is set.
Voice Processing
The main flag that determines whether to enable Voice-Processing I/O of the internal AVAudioEngine. Setting this to false prevents voice-processing initialization, and muted-talker detection will not work.
Bypass Voice-Processing I/O of the internal AVAudioEngine. Can be toggled at runtime without restarting the AudioEngine.
Enable or bypass the Auto Gain Control of the internal AVAudioEngine. Can be toggled at runtime.
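A minimal sketch of adjusting these flags. The property names isVoiceProcessingBypassed and isVoiceProcessingAGCEnabled, and whether setVoiceProcessingEnabled(_:) throws, are assumptions to verify against the SDK:

```swift
import LiveKit

let manager = AudioManager.shared

// Disable voice processing entirely before the engine starts.
// Note: this also disables muted-talker detection.
try manager.setVoiceProcessingEnabled(false)

// Or keep voice processing enabled, but bypass it temporarily at
// runtime (no engine restart required). Property names are assumed.
manager.isVoiceProcessingBypassed = true

// Bypass only the Auto Gain Control stage.
manager.isVoiceProcessingAGCEnabled = false
```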
Audio Processing
Delegate to modify the local audio buffer before it is sent to the network.
Only one delegate can be set at a time. If you only need to observe (not modify) the buffer, use add(localAudioRenderer:) instead.
Delegate to modify the combined remote audio buffer (all tracks) before it is played to the user.
Only one delegate can be set at a time. If you only need to observe the buffer, use add(remoteAudioRenderer:) instead.
Ducking Control
Enables “advanced ducking” of other audio while using Apple’s voice processing APIs. When enabled, the system dynamically adjusts ducking based on voice activity:
- More ducking when someone is speaking
- Less ducking when neither side is speaking (SharePlay/FaceTime-like behavior)
Defaults to false, which keeps a fixed ducking behavior with minimal ducking to keep other audio as loud as possible.
Controls how much other audio is reduced (“ducked”) while using Apple’s voice processing APIs. Available on iOS 17+, macOS 14.0+, visionOS 1.0+.
- .min: Keep other audio as loud as possible (SDK default)
- .default: Apple’s historical fixed ducking amount
- .max: Better voice intelligibility, more ducking
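Configuring ducking might look like the following sketch; the property names isAdvancedDuckingEnabled and duckingLevel are assumptions based on the descriptions above:

```swift
import LiveKit

let manager = AudioManager.shared

// Enable dynamic, activity-based ducking (SharePlay/FaceTime-like).
manager.isAdvancedDuckingEnabled = true

// On supported OS versions, trade loudness of other audio for
// voice intelligibility. Case names follow the list above.
if #available(iOS 17.0, macOS 14.0, visionOS 1.0, *) {
    manager.duckingLevel = .max
}
```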
Recording
Whether recording is kept initialized for low-latency publishing.
The mute state of the internal audio engine, using the Voice Processing I/O mute API. Normally handled automatically, but can be set manually if needed.
Whether manual rendering (no-device) mode is enabled.
Whether the internal AVAudioEngine is currently running.
The current availability state of the audio engine.
Callbacks
Callback invoked when audio devices change.
Detect voice activity even if the mic is muted.
The internal audio engine must be initialized by calling prepareRecording() or connecting to a room.
Methods
setVoiceProcessingEnabled(_:)
Set whether voice processing is enabled.
setManualRenderingMode(_:)
Enable manual rendering mode, where you provide audio buffers.
- Provide audio buffers via AudioManager.shared.mixer.capture(appAudio:)
- Remote audio will not play automatically
- Get remote audio with add(remoteAudioRenderer:) or per-track renderers
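A minimal sketch of manual rendering mode, assuming setManualRenderingMode(_:) can throw and mixer.capture(appAudio:) accepts an AVAudioPCMBuffer (both worth verifying against the SDK):

```swift
import AVFoundation
import LiveKit

let manager = AudioManager.shared

// Enter manual rendering (no-device) mode: the SDK stops driving
// hardware I/O and you supply buffers yourself.
try manager.setManualRenderingMode(true)

// Feed your own PCM buffers as the app-audio input.
// `appBuffer` stands in for a buffer your app produces elsewhere.
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!
let appBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 480)!
manager.mixer.capture(appAudio: appBuffer)
```

Remember that in this mode remote audio does not play automatically; tap it with add(remoteAudioRenderer:) or per-track renderers as noted above.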
setRecordingAlwaysPreparedMode(_:)
Prepare the microphone capture pipeline for low-latency publishing.
- Audio engine starts configured for mic input in a muted state
- Keeps recording initialized and pre-warms voice processing
- Persists across Room lifecycles until disabled
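Usage might look like the following sketch (whether the method throws is an assumption):

```swift
import LiveKit

// Pre-warm the mic pipeline before any Room exists, so the first
// unmute/publish is low-latency. Persists until explicitly disabled.
try AudioManager.shared.setRecordingAlwaysPreparedMode(true)

// Later, when low-latency capture is no longer needed:
try AudioManager.shared.setRecordingAlwaysPreparedMode(false)
```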
startLocalRecording()
Start mic input to the SDK even without a Room or connection. Audio is delivered to LocalAudioTrack renderers and capturePostProcessingDelegate.
stopLocalRecording()
Stop mic input started with startLocalRecording().
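A sketch of a standalone mic check using this pair of methods (error-handling details are assumptions):

```swift
import LiveKit

// Run mic capture without a Room, e.g. for a pre-join mic check.
// Buffers flow to LocalAudioTrack renderers and the
// capturePostProcessingDelegate while recording is active.
try AudioManager.shared.startLocalRecording()

// Stop when the mic check is done.
AudioManager.shared.stopLocalRecording()
```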
setEngineAvailability(_:)
Set whether the internal AVAudioEngine is allowed to run.
add(localAudioRenderer:)
Add a renderer to receive PCM buffers from local input (mic).
add(remoteAudioRenderer:)
Add a renderer to receive PCM buffers from combined remote audio. For per-track audio, use RemoteAudioTrack.add(audioRenderer:) instead.
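An observe-only renderer might be sketched as follows, assuming the renderer protocol delivers AVAudioPCMBuffers via a render(pcmBuffer:) callback (verify the exact protocol name and requirements in the SDK):

```swift
import AVFoundation
import LiveKit

// A read-only tap on the combined remote audio, e.g. for level metering.
final class LevelMeter: AudioRenderer {
    func render(pcmBuffer: AVAudioPCMBuffer) {
        // Inspect (do not modify) the buffer here.
        guard let samples = pcmBuffer.floatChannelData?[0] else { return }
        let frames = Int(pcmBuffer.frameLength)
        var peak: Float = 0
        for i in 0..<frames { peak = max(peak, abs(samples[i])) }
        print("peak:", peak)
    }
}

let meter = LevelMeter()
AudioManager.shared.add(remoteAudioRenderer: meter)
```

Use the delegate properties above instead only when you need to modify the buffer, since only one delegate can be set at a time.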
set(engineObservers:)
Set a chain of AudioEngineObservers. The chain contains an AudioSessionEngineObserver initially.
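Replacing the observer chain might look like this sketch. MyEngineObserver is a hypothetical type conforming to AudioEngineObserver; consult the protocol for its actual requirements:

```swift
import LiveKit

// Replace the engine-observer chain. The SDK installs an
// AudioSessionEngineObserver by default; keep one in the chain if
// you still want automatic audio session management.
// `MyEngineObserver` is hypothetical and stands in for your own
// type conforming to AudioEngineObserver.
let observers: [any AudioEngineObserver] = [
    AudioSessionEngineObserver(),
    MyEngineObserver(),
]
AudioManager.shared.set(engineObservers: observers)
```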