LiveKit provides automatic audio session management for iOS, tvOS, and visionOS through the AudioSessionEngineObserver class.
Automatic Configuration
By default, the SDK automatically configures the AVAudioSession based on the state of audio tracks and the audio engine. This is handled by the AudioSessionEngineObserver, which is included in AudioManager.shared’s default engine observers.
// The audio session observer is enabled by default
let audioSession = AudioManager.shared.audioSession
Disabling Automatic Configuration
If you want to manually configure the AVAudioSession, disable automatic configuration:
AudioManager.shared.audioSession.isAutomaticConfigurationEnabled = false
Set this before connecting to a room, so the SDK never applies its own configuration first.
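As an illustration, a sketch of disabling automatic configuration and applying a manual session setup before connecting. The `connect(url:token:)` call and the exact category options shown are illustrative; adjust them to your app's needs:

```swift
import AVFoundation
import LiveKit

func connectWithManualAudioSession(url: String, token: String) async throws {
    // Opt out of the SDK's automatic AVAudioSession management first
    AudioManager.shared.audioSession.isAutomaticConfigurationEnabled = false

    // Apply your own configuration before connecting
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    try session.setActive(true)

    // Connect only after the session is configured
    let room = Room()
    try await room.connect(url: url, token: token)
}
```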
Speaker vs Receiver Output
Control whether audio routes to the speaker or receiver:
// Prefer speaker output (default)
AudioManager.shared.isSpeakerOutputPreferred = true
// Prefer receiver output (for phone-call style audio)
AudioManager.shared.isSpeakerOutputPreferred = false
Fixed Session Configuration
You can specify a fixed AVAudioSession configuration that overrides all dynamic logic:
let config = AudioSessionConfiguration(
    category: .playAndRecord,
    mode: .videoChat,
    options: [.allowBluetooth, .defaultToSpeaker]
)
AudioManager.shared.sessionConfiguration = config
When set, this takes precedence over:
- isSpeakerOutputPreferred
- Dynamic configuration based on track state
- Any other automatic adjustments
Custom Configuration Function (Deprecated)
For advanced use cases, you can provide a custom function to configure the audio session:
AudioManager.shared.customConfigureAudioSessionFunc = { newState, oldState in
    // Your custom audio session configuration
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, mode: .voiceChat)
    try? session.setActive(true)
}
This method is deprecated. Use set(engineObservers:) with a custom AudioSessionEngineObserver instead.
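A hedged sketch of the recommended replacement. The subclass body is left empty because the exact override points exposed by AudioSessionEngineObserver depend on your SDK version; consult its API reference before implementing:

```swift
import LiveKit

// Placeholder subclass: override the engine lifecycle hooks provided by
// AudioSessionEngineObserver in your SDK version to customize session handling.
final class MyAudioSessionObserver: AudioSessionEngineObserver {
    // Custom AVAudioSession configuration logic goes here.
}

// Register it as the engine observer chain, replacing the default observer.
AudioManager.shared.set(engineObservers: [MyAudioSessionObserver()])
```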
Voice Processing
Voice Processing I/O
The SDK uses Apple’s Voice Processing I/O by default on iOS devices for optimal audio quality:
// Check if voice processing is enabled
let isEnabled = AudioManager.shared.isVoiceProcessingEnabled
// Enable or disable voice processing (requires engine restart)
try AudioManager.shared.setVoiceProcessingEnabled(true)
Bypassing Voice Processing
You can bypass voice processing at runtime without restarting the engine:
// Bypass voice processing
AudioManager.shared.isVoiceProcessingBypassed = true
// Re-enable voice processing
AudioManager.shared.isVoiceProcessingBypassed = false
Auto Gain Control
Toggle Automatic Gain Control (AGC) independently of the other voice-processing features:
// Enable AGC
AudioManager.shared.isVoiceProcessingAGCEnabled = true
// Disable AGC
AudioManager.shared.isVoiceProcessingAGCEnabled = false
Engine Availability
For scenarios like CallKit where you need to control when the audio engine runs:
// Prevent the engine from running
try AudioManager.shared.setEngineAvailability(.disabled(reason: "CallKit hold"))
// Allow the engine to run
try AudioManager.shared.setEngineAvailability(.enabled)
// Check current availability
let availability = AudioManager.shared.engineAvailability
When disabled:
- The engine will stop if running
- The engine will not start, even if recording/playback is requested
- When re-enabled, pending requests are honored
Ducking Control
Basic Ducking
Control how much other audio is reduced while in a call:
if #available(iOS 17.0, *) {
    // Minimal ducking (default) - keep other audio loud
    AudioManager.shared.duckingLevel = .min

    // Apple's default ducking
    AudioManager.shared.duckingLevel = .default

    // Maximum ducking - best voice clarity
    AudioManager.shared.duckingLevel = .max
}
Advanced Ducking
Enable dynamic ducking based on voice activity:
// Enable advanced ducking (FaceTime-like behavior)
AudioManager.shared.isAdvancedDuckingEnabled = true
With advanced ducking enabled:
- More ducking when someone is speaking
- Less ducking during silence
- Works independently of duckingLevel
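Because the two settings are independent, they can be combined — for example, FaceTime-like dynamic ducking on top of a minimal baseline level:

```swift
// Dynamic, voice-activity-based ducking
AudioManager.shared.isAdvancedDuckingEnabled = true

// Baseline ducking level still applies independently (iOS 17+)
if #available(iOS 17.0, *) {
    AudioManager.shared.duckingLevel = .min
}
```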
Examples
Basic Setup
class CallViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Configure for speaker output
        AudioManager.shared.isSpeakerOutputPreferred = true

        // Enable minimal ducking
        if #available(iOS 17.0, *) {
            AudioManager.shared.duckingLevel = .min
        }
    }
}
Manual Configuration
class AdvancedCallViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // Disable automatic configuration
        AudioManager.shared.audioSession.isAutomaticConfigurationEnabled = false

        // Configure manually
        configureAudioSession()
    }

    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(
                .playAndRecord,
                mode: .videoChat,
                options: [.allowBluetooth, .defaultToSpeaker]
            )
            try session.setActive(true)
        } catch {
            print("Failed to configure audio session: \(error)")
        }
    }
}
CallKit Integration
class CallKitManager {
    func providerDidBegin(_ provider: CXProvider) {
        // Prevent audio engine from starting during setup
        try? AudioManager.shared.setEngineAvailability(.disabled(reason: "CallKit setup"))
    }

    func providerDidActivate(_ provider: CXProvider) {
        // Allow audio engine to start
        try? AudioManager.shared.setEngineAvailability(.enabled)
    }
}