RemoteAudioTrack represents an audio track received from a remote participant. It provides methods to control playback volume and access audio buffers for visualization or processing.

Overview

Remote audio tracks are automatically created and managed by the SDK when a remote participant publishes an audio track. You typically interact with them through delegate callbacks rather than creating them directly.

Receiving Remote Audio

Subscribe to remote audio tracks through delegate methods:
extension MyClass: RoomDelegate {
    func room(
        _ room: Room,
        participant: RemoteParticipant,
        didSubscribe track: RemoteTrack,
        publication: RemoteTrackPublication
    ) {
        if let audioTrack = track as? RemoteAudioTrack {
            // Audio track subscribed and ready
            print("Subscribed to audio from \(participant.identity)")
        }
    }
}

Properties

volume

public var volume: Double { get set }
Controls the playback volume for this specific audio track. Range is 0.0 to 1.0:
  • 0.0: Muted (no audio playback)
  • 1.0: Full volume (default)
Example:
// Reduce volume to 50%
audioTrack.volume = 0.5

// Mute specific participant
audioTrack.volume = 0.0
The volume property controls WebRTC’s audio source volume, not system volume. Changes are applied immediately and only affect this specific track.

Inherited Properties

From the base Track class:
  • name: The track name
  • sid: The server-assigned track ID
  • kind: Always .audio for audio tracks
  • source: Track source (.microphone, etc.)
  • isMuted: Whether the remote participant has muted this track
  • trackState: Current state (.started or .stopped)
  • statistics: Real-time statistics if enabled
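As a quick sketch, the inherited properties above can be read directly after subscribing (property names as listed; exact optionality and types may vary by SDK version):

```swift
// Sketch: inspecting inherited Track properties on a subscribed audio track.
// Assumes the property names listed above; verify types against your SDK version.
func logTrackInfo(_ audioTrack: RemoteAudioTrack) {
    print("name: \(audioTrack.name)")
    print("sid: \(String(describing: audioTrack.sid))")
    print("kind: \(audioTrack.kind)")       // .audio for audio tracks
    print("source: \(audioTrack.source)")   // e.g. .microphone
    print("muted by remote: \(audioTrack.isMuted)")
}
```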

Methods

add(audioRenderer:)

Adds an AudioRenderer to receive audio buffers from this track.
public func add(audioRenderer: AudioRenderer)
Parameters:
  • audioRenderer (AudioRenderer): An object conforming to the AudioRenderer protocol that will receive PCM audio buffers.
Use audio renderers for:
  • Audio visualization (waveforms, level meters)
  • Recording remote audio
  • Custom audio processing or analysis
  • Mixing multiple audio tracks
Example:
class AudioLevelMonitor: AudioRenderer {
    func render(pcmBuffer: AVAudioPCMBuffer) {
        // Calculate audio level
        let level = calculateLevel(from: pcmBuffer)
        // Update UI or trigger events
    }
}

let monitor = AudioLevelMonitor()
audioTrack.add(audioRenderer: monitor)
Audio renderers receive buffers on a background thread. Do not perform heavy processing in the render method as it may cause audio glitches. Consider using a separate processing queue.
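One way to follow that advice is to leave the render callback immediately and do the real work on a dedicated serial queue, as in this sketch (the class name and queue label are illustrative):

```swift
// Sketch: keep the render path cheap by offloading analysis to a serial queue.
class BufferedAudioProcessor: AudioRenderer {
    private let processingQueue = DispatchQueue(label: "com.example.audio.processing")

    func render(pcmBuffer: AVAudioPCMBuffer) {
        // Note: if the SDK reuses buffers, copy out the samples you need here
        // before dispatching, rather than capturing the buffer itself.
        processingQueue.async {
            // Heavy analysis (FFT, recording, metering) happens here,
            // off the audio render thread.
        }
    }
}
```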

remove(audioRenderer:)

Removes a previously added AudioRenderer.
public func remove(audioRenderer: AudioRenderer)
Parameters:
  • audioRenderer (AudioRenderer): The renderer to remove.
Example:
audioTrack.remove(audioRenderer: monitor)

start()

Starts playback of the audio track. Inherited from Track.
public func start() async throws
Remote tracks are typically started automatically when subscribed. Manual control is available for advanced use cases.

stop()

Stops playback of the audio track. Inherited from Track.
public func stop() async throws
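A minimal sketch of manual control, which is normally unnecessary since remote tracks start automatically on subscription:

```swift
// Sketch: manually stopping and later restarting playback of a remote track.
// Both calls are async and can throw; handle errors appropriately.
func pausePlayback(of track: RemoteAudioTrack) async {
    do {
        try await track.stop()
        // ... later, resume:
        try await track.start()
    } catch {
        print("Track control failed: \(error)")
    }
}
```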

set(reportStatistics:)

Enables or disables statistics reporting at runtime.
public func set(reportStatistics: Bool) async
Parameters:
  • reportStatistics (Bool): Pass true to enable statistics collection, false to disable it.

Track Lifecycle

  1. Publication: Remote participant publishes audio track
  2. Discovery: Local participant receives track publication in delegate callback
  3. Subscription: SDK automatically subscribes (or manual subscription if auto-subscribe is disabled)
  4. Playback: Audio automatically plays through device speakers
  5. Unsubscription: Track stops when participant unpublishes or leaves
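When auto-subscribe is disabled (step 3), subscription can be requested explicitly from the publication. This sketch assumes a `set(subscribed:)` method on RemoteTrackPublication and a `didPublishTrack` delegate callback; verify both names against your SDK version:

```swift
// Sketch: opting in to a track manually when auto-subscribe is disabled.
// Assumed API: RemoteTrackPublication.set(subscribed:) — check your SDK version.
func room(
    _ room: Room,
    participant: RemoteParticipant,
    didPublishTrack publication: RemoteTrackPublication
) {
    Task {
        try await publication.set(subscribed: true)
    }
}
```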

Audio Playback Control

Volume Control

Control volume per participant:
// Store participant audio tracks
var participantVolumes: [String: RemoteAudioTrack] = [:]

func room(
    _ room: Room,
    participant: RemoteParticipant,
    didSubscribe track: RemoteTrack,
    publication: RemoteTrackPublication
) {
    if let audioTrack = track as? RemoteAudioTrack {
        participantVolumes[participant.sid] = audioTrack
        
        // Set custom volume for this participant
        audioTrack.volume = 0.8
    }
}

// Later, adjust volume
if let track = participantVolumes[participantSid] {
    track.volume = newVolume
}

Mute Detection

Detect when remote participants mute/unmute:
func room(
    _ room: Room,
    participant: RemoteParticipant,
    publication: RemoteTrackPublication,
    didUpdateIsMuted isMuted: Bool
) {
    if publication.kind == .audio {
        print("\(participant.identity) audio is now \(isMuted ? "muted" : "unmuted")")
    }
}

Audio Rendering

Implement AudioRenderer for custom audio processing:
class AudioVisualizer: AudioRenderer {
    func render(pcmBuffer: AVAudioPCMBuffer) {
        guard let channelData = pcmBuffer.floatChannelData else { return }
        
        let frameLength = Int(pcmBuffer.frameLength)
        let channelCount = Int(pcmBuffer.format.channelCount)
        
        // Process audio samples
        for channel in 0..<channelCount {
            let samples = UnsafeBufferPointer(
                start: channelData[channel],
                count: frameLength
            )
            
            // Calculate RMS level for visualization
            let rms = sqrt(samples.map { $0 * $0 }.reduce(0, +) / Float(frameLength))
            
            DispatchQueue.main.async {
                self.updateVisualization(level: rms)
            }
        }
    }
    
    private func updateVisualization(level: Float) {
        // Update UI with audio level
    }
}

let visualizer = AudioVisualizer()
audioTrack.add(audioRenderer: visualizer)

Thread Safety

RemoteAudioTrack is @unchecked Sendable and thread-safe. All public methods can be called from any thread. Audio renderer callbacks are delivered on background threads.

Example Usage

class AudioManager: RoomDelegate {
    var audioTracks: [String: RemoteAudioTrack] = [:]
    let visualizer = AudioVisualizer()
    
    func room(
        _ room: Room,
        participant: RemoteParticipant,
        didSubscribe track: RemoteTrack,
        publication: RemoteTrackPublication
    ) {
        guard let audioTrack = track as? RemoteAudioTrack else { return }
        
        // Store reference
        audioTracks[participant.sid] = audioTrack
        
        // Add visualizer
        audioTrack.add(audioRenderer: visualizer)
        
        // Set volume based on participant role
        if participant.metadata?.contains("presenter") == true {
            audioTrack.volume = 1.0 // Full volume for presenter
        } else {
            audioTrack.volume = 0.7 // Reduced volume for others
        }
        
        // Enable statistics
        Task {
            await audioTrack.set(reportStatistics: true)
        }
    }
    
    func room(
        _ room: Room,
        participant: RemoteParticipant,
        didUnsubscribe track: RemoteTrack,
        publication: RemoteTrackPublication
    ) {
        if track is RemoteAudioTrack {
            audioTracks.removeValue(forKey: participant.sid)
        }
    }
    
    // Mute all remote audio
    func muteAll() {
        audioTracks.values.forEach { $0.volume = 0.0 }
    }
    
    // Restore audio
    func unmuteAll() {
        audioTracks.values.forEach { $0.volume = 1.0 }
    }
}
