Custom Capturer

LiveKit provides base classes and protocols for implementing custom video capturers. This is useful for advanced scenarios like AR capture, custom video sources, or specialized processing pipelines.

BufferCapturer

The BufferCapturer class allows you to capture video from CMSampleBuffer or CVPixelBuffer sources. This is the simplest way to integrate custom video sources.

Creating a Buffer Track

let options = BufferCaptureOptions(
    dimensions: .h1080_169,
    fps: 15
)

let track = LocalVideoTrack.createBufferTrack(
    name: "custom-video",
    source: .camera,
    options: options
)

Capturing Frames

From CMSampleBuffer

if let capturer = track.capturer as? BufferCapturer {
    // Capture a CMSampleBuffer (e.g., from AVCaptureVideoDataOutput)
    capturer.capture(sampleBuffer)
}

From CVPixelBuffer

if let capturer = track.capturer as? BufferCapturer {
    capturer.capture(
        pixelBuffer,
        timeStampNs: VideoCapturer.createTimeStampNs(),
        rotation: ._0
    )
}

BufferCaptureOptions

Configure buffer capture behavior:
Property     Type         Default       Description
dimensions   Dimensions   .h1080_169    Target dimensions
fps          Int          15            Expected frame rate
let options = BufferCaptureOptions(
    dimensions: .h720_169,
    fps: 30
)

Supported Pixel Formats

The SDK supports the following pixel formats:
let supported = VideoCapturer.supportedPixelFormats
for format in supported {
    print("Format: \(format.toString())")
}
Typically supported formats:
  • kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
  • kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
  • kCVPixelFormatType_32BGRA
  • kCVPixelFormatType_32ARGB
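
Before handing frames to the capturer, you can check a buffer's format against this list. A minimal sketch (the `isFormatSupported` helper is not part of the SDK, and it assumes the elements of `supportedPixelFormats` compare as `OSType` values):

```swift
import CoreVideo
import LiveKit

// Hypothetical helper: verify a CVPixelBuffer's format before capturing.
// Assumes supportedPixelFormats elements are OSType-comparable.
func isFormatSupported(_ pixelBuffer: CVPixelBuffer) -> Bool {
    let format = CVPixelBufferGetPixelFormatType(pixelBuffer)
    return VideoCapturer.supportedPixelFormats.contains { $0 == format }
}
```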

Creating a Custom VideoCapturer

For more advanced scenarios, you can subclass VideoCapturer to create fully custom capture implementations.

Subclassing VideoCapturer

import LiveKit
import AVFoundation

public class CustomVideoCapturer: VideoCapturer {
    private let capturer = RTC.createVideoCapturer()
    public let options: CustomCaptureOptions
    
    init(delegate: LKRTCVideoCapturerDelegate, options: CustomCaptureOptions) {
        self.options = options
        super.init(delegate: delegate)
    }
    
    public override func startCapture() async throws -> Bool {
        let didStart = try await super.startCapture()
        guard didStart else { return false }
        
        // Initialize your capture source
        try await setupCustomSource()
        
        return true
    }
    
    public override func stopCapture() async throws -> Bool {
        let didStop = try await super.stopCapture()
        guard didStop else { return false }
        
        // Clean up your capture source
        await cleanupCustomSource()
        
        return true
    }
    
    private func setupCustomSource() async throws {
        // Your custom setup logic
    }
    
    private func cleanupCustomSource() async {
        // Your custom cleanup logic
    }
    
    // Capture frames from your custom source
    func onFrameAvailable(_ pixelBuffer: CVPixelBuffer) {
        capture(
            pixelBuffer: pixelBuffer,
            capturer: capturer,
            options: options
        )
    }
}

Creating Custom Options

public final class CustomCaptureOptions: NSObject, VideoCaptureOptions {
    public let dimensions: Dimensions
    public let fps: Int
    public let customParameter: String
    
    public init(
        dimensions: Dimensions = .h720_169,
        fps: Int = 30,
        customParameter: String = "default"
    ) {
        self.dimensions = dimensions
        self.fps = fps
        self.customParameter = customParameter
    }
}

Factory Method

public extension LocalVideoTrack {
    static func createCustomTrack(
        name: String = "custom",
        options: CustomCaptureOptions = CustomCaptureOptions()
    ) -> LocalVideoTrack {
        let videoSource = RTC.createVideoSource(forScreenShare: false)
        let capturer = CustomVideoCapturer(
            delegate: videoSource,
            options: options
        )
        return LocalVideoTrack(
            name: name,
            source: .camera,
            capturer: capturer,
            videoSource: videoSource
        )
    }
}

VideoCapturer Base Class

The VideoCapturer base class provides:

Properties

  • dimensions: Dimensions? - Current capture dimensions
  • captureState: CapturerState - Current state (.stopped or .started)
  • processor: VideoProcessor? - Optional video processor for frame processing
  • delegates: MulticastDelegate<VideoCapturerDelegate> - Delegate collection

Methods

  • startCapture() async throws -> Bool - Start capturing (call super.startCapture() first)
  • stopCapture() async throws -> Bool - Stop capturing (call super.stopCapture() first)
  • restartCapture() async throws -> Bool - Restart capturing
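
These members can be combined to guard capture calls against a stopped capturer; a sketch under the assumption that `BufferCapturer` is being used:

```swift
import CoreVideo
import LiveKit

// Sketch: only feed frames while the capturer reports .started,
// otherwise attempt a restart.
func feedIfRunning(_ capturer: BufferCapturer, _ buffer: CVPixelBuffer) async throws {
    if capturer.captureState == .started {
        capturer.capture(buffer)
    } else {
        _ = try await capturer.restartCapture()
    }
}
```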

Capture Methods

// Capture from RTCVideoFrame
func capture(frame: LKRTCVideoFrame,
             capturer: LKRTCVideoCapturer,
             device: AVCaptureDevice? = nil,
             options: VideoCaptureOptions)

// Capture from CVPixelBuffer
func capture(pixelBuffer: CVPixelBuffer,
             capturer: LKRTCVideoCapturer,
             timeStampNs: Int64 = VideoCapturer.createTimeStampNs(),
             rotation: VideoRotation = ._0,
             options: VideoCaptureOptions)

// Capture from CMSampleBuffer
func capture(sampleBuffer: CMSampleBuffer,
             capturer: LKRTCVideoCapturer,
             options: VideoCaptureOptions)

Video Processing

All capturers support optional video processing via the VideoProcessor protocol:
class CustomProcessor: VideoProcessor {
    func process(frame: VideoFrame) -> VideoFrame? {
        // Process or transform the frame
        return frame
    }
}

let processor = CustomProcessor()
let track = LocalVideoTrack.createBufferTrack(
    options: options,
    processor: processor
)
The processor is called for every captured frame before it’s sent to the encoder.

ReplayKit Integration

Use BufferCapturer for ReplayKit screen capture in broadcast extensions:
import ReplayKit
import LiveKit

class SampleHandler: RPBroadcastSampleHandler {
    var capturer: BufferCapturer?
    
    override func processSampleBuffer(
        _ sampleBuffer: CMSampleBuffer,
        with sampleBufferType: RPSampleBufferType
    ) {
        switch sampleBufferType {
        case .video:
            capturer?.capture(sampleBuffer)
        default:
            break
        }
    }
}

Timestamp Generation

Create accurate timestamps for captured frames:
let timeStampNs = VideoCapturer.createTimeStampNs()
This generates a timestamp based on system uptime in nanoseconds.
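
If your source already carries timing information, such as a CMSampleBuffer's presentation timestamp, you can convert that to nanoseconds instead of generating a new timestamp; a sketch using standard CoreMedia calls:

```swift
import CoreMedia

// Sketch: derive a nanosecond timestamp from a sample buffer's PTS
func timeStampNs(for sampleBuffer: CMSampleBuffer) -> Int64 {
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    return Int64(CMTimeGetSeconds(pts) * 1_000_000_000)
}
```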

Best Practices

Dimension Requirements

  • Ensure dimensions are even numbers (required for video encoding)
  • The SDK validates with dimensions.isEncodeSafe
  • Invalid dimensions will be rejected with a warning
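
To satisfy the even-dimension requirement, arbitrary sizes can be rounded down to even values before configuring capture. A sketch (the helper is not part of the SDK):

```swift
import LiveKit

// Sketch: round a size down to even values so it is encode-safe
func evenDimensions(width: Int32, height: Int32) -> Dimensions {
    Dimensions(width: width - (width % 2),
               height: height - (height % 2))
}
```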

Frame Publishing

At least one frame must be captured before publishing the track:
let track = LocalVideoTrack.createBufferTrack(options: options)

// Start the track (begins capture)
try await track.start()

// Capture at least one frame
if let capturer = track.capturer as? BufferCapturer {
    capturer.capture(pixelBuffer)
}

// Now safe to publish
try await room.localParticipant.publish(videoTrack: track)

Thread Safety

The VideoCapturer base class is @unchecked Sendable and uses internal synchronization. When capturing frames:
  • Capture methods can be called from any thread
  • Frame processing happens on a dedicated serial queue
  • If processing is busy, frames are dropped with a warning
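
This means you can forward frames directly from, for example, an AVCaptureVideoDataOutput delegate callback, which runs on its own dispatch queue; a sketch:

```swift
import AVFoundation
import LiveKit

class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let capturer: BufferCapturer

    init(capturer: BufferCapturer) {
        self.capturer = capturer
    }

    // Called on the output's delegate queue; safe to forward directly
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        capturer.capture(sampleBuffer)
    }
}
```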

Resource Management

Always balance startCapture() and stopCapture() calls:
try await capturer.startCapture()

// Use the capturer...

// Note: Swift does not allow `await` inside `defer`, so stop explicitly
try await capturer.stopCapture()
The base class maintains a reference count, so multiple start calls require matching stop calls.
