The LiveKit Swift SDK provides a flexible video processing pipeline that lets you transform video frames in real time before they are encoded and sent to other participants. This enables features such as background blur, virtual backgrounds, filters, and custom effects.
Overview
The video processing pipeline processes each video frame through one or more VideoProcessor implementations before encoding. Common use cases include:
- Background blur and replacement
- Beauty filters and effects
- Watermarking and overlays
- Frame analysis and computer vision
- Custom transformations
VideoProcessor Protocol
Implement the VideoProcessor protocol to create custom processors:
```swift
import LiveKit

public protocol VideoProcessor {
    func process(frame: VideoFrame) -> VideoFrame?
}
```
The processor receives a VideoFrame, processes it, and returns the transformed frame. Return nil to drop the frame.
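As a minimal illustration of this contract, here is a pass-through processor that can also drop frames. The class name and the `isEnabled` flag are illustrative, not SDK API:

```swift
import LiveKit

// Illustrative example (not part of the SDK): passes frames through
// untouched, but drops every frame while `isEnabled` is false.
class GatedVideoProcessor: NSObject, VideoProcessor {
    var isEnabled = true

    func process(frame: VideoFrame) -> VideoFrame? {
        // Returning nil drops the frame entirely
        isEnabled ? frame : nil
    }
}
```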
Creating a Custom Processor
```swift
import LiveKit
import CoreImage

class GrayscaleVideoProcessor: NSObject, VideoProcessor {
    private let ciContext = CIContext()

    func process(frame: VideoFrame) -> VideoFrame? {
        // Convert to CVPixelBuffer
        guard let pixelBuffer = frame.toCVPixelBuffer() else {
            return frame
        }

        // Create CIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)

        // Apply grayscale filter
        guard let filter = CIFilter(name: "CIPhotoEffectMono") else {
            return frame
        }
        filter.setValue(ciImage, forKey: kCIInputImageKey)

        guard let outputImage = filter.outputImage else {
            return frame
        }

        // Render back into the pixel buffer
        ciContext.render(outputImage, to: pixelBuffer)

        // Return the new frame
        return VideoFrame(
            dimensions: frame.dimensions,
            rotation: frame.rotation,
            timeStampNs: frame.timeStampNs,
            buffer: CVPixelVideoBuffer(pixelBuffer: pixelBuffer)
        )
    }
}
```
Built-in: Background Blur
The SDK includes BackgroundBlurVideoProcessor, which uses the Vision framework for person segmentation:
```swift
import LiveKit

// Create a background blur processor
let blurProcessor = BackgroundBlurVideoProcessor(highQuality: true)

// Create a video track with the processor
let cameraCapturer = CameraCapturer()
let videoTrack = LocalVideoTrack.createCameraTrack(
    capturer: cameraCapturer,
    videoProcessor: blurProcessor
)

// Or add it to an existing track
await videoTrack.set(videoProcessor: blurProcessor)
```
Background Blur Options
```swift
// High-quality segmentation (slower, more accurate)
let highQuality = BackgroundBlurVideoProcessor(highQuality: true)

// Fast segmentation (faster, less accurate)
let fastProcessor = BackgroundBlurVideoProcessor(highQuality: false)
```
How Background Blur Works
- Uses the Vision framework's VNGeneratePersonSegmentationRequest
- Generates a mask separating the person from the background
- Downscales and blurs the background
- Blends the foreground and the blurred background using the mask
```swift
// From BackgroundBlurVideoProcessor.swift:77
let processor = BackgroundBlurVideoProcessor(highQuality: true)

// Quality levels:
// - highQuality: true  → .balanced (more detailed, slower)
// - highQuality: false → .fast (less detailed, faster)
```
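The segment → blur → blend steps above can be sketched with Vision and Core Image directly. This is a simplified illustration, not the SDK's actual implementation; the function name and blur radius are arbitrary:

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

// Simplified sketch of segment → blur → blend (not the SDK implementation)
func blurBackground(of pixelBuffer: CVPixelBuffer) -> CIImage? {
    // 1. Person segmentation: produces a single-channel mask (person = white)
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let original = CIImage(cvPixelBuffer: pixelBuffer)
    var mask = CIImage(cvPixelBuffer: maskBuffer)

    // 2. Scale the (smaller) mask up to the frame size
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: original.extent.width / mask.extent.width,
        y: original.extent.height / mask.extent.height
    ))

    // 3. Blur the full frame to use as the background layer
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = original.clampedToExtent()
    blur.radius = 20
    guard let blurred = blur.outputImage?.cropped(to: original.extent) else { return nil }

    // 4. Blend: person from the original, everything else from the blurred copy
    let blend = CIFilter.blendWithMask()
    blend.inputImage = original
    blend.backgroundImage = blurred
    blend.maskImage = mask
    return blend.outputImage
}
```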
BackgroundBlurVideoProcessor is available on iOS 15.0+, macOS 12.0+, tvOS 15.0+, and visionOS 1.0+.
Attaching Processors
At Track Creation
```swift
let processor = GrayscaleVideoProcessor()
let track = LocalVideoTrack.createCameraTrack(
    capturer: cameraCapturer,
    videoProcessor: processor
)
```
To Existing Track
```swift
let processor = BackgroundBlurVideoProcessor()

// Enable the processor
await track.set(videoProcessor: processor)

// Disable the processor
await track.set(videoProcessor: nil)
```
Thread Safety
Video processors do not need to be internally thread-safe: the SDK calls process(frame:) on a dedicated serial processing queue, so non-thread-safe APIs such as CIContext are safe to use inside a processor. However, do not share mutable state across processor instances.
```swift
@objcMembers
public final class BackgroundBlurVideoProcessor:
    NSObject,
    @unchecked Sendable, // Marked @unchecked Sendable
    VideoProcessor,
    Loggable
{
    // The processor runs on a dedicated serial queue,
    // so it is safe to use non-thread-safe APIs like CIContext.
}
```
Optimization Tips
- Reuse resources: Create filters and contexts once, reuse across frames
```swift
import LiveKit
import CoreImage
import CoreImage.CIFilterBuiltins

class OptimizedProcessor: NSObject, VideoProcessor {
    // Reuse the context and filter (expensive to create per frame;
    // CIContext is Metal-backed by default on device)
    private let ciContext = CIContext()
    private let filter = CIFilter.gaussianBlur()

    func process(frame: VideoFrame) -> VideoFrame? {
        guard let buffer = frame.toCVPixelBuffer() else { return frame }
        // Reuse the filter, only swapping the input image
        filter.inputImage = CIImage(cvPixelBuffer: buffer)
        // ... render and return the processed frame
        return frame
    }
}
```
- Cache pixel buffers: Avoid reallocating buffers
```swift
// From BackgroundBlurVideoProcessor.swift:170
private var cachedPixelBuffer: CVPixelBuffer?
private var cachedPixelBufferSize: CGSize?

private func getOutputBuffer(of size: CGSize) -> CVPixelBuffer? {
    if cachedPixelBufferSize != size {
        cachedPixelBuffer = .metal(width: Int(size.width), height: Int(size.height))
        cachedPixelBufferSize = size
    }
    return cachedPixelBuffer
}
```
- Skip processing on slower devices: Process every N frames
```swift
// From BackgroundBlurVideoProcessor.swift:45
private var frameCount = 0

#if os(macOS)
private let segmentationFrameInterval = 1 // Every frame
#else
private let segmentationFrameInterval = 3 // Every 3rd frame on iOS
#endif

func process(frame: VideoFrame) -> VideoFrame? {
    frameCount += 1
    guard frameCount % segmentationFrameInterval == 0 else {
        return frame // Skip processing
    }
    // Process frame
}
```
- Downscale before expensive operations:
```swift
// From BackgroundBlurVideoProcessor.swift:98
let downscaleTransform = getDownscaleTransform(relativeTo: inputDimensions)
let downscaledImage = inputImage.transformed(by: downscaleTransform)

// Apply the expensive blur on the smaller image
blurFilter.inputImage = downscaledImage.clampedToExtent()
```
Frame Processing Pipeline
```
Camera Capturer
      ↓
VideoFrame (CVPixelBuffer)
      ↓
VideoProcessor.process()
      ↓
Transformed VideoFrame
      ↓
Video Encoder
      ↓
Network (RTP)
```
Advanced Example: Multi-Stage Processor
```swift
class MultiStageProcessor: NSObject, VideoProcessor {
    private let processors: [VideoProcessor]

    init(processors: [VideoProcessor]) {
        self.processors = processors
    }

    func process(frame: VideoFrame) -> VideoFrame? {
        var currentFrame: VideoFrame? = frame
        for processor in processors {
            guard let frame = currentFrame else {
                // A processor dropped the frame; stop the chain
                return nil
            }
            currentFrame = processor.process(frame: frame)
        }
        return currentFrame
    }
}

// Use multiple processors (WatermarkProcessor and ColorGradingProcessor
// are custom processors you would implement yourself)
let multiProcessor = MultiStageProcessor(processors: [
    BackgroundBlurVideoProcessor(),
    WatermarkProcessor(),
    ColorGradingProcessor()
])
await track.set(videoProcessor: multiProcessor)
```
Frame Analysis (Non-Modifying)
```swift
import LiveKit
import Vision

class FaceDetectionProcessor: NSObject, VideoProcessor {
    func process(frame: VideoFrame) -> VideoFrame? {
        guard let pixelBuffer = frame.toCVPixelBuffer() else {
            return frame
        }

        // Analyze the frame asynchronously; don't block the processing queue
        Task.detached {
            let request = VNDetectFaceRectanglesRequest()
            try? VNImageRequestHandler(
                cvPixelBuffer: pixelBuffer
            ).perform([request])
            if let results = request.results {
                print("Detected \(results.count) faces")
            }
        }

        // Return the original frame unmodified
        return frame
    }
}
```
Debugging
Enable Signposts (iOS)
```swift
// Add to BackgroundBlurVideoProcessor for profiling
#if LK_SIGNPOSTS
import os.signpost

private let signpostLog = OSLog(
    subsystem: Bundle.main.bundleIdentifier ?? "",
    category: "VideoProcessor"
)

func process(frame: VideoFrame) -> VideoFrame? {
    os_signpost(.begin, log: signpostLog, name: "process")
    defer {
        os_signpost(.end, log: signpostLog, name: "process")
    }
    // Processing code
}
#endif
```
Measure Processing Time
```swift
class BenchmarkProcessor: NSObject, VideoProcessor {
    func process(frame: VideoFrame) -> VideoFrame? {
        let start = CFAbsoluteTimeGetCurrent()
        defer {
            let elapsed = CFAbsoluteTimeGetCurrent() - start
            print("Processing took \(elapsed * 1000) ms")
        }
        // Your processing code
        return processFrame(frame)
    }
}
```
Best Practices
- Keep processing lightweight: Aim for less than 16ms (60fps) or 33ms (30fps)
- Use hardware acceleration: Prefer Metal and CoreImage over CPU operations
- Handle errors gracefully: Return original frame if processing fails
- Cache expensive resources: Reuse contexts, filters, and buffers
- Test on target devices: Performance varies significantly across devices
```swift
// Check platform support at runtime before creating the processor
if #available(iOS 15.0, macOS 12.0, tvOS 15.0, visionOS 1.0, *) {
    let processor = BackgroundBlurVideoProcessor()
    // ...
}
```
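The "handle errors gracefully" guideline can be sketched as follows; applyEffect(_:on:) is a hypothetical throwing helper standing in for your own effect code, not part of the SDK:

```swift
import LiveKit

// Sketch of graceful fallback: never drop video because an effect failed.
class SafeProcessor: NSObject, VideoProcessor {
    func process(frame: VideoFrame) -> VideoFrame? {
        guard let pixelBuffer = frame.toCVPixelBuffer() else {
            return frame // Unsupported buffer type: pass through untouched
        }
        do {
            return try applyEffect(frame, on: pixelBuffer)
        } catch {
            return frame // Effect failed: fall back to the original frame
        }
    }

    // Hypothetical helper: apply your effect, throwing on failure
    private func applyEffect(_ frame: VideoFrame, on buffer: CVPixelBuffer) throws -> VideoFrame {
        // ... your effect here ...
        return frame
    }
}
```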
See Also
- LocalVideoTrack - Video track API
- Source code: Sources/LiveKit/VideoProcessors/