Overview
Initializes a Whisper context by loading a GGML model file. The context is required for all transcription operations.

Parameters
filePath
Path to the GGML model file or a require() asset. Supported formats:
- Absolute file path: '/path/to/ggml-model.bin'
- Asset require: require('../assets/ggml-tiny.en.bin')
- file:// URI (will be automatically normalized)
coreMLModelAsset
Core ML model assets for iOS encoder acceleration (iOS 15.0+, tvOS 15.0+).
isBundleAsset
Set to true if the file path is a bundled asset (when using string paths instead of require()).

useCoreMLIos
Enable Core ML acceleration for the encoder on iOS. Set to false to disable it even if Core ML model files exist. Core ML models must be co-located with the GGML model file.
useGpu
Enable Metal GPU acceleration on iOS/tvOS. When enabled, the Core ML option is ignored.
useFlashAttn
Enable the Flash Attention optimization. Recommended only when a GPU is available.
Returns
A WhisperContext instance for performing transcription operations.
Example Usage
Basic Initialization
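A minimal sketch of initializing from a bundled asset. The asset path is illustrative; `initWhisper` is the function this section documents.

```typescript
import { initWhisper } from 'whisper.rn'

// The asset path is illustrative; bundle the model with your app's assets
async function createContext() {
  const context = await initWhisper({
    filePath: require('../assets/ggml-tiny.en.bin'),
  })
  return context
}
```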
Download and Initialize
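A sketch of downloading the model to the app's documents directory and then initializing from the plain file path. This assumes react-native-fs is installed; the URL and filename are illustrative.

```typescript
import RNFS from 'react-native-fs'
import { initWhisper } from 'whisper.rn'

// URL and destination path are illustrative
const modelUrl = 'https://example.com/ggml-tiny.en.bin'
const modelPath = `${RNFS.DocumentDirectoryPath}/ggml-tiny.en.bin`

async function downloadAndInit() {
  // Download once; skip if the file already exists on disk
  if (!(await RNFS.exists(modelPath))) {
    await RNFS.downloadFile({ fromUrl: modelUrl, toFile: modelPath }).promise
  }
  // A plain string path to a file on disk, so isBundleAsset is not needed
  return initWhisper({ filePath: modelPath })
}
```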
With Core ML (iOS)
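A sketch of supplying Core ML encoder assets alongside the GGML model. The `.mlmodelc` file layout and asset paths below are assumptions based on the typical compiled Core ML directory structure; adjust them to match your model.

```typescript
import { Platform } from 'react-native'
import { initWhisper } from 'whisper.rn'

// Asset paths are illustrative; the .mlmodelc contents must match your model
async function createContextWithCoreML() {
  return initWhisper({
    filePath: require('../assets/ggml-tiny.en.bin'),
    coreMLModelAsset:
      Platform.OS === 'ios'
        ? {
            filename: 'ggml-tiny.en-encoder.mlmodelc',
            assets: [
              require('../assets/ggml-tiny.en-encoder.mlmodelc/weights/weight.bin'),
              require('../assets/ggml-tiny.en-encoder.mlmodelc/model.mil'),
              require('../assets/ggml-tiny.en-encoder.mlmodelc/coremldata.bin'),
            ],
          }
        : undefined,
  })
}
```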
Error Handling
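A sketch of catching initialization failures (for example, a wrong path or a file that is not a valid GGML model). The helper name is illustrative.

```typescript
import { initWhisper } from 'whisper.rn'

async function safeInit(filePath: string) {
  try {
    return await initWhisper({ filePath })
  } catch (e) {
    // Initialization rejects if the model cannot be loaded
    console.error('Failed to initialize Whisper context:', e)
    return null
  }
}
```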
Platform-Specific Notes
iOS
- For medium or large models, enable the Extended Virtual Addressing entitlement.
- The pre-built rnwhisper.xcframework is used by default. Set RNWHISPER_BUILD_FROM_SOURCE=1 in the Podfile to build from source.
Android
- Add the proguard rule: -keep class com.rnwhisper.** { *; }
- NDK version 24.0.8215888 or newer is recommended for Apple Silicon Macs
Metro Configuration
When using require() for model files, update metro.config.js:
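A sketch of the change, assuming the default @react-native/metro-config setup; the extra asset extensions (`bin` for GGML models, `mil` for Core ML assets) are assumptions based on the file types used above.

```javascript
// metro.config.js
const { getDefaultConfig } = require('@react-native/metro-config')

const config = getDefaultConfig(__dirname)
// Let Metro bundle model files referenced via require() as assets
config.resolver.assetExts.push('bin', 'mil')

module.exports = config
```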
Performance Tips
- Use quantized models (q8, q5) for better mobile performance
- Enable GPU acceleration with useGpu: true (default)
- Test in Release mode for accurate performance measurement
- Choose a model size appropriate for your use case:
  - tiny/tiny.en: Fastest, less accurate
  - base/base.en: Good balance
  - small: Better accuracy
  - medium/large: Highest accuracy, requires more resources
See Also
- WhisperContext Methods - Available transcription methods
- releaseAllWhisper() - Release all contexts