Installation Issues
Yarn Plug'n'Play (PnP) installation fails
Problem: Postinstall scripts fail with Yarn v3+ when using Plug'n'Play.

Solution: Configure Yarn to use the node-modules linker in `.yarnrc.yml`, or set the equivalent environment variable during install.
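For example, both ways of switching the linker (these are standard Yarn options, not specific to this library):

```shell
# Option 1: persist the setting in .yarnrc.yml
yarn config set nodeLinker node-modules

# Option 2: set it for a single install without changing config
YARN_NODE_LINKER=node-modules yarn install
```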
iOS: Framework or headers not found
Problem: Build fails with "sherpa_onnx.xcframework not found" or similar errors.

Solution:

- Ensure you've run `pod install` in the `ios` directory.
- The XCFramework is downloaded automatically during `pod install`. If the download fails:
  - Check your internet connection
  - Check the version tag in `third_party/sherpa-onnx-prebuilt/IOS_RELEASE_TAG`
  - Manually download it from GitHub Releases
- Clean and rebuild.
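The steps above as shell commands (`pod deintegrate` is an optional extra for stubborn caches; the workspace and scheme names are placeholders for your app's):

```shell
cd ios
pod install

# If the XCFramework download failed or pods are stale, clean and retry:
pod deintegrate
pod install

# Clean the build via Xcode (Product > Clean Build Folder) or the CLI:
xcodebuild clean -workspace YourApp.xcworkspace -scheme YourApp
```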
Android: Gradle build fails
Problem: Android build fails with native library errors.

Solution:

- Clean the build.
- Clear the React Native cache.
- Rebuild.
- Check that you're using the minimum required versions:
  - Android API 24+ (Android 7.0+)
  - React Native >= 0.70
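The clean/rebuild steps above as shell commands (standard React Native tooling):

```shell
# 1. Clean the Android build
cd android && ./gradlew clean && cd ..

# 2. Clear the React Native (Metro) cache
npx react-native start --reset-cache

# 3. Rebuild on a connected device or emulator
npx react-native run-android
```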
Model Issues
Error: "Model directory does not exist"
Problem: Initialization fails because the model path cannot be found.

Causes and solutions:

For bundled assets:

- Check the asset path (e.g., `models/your-folder`)
- Android: Verify the files are in `android/app/src/main/assets/models/`
- iOS: Verify the folder is added as a folder reference (blue folder) in Xcode under "Copy Bundle Resources"
- Rebuild the app after adding models

For Play Asset Delivery (PAD):

- Ensure the app was installed with the asset pack
- Check if PAD is available

For file-system paths:

- Ensure the path is absolute
- Verify the folder exists on disk
- Check file permissions
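A hedged sketch of the PAD check, using the `getAssetPackPath()` helper referenced later in this guide; the import path and the null-when-missing behavior are assumptions, so verify against the API reference:

```typescript
import { getAssetPackPath } from 'react-native-sherpa-onnx';

async function resolveModelRoot(): Promise<string | null> {
  // Assumption: returns the asset-pack path, or null/undefined when the
  // pack was not delivered with this install.
  const packPath = await getAssetPackPath();
  if (!packPath) {
    console.warn('Asset pack unavailable; was the app installed from an AAB with the pack included?');
    return null;
  }
  return packPath;
}
```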
Error: "Cannot auto-detect model type"
Problem: Model type auto-detection fails.

Solution:

- Verify the model folder contains the required files for at least one model type:
  - Whisper: `encoder.onnx`, `decoder.onnx`, `tokens.txt`
  - VITS: `model.onnx`, `tokens.txt`
  - Paraformer: `model.onnx`
  - See Supported Models for complete requirements
- Note that file names are case-sensitive.
- Try specifying the model type explicitly.
- Use the detection API to debug.
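When auto-detection fails, specifying the type explicitly can unblock you. A hedged sketch: `createSTT` appears elsewhere in this guide, but the exact option names here are assumptions, so check the API reference:

```typescript
import { createSTT } from 'react-native-sherpa-onnx';

// Assumption: the config accepts an explicit model-type field that bypasses auto-detection.
const stt = await createSTT({
  modelPath: 'models/whisper-tiny', // folder with encoder.onnx, decoder.onnx, tokens.txt
  modelType: 'whisper',
});
```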
Models list is empty or missing models
Problem: `listAssetModels()` or `listModelsAtPath()` returns an empty or incomplete list.

Solution:

For bundled assets:

- Verify models are in the correct location:
  - Android: `android/app/src/main/assets/models/`
  - iOS: Added as a folder reference in Xcode
- Rebuild after adding models

For Play Asset Delivery:

- Confirm `getAssetPackPath()` returns a valid path
- Check that the asset pack's `models/` directory contains model folders
- Install via an AAB with the asset pack included

For file-system paths:

- Pass the directory that directly contains the model folders
- Use `recursive: true` only if you have nested folders
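A hedged sketch of listing models with the `listModelsAtPath()` function and `recursive` option named above; whether `recursive` is passed in an options object, and the example paths, are assumptions:

```typescript
import { listModelsAtPath } from 'react-native-sherpa-onnx';

// Pass the directory that directly contains model folders.
const models = await listModelsAtPath('/data/user/0/com.example/files/models');

// Only scan recursively when model folders are nested deeper.
const nested = await listModelsAtPath('/data/user/0/com.example/files', { recursive: true });
console.log(models, nested);
```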
Model is for unsupported hardware
Problem: An error indicates the model requires specific hardware (RK35xx, Ascend, etc.).

Solution: These models are built for specific NPU hardware that is not supported in React Native:

- Rockchip RK3588
- Huawei Ascend/CANN
- OM-format models

Instead:

- Download standard ONNX models from the sherpa-onnx model repository
- Use QNN models for Qualcomm devices (requires QNN runtime libs)
Audio Format Issues
Error: "Invalid audio format" or transcription fails
Problem: An audio file cannot be transcribed or produces poor results.

Solution: STT expects WAV files with a specific format:

- Sample rate: 16 kHz
- Channels: mono (1 channel)
- Bit depth: 16-bit PCM

In your app, check the audio format before transcription.
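One way to check the format in your app is to read the WAV header directly. A sketch, assuming a minimal/canonical WAV layout (fmt chunk at byte 12, no extra chunks before it):

```typescript
// Verify a WAV buffer matches the format STT expects (16 kHz, mono, 16-bit PCM)
// by reading the canonical RIFF/fmt header fields.
function checkWavFormat(bytes: Uint8Array): {
  ok: boolean;
  sampleRate: number;
  channels: number;
  bitsPerSample: number;
} {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const channels = view.getUint16(22, true); // RIFF fields are little-endian
  const sampleRate = view.getUint32(24, true);
  const bitsPerSample = view.getUint16(34, true);
  return {
    ok: sampleRate === 16000 && channels === 1 && bitsPerSample === 16,
    sampleRate,
    channels,
    bitsPerSample,
  };
}
```

If `ok` is false, resample/convert the audio before passing it to the recognizer.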
TTS audio playback issues
Problem: Generated speech doesn't play or sounds wrong.

Solution:

- Use the correct sample rate returned by TTS.
- Convert `Float32Array` to `Int16Array` if needed.
- Check that your audio player library supports the format.
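The float-to-int16 conversion mentioned above can be sketched as follows, assuming the TTS output is float samples in the range [-1, 1]:

```typescript
// Convert float PCM samples (range [-1, 1]) to 16-bit signed PCM.
function floatToInt16(samples: Float32Array): Int16Array {
  const out = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // Clamp first so out-of-range samples saturate instead of wrapping.
    const s = Math.max(-1, Math.min(1, samples[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}
```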
Execution Provider Issues
QNN (Qualcomm NPU) not available
Problem: `getQnnSupport()` returns `canInit: false`.

Solutions:

- Add the QNN runtime libraries (the most common issue):
  - Download the Qualcomm AI Runtime
  - Copy the `.so` files to `android/app/src/main/jniLibs/arm64-v8a/`:
    - `libQnnHtp.so`
    - `libQnnHtpV*Stub.so` (version-specific)
    - `libQnnSystem.so`
    - And others (see Execution Providers)
  - Rebuild the app
- Check device compatibility:
  - QNN requires a Qualcomm Snapdragon SoC
  - Not all Snapdragon devices support HTP/NPU
- Fall back to other providers.
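A hedged sketch of the fallback: `getQnnSupport()` and `canInit` come from this guide, while the provider identifier strings are assumptions to be checked against the API reference:

```typescript
import { getQnnSupport } from 'react-native-sherpa-onnx';

const qnn = await getQnnSupport();
// Prefer the Qualcomm NPU when usable; otherwise use the CPU-optimized XNNPACK provider.
const provider = qnn.canInit ? 'qnn' : 'xnnpack';
```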
NNAPI fails or is slow
Problem: The NNAPI provider doesn't work or performs worse than CPU.

Understanding NNAPI behavior: on some devices NNAPI delegates work back to the CPU or adds overhead, so worse-than-CPU performance is not necessarily a bug.

Solutions:

- Check if an accelerator is available.
- Test performance:
  - NNAPI on some devices may be slower than the CPU EP
  - Benchmark both and choose the faster one
- Use XNNPACK instead.
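A small timing helper for the benchmarking step above; wrap the same transcription call once per provider and keep whichever is faster on the device at hand:

```typescript
// Time an async call, e.g. a transcription, for provider comparison.
async function timeIt<T>(label: string, fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  const ms = Date.now() - start;
  console.log(`${label}: ${ms} ms`);
  return { result, ms };
}
```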
Core ML not working on iOS
Problem: Core ML execution provider issues on iOS.

Check ANE availability before assuming hardware acceleration.

Notes:

- Core ML is available on iOS 11+
- The Apple Neural Engine (ANE) requires iOS 15+ and an A12+ chip
- The Simulator doesn't have an ANE
- Core ML falls back to CPU/GPU automatically
Performance Issues
Slow transcription or generation
Problem: STT/TTS is too slow for your use case.

Optimization strategies:

- Use hardware acceleration.
- Use smaller or quantized models:
  - Whisper Tiny instead of Small/Base
  - Int8-quantized models (automatic when `preferInt8: true`)
- Use faster model types:
  - STT: Paraformer or Zipformer (faster than Whisper)
  - TTS: VITS is generally fast
- For streaming, tune the buffer sizes.
- Profile with different providers.
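A hedged sketch combining these strategies: `preferInt8` appears above, while the other option names and the model folder are assumptions to verify against the API reference:

```typescript
import { createSTT } from 'react-native-sherpa-onnx';

const stt = await createSTT({
  modelPath: 'models/paraformer-small', // hypothetical folder name
  preferInt8: true,                     // prefer int8-quantized weights when present
  provider: 'xnnpack',                  // assumed provider identifier
});
```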
High memory usage
Problem: The app crashes or becomes sluggish due to memory usage.

Solutions:

- Always call `.destroy()` when you're done with an instance.
- Don't create multiple instances unnecessarily.
- Use smaller models.
- Monitor memory usage in development.
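The `.destroy()` discipline above can be sketched with a try/finally so native memory is released even when generation throws; `createTTS` and `.destroy()` are named in this guide, the config fields are assumptions:

```typescript
import { createTTS } from 'react-native-sherpa-onnx';

const tts = await createTTS({ modelPath: 'models/vits-en' }); // config fields are assumptions
try {
  // ... generate speech ...
} finally {
  tts.destroy(); // release native resources deterministically
}
```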
Streaming Issues
Streaming STT: No partial results
Problem: Streaming recognition doesn't produce partial results.

Solution:

- Call `getResult()` regularly.
- Check endpoint detection.
- Ensure the model supports streaming:
  - Use `transducer`, `paraformer`, or `nemo_ctc`
  - Whisper does NOT support true streaming
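A hedged sketch of polling for partial results: `getResult()` is named above, but the stream handle and its shape are assumptions:

```typescript
// `stream` stands in for a streaming-recognizer handle from this library.
declare const stream: { getResult(): string };

const timer = setInterval(() => {
  const partial = stream.getResult();
  console.log('partial:', partial);
}, 250); // poll a few times per second while audio is being fed

// Later, when recording stops:
clearInterval(timer);
```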
Streaming TTS: Audio glitches
Problem: Streamed TTS audio has gaps or glitches.

Solution:

- Buffer chunks before playing.
- Use an appropriate audio player:
  - Some audio libraries don't support streaming well
  - Try `react-native-track-player` or platform-specific APIs
- Increase chunk sizes (model-dependent).
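The buffering step above can be sketched as a small accumulator that only releases audio once enough samples have arrived, which smooths over per-chunk gaps:

```typescript
// Accumulate streamed PCM chunks and release them in larger merged blocks.
class ChunkBuffer {
  private chunks: Float32Array[] = [];
  private length = 0;

  constructor(private minSamples: number) {}

  // Returns a merged buffer once minSamples is reached, otherwise null.
  push(chunk: Float32Array): Float32Array | null {
    this.chunks.push(chunk);
    this.length += chunk.length;
    if (this.length < this.minSamples) return null; // keep buffering
    const merged = new Float32Array(this.length);
    let offset = 0;
    for (const c of this.chunks) {
      merged.set(c, offset);
      offset += c.length;
    }
    this.chunks = [];
    this.length = 0;
    return merged; // hand this to the audio player
  }
}
```

Size `minSamples` from the TTS sample rate, e.g. half a second of audio at 16 kHz is 8000 samples.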
Runtime Errors
Error: "TurboModuleRegistry not found"
Problem: The native module is not linked properly.

Solution:

- Clear the cache and rebuild.
- Verify your React Native version:
  - Minimum: React Native >= 0.70
  - TurboModules are required
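A typical clear-and-rebuild sequence (standard React Native tooling; the watchman step only applies if you use watchman):

```shell
# Remove stale caches and reinstall dependencies
watchman watch-del-all
rm -rf node_modules
yarn install                  # or: npm install
npx react-native start --reset-cache

# Then rebuild the native app
npx react-native run-android  # or: npx react-native run-ios
```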
App crashes on initialization
Problem: The app crashes when calling `createSTT()` or `createTTS()`.

Debugging steps:

- Check the native logs.
- Verify the model files:
  - All required files are present
  - Files are not corrupted
  - The model type is correct
- Try a known-good model:
  - Download a tested model from the examples
  - Verify your app works with that model first
- Check device compatibility:
  - Android API 24+ (Android 7.0+)
  - iOS 13.0+
- Enable debug logging (if available in future versions).
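Commands for the native-log step above (standard Android/iOS tooling; the iOS process name is a placeholder for your app's):

```shell
# Android: watch React Native logs while reproducing the crash
adb logcat *:S ReactNative:V ReactNativeJS:V

# Or capture everything at error level and above:
adb logcat *:E

# iOS: use the Xcode console, or stream simulator logs:
xcrun simctl spawn booted log stream --predicate 'process == "YourApp"'
```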
Getting Help
If you're still stuck after trying these solutions:

- GitHub Issues: search existing issues or create a new one
- Examples: check the working code examples
- API Reference: review the complete API documentation
- Migration Guide: upgrade from 0.2.x to 0.3.0
When Reporting Issues
Please include:

- Environment:
  - React Native version
  - react-native-sherpa-onnx version
  - Platform (iOS/Android) and OS version
  - Device model
- Code snippet:
  - Minimal reproducible example
  - How you're initializing and using the library
- Model information:
  - Model type
  - Model source/download link
  - File size and structure
- Logs:
  - JavaScript errors
  - Native logs (adb logcat or the Xcode console)
- Steps to reproduce:
  - Exact steps to trigger the issue
  - Expected vs. actual behavior