This guide covers common issues you might encounter while using whisper.rn and their solutions.

Build Issues

Android: Unknown host CPU architecture on Apple Silicon Macs

This error typically occurs when building Android apps on Apple Silicon (M1/M2/M3) Macs with older NDK versions.
If you encounter the build error Unknown host CPU architecture: arm64 when building for Android on Apple Silicon Macs, the cleanest fix is to upgrade to an NDK release that supports Apple Silicon natively.
If you cannot change the NDK version, you can modify the ndk-build script so it runs under the x86_64 architecture. Edit ~/Library/Android/sdk/ndk/23.1.7779620/ndk-build:
#!/bin/sh
DIR="$(cd "$(dirname "$0")" && pwd)"
arch -x86_64 /bin/bash $DIR/build/ndk-build "$@"
This workaround runs the NDK build under Rosetta 2 translation, which may be slower than using a native ARM64-compatible NDK version.
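
Alternatively, you can pin an NDK release with native Apple Silicon support (r24 and later) in android/app/build.gradle. The version string below is only an example; use a version you have actually installed via the SDK manager:

```groovy
android {
    // NDK r24+ runs natively on Apple Silicon, so no Rosetta translation is needed.
    // Example version string - match an NDK installed on your machine.
    ndkVersion "24.0.8215888"
}
```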

iOS: Build fails with Core ML errors

If you encounter Core ML-related build errors on iOS:
  1. Disable Core ML if not needed. Add this to your Podfile before use_react_native!:
    ENV['RNWHISPER_DISABLE_COREML'] = '1'
    
  2. Clean build folder
    cd ios
    rm -rf Pods Podfile.lock build
    pod install
    
  3. Rebuild the app
    cd ..
    yarn ios
    

iOS: Xcode build errors after updating

Xcode sometimes caches old build artifacts. Clean them:
rm -rf ~/Library/Developer/Xcode/DerivedData
cd ios
rm -rf Pods Podfile.lock
pod install
Then rebuild your app.

Build from source instead of prebuilt framework

By default, whisper.rn uses a prebuilt rnwhisper.xcframework for iOS. To build from source instead, add this to your Podfile before use_react_native!:
ENV['RNWHISPER_BUILD_FROM_SOURCE'] = '1'
Then reinstall pods:
cd ios
rm -rf Pods Podfile.lock
pod install

Runtime Issues

App crashes with large models

Large models (medium, large) require significant memory and may crash on devices with limited RAM.
Solutions:
  1. Use quantized models - They use less memory with minimal accuracy loss. See Model Formats for details.
  2. iOS: Enable Extended Virtual Addressing - For large models on iOS, add this entitlement to your app. In Xcode: Signing & Capabilities > App Sandbox > Hardware > Extended Virtual Addressing. Or add it to your entitlements file:
    <key>com.apple.developer.kernel.extended-virtual-addressing</key>
    <true/>
    
  3. Reduce model size - Use a smaller model like tiny, base, or small instead of medium or large.

Transcription is slow

Make sure you’re testing in Release mode, not Debug mode:
# iOS
yarn ios --mode Release

# Android  
yarn android --mode release
Debug builds are significantly slower than release builds.
Check if GPU acceleration is active:
const context = await initWhisper({
  filePath: modelPath,
  useGpu: true, // Should be true (default)
})

console.log('GPU enabled:', context.gpu)
if (!context.gpu) {
  console.log('Reason:', context.reasonNoGPU)
}
iOS uses Metal acceleration by default if available.
The default thread count (2 for 4-core devices, 4 for more cores) is optimal for most devices, but you can experiment:
const result = await context.transcribe(audioPath, {
  maxThreads: 4, // Try different values
})
Avoid using all CPU cores, and avoid fewer than 2 threads; either typically degrades performance.
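The documented default can be written down as a one-liner (an illustration of the rule above, not whisper.rn's actual source):

```typescript
// Mirrors the documented default: 2 threads on 4-core devices, 4 on devices
// with more cores. Illustration only - whisper.rn picks this for you when
// maxThreads is not set.
function defaultThreadCount(coreCount: number): number {
  return coreCount > 4 ? 4 : 2
}
```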

Microphone permission issues

iOS: Add microphone usage description to ios/YourApp/Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for speech transcription</string>
Android:
  1. Add permission to android/app/src/main/AndroidManifest.xml:
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    
  2. Request permission at runtime:
    import { PermissionsAndroid, Platform } from 'react-native'
    
    if (Platform.OS === 'android') {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO
      )
      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
        console.error('Microphone permission denied')
        return
      }
    }
    

JSI binding errors

If you see errors like “JSI binding not installed”, ensure you’re calling initWhisper() or initWhisperVad() before using ArrayBuffer methods.
JSI bindings are automatically installed when you initialize a context:
// This installs JSI bindings
const whisperContext = await initWhisper({ filePath: modelPath })
const vadContext = await initWhisperVad({ filePath: vadModelPath })

// Now you can use ArrayBuffer methods
await whisperContext.transcribeData(arrayBuffer)
await vadContext.detectSpeechData(arrayBuffer)

Audio format errors

Whisper requires a specific audio format: 16kHz sample rate, mono (1 channel), 16-bit PCM.
If transcription fails or produces poor results, verify your audio format:
// Correct format for whisper
const audioConfig = {
  sampleRate: 16000,  // Must be 16kHz
  channels: 1,        // Must be mono
  bitsPerSample: 16,  // Must be 16-bit
}
For WAV files, you can use ffmpeg to convert:
ffmpeg -i input.wav -ar 16000 -ac 1 -sample_fmt s16 output.wav
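To sanity-check a WAV buffer before transcribing, a minimal header reader is enough. This sketch assumes the canonical 44-byte RIFF header with the fmt chunk immediately after the RIFF/WAVE preamble; checkWavFormat is a hypothetical helper, not a whisper.rn API:

```typescript
// Reads the fmt chunk of a canonical RIFF/WAVE buffer and verifies the
// fields whisper expects: 16 kHz sample rate, mono, 16-bit PCM.
// Assumes the standard 44-byte header layout (fmt chunk at offset 12).
function checkWavFormat(buf: ArrayBuffer): {
  sampleRate: number
  channels: number
  bitsPerSample: number
  ok: boolean
} {
  const view = new DataView(buf)
  const channels = view.getUint16(22, true)       // fmt: number of channels
  const sampleRate = view.getUint32(24, true)     // fmt: samples per second
  const bitsPerSample = view.getUint16(34, true)  // fmt: bits per sample
  return {
    sampleRate,
    channels,
    bitsPerSample,
    ok: sampleRate === 16000 && channels === 1 && bitsPerSample === 16,
  }
}
```

If ok is false, re-encode the file (for example with the ffmpeg command above) before passing it to transcription.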

Asset Loading Issues

Model file not found

For bundled assets:
  1. Add the model file extensions to your Metro config (metro.config.js), extending the default asset extensions rather than replacing them:
    const { getDefaultConfig, mergeConfig } = require('@react-native/metro-config')
    
    const defaultConfig = getDefaultConfig(__dirname)
    
    module.exports = mergeConfig(defaultConfig, {
      resolver: {
        assetExts: [...defaultConfig.resolver.assetExts, 'bin', 'mil'],
      },
    })
    
  2. Use require() to load bundled models:
    const context = await initWhisper({
      filePath: require('../assets/ggml-base.en.bin'),
    })
    
For file system paths:
import RNFS from 'react-native-fs'

const modelPath = `${RNFS.DocumentDirectoryPath}/ggml-base.en.bin`
const context = await initWhisper({ filePath: modelPath })

Large model files fail to bundle

The React Native packager has a file size limit of roughly 2 GB for bundled assets.
For large models:
  1. Download at runtime instead of bundling:
    import RNFS from 'react-native-fs'
    
    const modelUrl = 'https://example.com/ggml-large.bin'
    const modelPath = `${RNFS.DocumentDirectoryPath}/ggml-large.bin`
    
    await RNFS.downloadFile({
      fromUrl: modelUrl,
      toFile: modelPath,
    }).promise
    
    const context = await initWhisper({ filePath: modelPath })
    
  2. Use quantized models - They’re smaller and often faster on mobile.

Memory Issues

Memory leaks

Always release contexts when done to prevent memory leaks.
// Release individual contexts
await whisperContext.release()
await vadContext.release()

// Or release all contexts at once
import { releaseAllWhisper, releaseAllWhisperVad } from 'whisper.rn'

await releaseAllWhisper()
await releaseAllWhisperVad()

RealtimeTranscriber memory usage

Control memory usage with slice limits:
const transcriber = new RealtimeTranscriber(
  { whisperContext, vadContext, audioStream },
  {
    audioSliceSec: 30,        // Slice duration
    maxSlicesInMemory: 3,     // Keep only last 3 slices
  }
)
Monitor memory usage:
transcriber.callbacks.onStats = (stats) => {
  console.log('Memory usage:', stats.memoryUsage)
  console.log('Slices in memory:', stats.memoryUsage.slicesInMemory)
}
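
As a rough guide to what those limits cost, raw 16 kHz, 16-bit mono audio takes 32,000 bytes per second. A quick estimate (an illustration only, not the transcriber's internal accounting, which also includes model state):

```typescript
// Raw audio footprint of the retained slices: 16,000 samples/s x 2 bytes/sample.
const BYTES_PER_SECOND = 16000 * 2

function sliceMemoryBytes(audioSliceSec: number, maxSlicesInMemory: number): number {
  return BYTES_PER_SECOND * audioSliceSec * maxSlicesInMemory
}

// 30 s slices x 3 retained slices is about 2.9 MB of raw audio
```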

ProGuard Issues (Android)

Android ProGuard may strip necessary native code if not configured properly.
Add this rule to android/app/proguard-rules.pro:
-keep class com.rnwhisper.** { *; }

Getting Help

If your issue isn’t covered here:
  1. Check GitHub Issues for similar problems
  2. Review whisper.cpp documentation for core functionality
  3. Open a new issue with:
    • Platform (iOS/Android) and version
    • whisper.rn version
    • Model size and type
    • Minimal reproduction code
    • Error messages and logs
