This module provides core functionality for testing library initialization and detecting hardware acceleration support on different platforms.
testSherpaInit()
Test method to verify that the sherpa-onnx native library is loaded correctly.
```ts
function testSherpaInit(): Promise<string>
```
Returns
Resolves with a test message confirming the library is loaded.
Example
```ts
import { testSherpaInit } from 'react-native-sherpa-onnx';

const result = await testSherpaInit();
console.log(result); // "Sherpa ONNX initialized successfully"
```
getQnnSupport()
Check Qualcomm QNN (Qualcomm Neural Network) acceleration support on Android devices.
```ts
function getQnnSupport(modelBase64?: string): Promise<AccelerationSupport>
```
Parameters
modelBase64 (optional) — Base64-encoded model used for the session initialization test. If omitted, an embedded test model is used.
Returns
An AccelerationSupport object:
providerCompiled — whether the QNN provider is compiled into the library
hasAccelerator — whether a QNN-compatible hardware accelerator is available
canInit — whether a model session can be initialized using QNN
Android: Full support on Qualcomm devices
iOS: Returns all false
Example
```ts
import { getQnnSupport } from 'react-native-sherpa-onnx';

const qnnSupport = await getQnnSupport();
if (qnnSupport.canInit) {
  console.log('QNN acceleration available');
  // Use provider: 'qnn' in model options
}
```
getNnapiSupport()
Check NNAPI (Android Neural Networks API) acceleration support on Android devices.
```ts
function getNnapiSupport(modelBase64?: string): Promise<AccelerationSupport>
```
Parameters
modelBase64 (optional) — Base64-encoded model used for the session initialization test.
Returns
An AccelerationSupport object (see Types).
Android: Support varies by device and Android version
iOS: Returns all false
Example
```ts
import { getNnapiSupport } from 'react-native-sherpa-onnx';

const nnapiSupport = await getNnapiSupport();
if (nnapiSupport.canInit) {
  // Use provider: 'nnapi' in model options
}
```
getXnnpackSupport()
Check XNNPACK (CPU-optimized) acceleration support.
```ts
function getXnnpackSupport(modelBase64?: string): Promise<AccelerationSupport>
```
Parameters
modelBase64 (optional) — Base64-encoded model used for the session initialization test.
Returns
An AccelerationSupport object:
providerCompiled — whether the XNNPACK provider is compiled into the library
hasAccelerator — true whenever providerCompiled is true, since XNNPACK is CPU-optimized and needs no dedicated accelerator
canInit — whether a model session can be initialized using XNNPACK
Android: Full support (CPU-optimized inference)
iOS: Returns all false
Example
```ts
import { getXnnpackSupport } from 'react-native-sherpa-onnx';

const xnnpackSupport = await getXnnpackSupport();
if (xnnpackSupport.canInit) {
  // Use provider: 'xnnpack' in model options
}
```
getCoreMlSupport()
Check Core ML acceleration support on iOS devices with Apple Neural Engine.
```ts
function getCoreMlSupport(modelBase64?: string): Promise<AccelerationSupport>
```
Parameters
modelBase64 (optional) — Base64-encoded model used for the session initialization test.
Returns
An AccelerationSupport object:
providerCompiled — true on iOS 11+ (Core ML framework available)
hasAccelerator — true when the Apple Neural Engine is available (A11 and later chips)
canInit — whether a model session can be initialized using Core ML
iOS: Full support on iOS 11+ with Apple Neural Engine
Android: Returns all false
Example
```ts
import { getCoreMlSupport } from 'react-native-sherpa-onnx';

const coreMLSupport = await getCoreMlSupport();
if (coreMLSupport.hasAccelerator) {
  console.log('Apple Neural Engine available');
  // Use provider: 'coreml' in model options
}
```
getAvailableProviders()
Get the list of ONNX Runtime execution providers available on the current device.
```ts
function getAvailableProviders(): Promise<string[]>
```
Returns
Array of provider names (e.g., ["CPU", "NNAPI", "QNN", "XNNPACK"]).
Requires the ONNX Runtime Java bridge from the onnxruntime AAR.
Example
```ts
import { getAvailableProviders } from 'react-native-sherpa-onnx';

const providers = await getAvailableProviders();
console.log('Available providers:', providers);
// ["CPU", "XNNPACK", "QNN"] on Qualcomm Android
// ["CPU", "CoreML"] on iOS with Neural Engine
```
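The returned array can be matched against a preferred ordering to choose a provider. The helper below is a hypothetical sketch, not part of the library's API; it assumes provider names may differ in casing between the runtime list and model options, so it compares case-insensitively:

```typescript
/**
 * Return the first entry of `preferred` that appears in `available`
 * (case-insensitive), or 'CPU' as the always-present fallback.
 * Illustrative helper only — not exported by react-native-sherpa-onnx.
 */
function firstAvailable(available: string[], preferred: string[]): string {
  // Normalize the runtime-reported names once for lookup.
  const lower = new Set(available.map((p) => p.toLowerCase()));
  for (const want of preferred) {
    if (lower.has(want.toLowerCase())) {
      return want;
    }
  }
  return 'CPU';
}
```

A caller might pass the result of getAvailableProviders() as `available` and its own ranked list (e.g. `['qnn', 'nnapi', 'xnnpack']`) as `preferred`.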
Types
AccelerationSupport
Result type for hardware acceleration queries.
```ts
interface AccelerationSupport {
  /** Whether the provider is compiled into the library */
  providerCompiled: boolean;
  /** Whether a compatible hardware accelerator is available */
  hasAccelerator: boolean;
  /** Whether a model session can be initialized with this provider */
  canInit: boolean;
}
```
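Of the three flags, canInit is the strongest signal, since it confirms an actual session was created with the provider. As an illustration (not part of the library's API), the results of the support checks above can be fed into a small selection helper; the preference order here is an assumption, placing dedicated accelerators ahead of CPU-optimized XNNPACK:

```typescript
// Hypothetical preference order — adjust for your models and devices.
const PREFERENCE = ['qnn', 'coreml', 'nnapi', 'xnnpack'] as const;

/**
 * Pick the first preferred provider whose canInit check passed,
 * falling back to plain CPU execution, which is always available.
 */
function pickProvider(
  results: Record<string, { canInit: boolean } | undefined>
): string {
  for (const name of PREFERENCE) {
    if (results[name]?.canInit) {
      return name;
    }
  }
  return 'cpu';
}
```

At startup you might collect `await getQnnSupport()`, `await getCoreMlSupport()`, and so on into a map keyed by provider name, then pass the returned string as the provider model option.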
See Also