Overview

Execution provider functions help you detect and utilize hardware acceleration on different platforms. All get*Support functions return a unified AccelerationSupport object.
AccelerationSupport Type
type AccelerationSupport = {
  providerCompiled: boolean;  // Whether the provider is compiled into ORT
  hasAccelerator: boolean;    // Whether hardware accelerator is present
  canInit: boolean;           // Whether a session can initialize with this provider
};

getQnnSupport

Check QNN (Qualcomm Neural Network SDK) support on Android devices with Qualcomm NPU/DSP.
function getQnnSupport(
  modelBase64?: string
): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded test model used for the session-initialization check. If omitted, the SDK uses an embedded test model.

Returns

providerCompiled
boolean
Whether QNN execution provider is compiled into ONNX Runtime
hasAccelerator
boolean
Whether Qualcomm NPU/DSP hardware is present
canInit
boolean
Whether an ONNX Runtime session can initialize with QNN provider
Android only. Returns { providerCompiled: false, hasAccelerator: false, canInit: false } on iOS.
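A minimal sketch of the usual decision pattern: call getQnnSupport and only select QNN when every field of the returned AccelerationSupport passes. The getQnnSupport stub below is a stand-in so the logic runs standalone; in a real app you would import it from the SDK.

```typescript
// AccelerationSupport shape as documented above.
type AccelerationSupport = {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
};

// A provider is only worth selecting when every check passes.
function isUsable(s: AccelerationSupport): boolean {
  return s.providerCompiled && s.hasAccelerator && s.canInit;
}

// Stub standing in for the SDK's native getQnnSupport() call
// (assumption: the real call resolves to the same shape).
async function getQnnSupport(): Promise<AccelerationSupport> {
  return { providerCompiled: true, hasAccelerator: false, canInit: false };
}

// Pick "qnn" when usable, otherwise fall back to "cpu".
async function chooseProvider(): Promise<"qnn" | "cpu"> {
  const support = await getQnnSupport();
  return isUsable(support) ? "qnn" : "cpu";
}
```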

getNnapiSupport

Check NNAPI (Android Neural Networks API) support on Android devices.
function getNnapiSupport(
  modelBase64?: string
): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded test model used for the session-initialization check. If omitted, the SDK uses an embedded test model.

Returns

providerCompiled
boolean
Whether NNAPI execution provider is compiled into ONNX Runtime
hasAccelerator
boolean
Whether NNAPI-compatible hardware is present
canInit
boolean
Whether an ONNX Runtime session can initialize with NNAPI provider
Android only. Returns all false on iOS. NNAPI is available on Android 8.1+ (API 27+).

getXnnpackSupport

Check XNNPACK (CPU-optimized inference) support. XNNPACK provides optimized operators for ARM and x86 CPUs.
function getXnnpackSupport(
  modelBase64?: string
): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded test model used for the session-initialization check. If omitted, the SDK uses an embedded test model.

Returns

providerCompiled
boolean
Whether XNNPACK execution provider is compiled into ONNX Runtime
hasAccelerator
boolean
Same as providerCompiled for XNNPACK (CPU-optimized execution counts as “acceleration”)
canInit
boolean
Whether an ONNX Runtime session can initialize with XNNPACK provider
Android only. Returns all false on iOS. XNNPACK provides CPU optimizations without requiring dedicated hardware accelerators.

getCoreMlSupport

Check Core ML support on iOS devices. Core ML can leverage Apple Neural Engine (ANE) on supported devices.
function getCoreMlSupport(
  modelBase64?: string
): Promise<AccelerationSupport>

Parameters

modelBase64
string
Optional base64-encoded test model used for the session-initialization check. If omitted, the SDK uses an embedded test model.

Returns

providerCompiled
boolean
Whether Core ML execution provider is compiled into ONNX Runtime (true on iOS 11+)
hasAccelerator
boolean
Whether Apple Neural Engine is present (A11 Bionic or newer)
canInit
boolean
Whether an ONNX Runtime session can initialize with Core ML provider
iOS only. Returns all false on Android. Core ML is available on iOS 11+, with ANE support on A11+ chips (iPhone 8/X and newer).
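Because Core ML can also execute on CPU/GPU, canInit may be true even when hasAccelerator is false (no ANE). A small sketch of how the three fields can be interpreted together; the helper name and return strings are illustrative, not part of the SDK:

```typescript
type AccelerationSupport = {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
};

// Interpret a getCoreMlSupport() result. Assumption (consistent with the
// docs above): Core ML may still initialize without an ANE, so canInit
// can be true while hasAccelerator is false.
function describeCoreMl(s: AccelerationSupport): string {
  if (!s.providerCompiled) return "Core ML not compiled into this ORT build";
  if (s.canInit && s.hasAccelerator) return "Core ML with Neural Engine";
  if (s.canInit) return "Core ML on CPU/GPU (no ANE)";
  return "Core ML unavailable";
}
```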

getAvailableProviders

Return the list of available ONNX Runtime execution providers on the current device.
function getAvailableProviders(): Promise<string[]>

Returns

providers
string[]
Array of provider names. Common values:
  • "CPU" - Always available
  • "NNAPI" - Android Neural Networks API
  • "QNN" - Qualcomm Neural Network SDK
  • "XNNPACK" - CPU-optimized inference
  • "CoreML" - Apple Core ML (iOS)
Requires the ORT Java bridge (libonnxruntime4j_jni.so + OrtEnvironment class) from the onnxruntime AAR on Android. On iOS, requires Core ML support in the ONNX Runtime build.
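A common way to consume the returned array is to pick the first match from a preference order. The priority list below is an illustrative assumption, not an SDK recommendation; the provider names are the documented values:

```typescript
// Preferred providers, fastest first (illustrative ordering).
const PREFERENCE = ["QNN", "CoreML", "NNAPI", "XNNPACK", "CPU"];

// Pick the highest-priority provider present in the list returned by
// getAvailableProviders(). "CPU" is documented as always available,
// so it is the final fallback.
function pickPreferred(available: string[]): string {
  for (const name of PREFERENCE) {
    if (available.includes(name)) return name;
  }
  return "CPU";
}
```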

Platform Support Matrix

| Provider | Android | iOS | Hardware Required |
|----------|---------|-----|-------------------|
| CPU      | ✓       | ✓   | None (always available) |
| NNAPI    | ✓       | ✗   | Android 8.1+ (API 27+) |
| QNN      | ✓       | ✗   | Qualcomm NPU/DSP |
| XNNPACK  | ✓       | ✗   | None (CPU optimization) |
| Core ML  | ✗       | ✓   | iOS 11+ (ANE on A11+) |
Always check provider support before specifying a provider in initialization. Fall back to 'cpu' if the desired provider is not available.
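The check-then-fall-back advice above can be sketched as a single helper that probes a list of support checkers in priority order. The checker entries are stubbed here; in a real app they would be the documented getQnnSupport, getNnapiSupport, etc.

```typescript
type AccelerationSupport = {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
};
type Checker = () => Promise<AccelerationSupport>;

// Probe checkers in priority order; return the first provider whose
// session-initialization test passed, else fall back to "cpu".
async function firstUsableProvider(
  checkers: Array<[string, Checker]>
): Promise<string> {
  for (const [name, check] of checkers) {
    const s = await check();
    // canInit is the decisive field: it means a session actually
    // initialized with this provider during the support check.
    if (s.canInit) return name;
  }
  return "cpu";
}
```

Usage would look like `firstUsableProvider([["qnn", getQnnSupport], ["nnapi", getNnapiSupport]])`, with the array ordered from most to least preferred.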