## Overview

Execution provider functions help you detect and use hardware acceleration on different platforms. All `get*Support` functions return a unified `AccelerationSupport` object.
## AccelerationSupport Type
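The declaration is not shown inline here; below is a minimal sketch of the shape. The field names (`providerCompiled`, `hasAccelerator`, `canInit`) are taken from the iOS fallback value documented for `getQnnSupport`; the exact TypeScript declaration in the SDK may differ.

```typescript
// Sketch of the unified result object returned by the get*Support functions.
interface AccelerationSupport {
  providerCompiled: boolean; // provider is compiled into ONNX Runtime
  hasAccelerator: boolean;   // matching hardware is present on this device
  canInit: boolean;          // an ONNX Runtime session can initialize with it
}

// Example: the all-false value returned on an unsupported platform.
const unsupported: AccelerationSupport = {
  providerCompiled: false,
  hasAccelerator: false,
  canInit: false,
};
```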
## getQnnSupport

Check QNN (Qualcomm Neural Network SDK) support on Android devices with a Qualcomm NPU/DSP.

### Parameters
- Optional base64-encoded test model used for the session-initialization test. If omitted, the SDK uses an embedded test model.
### Returns

An `AccelerationSupport` object:

- `providerCompiled`: whether the QNN execution provider is compiled into ONNX Runtime
- `hasAccelerator`: whether Qualcomm NPU/DSP hardware is present
- `canInit`: whether an ONNX Runtime session can initialize with the QNN provider
Android only. Returns `{ providerCompiled: false, hasAccelerator: false, canInit: false }` on iOS.

## getNnapiSupport
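A typical pattern is to probe QNN and fall back to CPU when any check fails. The sketch below is self-contained: `checkQnn` is a stub standing in for the real asynchronous `getQnnSupport` call (the SDK's actual import path is not shown in this document), and the all-three-checks rule is an illustrative policy, not an SDK requirement.

```typescript
interface AccelerationSupport {
  providerCompiled: boolean;
  hasAccelerator: boolean;
  canInit: boolean;
}

// Pick an execution provider from a QNN probe result. All three checks
// must pass before requesting QNN; otherwise fall back to CPU.
function chooseProvider(qnn: AccelerationSupport): "QNN" | "CPU" {
  return qnn.providerCompiled && qnn.hasAccelerator && qnn.canInit
    ? "QNN"
    : "CPU";
}

// Stub standing in for the real getQnnSupport() native call.
async function checkQnn(): Promise<AccelerationSupport> {
  return { providerCompiled: true, hasAccelerator: true, canInit: true };
}

async function main(): Promise<void> {
  const provider = chooseProvider(await checkQnn());
  console.log(provider); // logs "QNN" with the stubbed result above
}
main();
```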
Check NNAPI (Android Neural Networks API) support on Android devices.

### Parameters
- Optional base64-encoded test model used for the session-initialization test. If omitted, the SDK uses an embedded test model.
### Returns

An `AccelerationSupport` object:

- `providerCompiled`: whether the NNAPI execution provider is compiled into ONNX Runtime
- `hasAccelerator`: whether NNAPI-compatible hardware is present
- `canInit`: whether an ONNX Runtime session can initialize with the NNAPI provider
Android only. Returns all fields as `false` on iOS. NNAPI is available on Android 8.1+ (API 27+).

## getXnnpackSupport
Check XNNPACK (CPU-optimized inference) support. XNNPACK provides optimized operators for ARM and x86 CPUs.

### Parameters
- Optional base64-encoded test model used for the session-initialization test. If omitted, the SDK uses an embedded test model.
### Returns

An `AccelerationSupport` object:

- `providerCompiled`: whether the XNNPACK execution provider is compiled into ONNX Runtime
- `hasAccelerator`: same as `providerCompiled` for XNNPACK (CPU-optimized execution counts as "acceleration")
- `canInit`: whether an ONNX Runtime session can initialize with the XNNPACK provider
Android only. Returns all fields as `false` on iOS. XNNPACK provides CPU optimizations without requiring dedicated hardware accelerators.

## getCoreMlSupport
Check Core ML support on iOS devices. Core ML can leverage the Apple Neural Engine (ANE) on supported devices.

### Parameters
- Optional base64-encoded test model used for the session-initialization test. If omitted, the SDK uses an embedded test model.
### Returns

An `AccelerationSupport` object:

- `providerCompiled`: whether the Core ML execution provider is compiled into ONNX Runtime (`true` on iOS 11+)
- `hasAccelerator`: whether the Apple Neural Engine is present (A11 Bionic or newer)
- `canInit`: whether an ONNX Runtime session can initialize with the Core ML provider
iOS only. Returns all fields as `false` on Android. Core ML is available on iOS 11+, with ANE support on A11+ chips (iPhone 8/X and newer).

## getAvailableProviders
Return the list of ONNX Runtime execution providers available on the current device.

### Returns
Array of provider names. Common values:

- `"CPU"`: always available
- `"NNAPI"`: Android Neural Networks API
- `"QNN"`: Qualcomm Neural Network SDK
- `"XNNPACK"`: CPU-optimized inference
- `"CoreML"`: Apple Core ML (iOS)
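Since `getAvailableProviders` only reports which providers are compiled in, callers typically rank the returned names by preference before creating a session. A minimal sketch; the preference order below is an illustrative assumption, not a documented SDK default:

```typescript
// Rank available providers, preferring hardware-accelerated ones over CPU.
// This order is an illustrative choice, not an SDK recommendation.
const PREFERENCE = ["QNN", "CoreML", "NNAPI", "XNNPACK", "CPU"];

function rankProviders(available: string[]): string[] {
  // Keep only providers that are actually available, in preference order.
  return PREFERENCE.filter((p) => available.includes(p));
}

// e.g. on an Android device without a Qualcomm NPU:
rankProviders(["CPU", "NNAPI", "XNNPACK"]); // → ["NNAPI", "XNNPACK", "CPU"]
```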
## Platform Support Matrix
| Provider | Android | iOS | Hardware Required |
|---|---|---|---|
| CPU | ✅ | ✅ | None (always available) |
| NNAPI | ✅ | ❌ | Android 8.1+ (API 27+) |
| QNN | ✅ | ❌ | Qualcomm NPU/DSP |
| XNNPACK | ✅ | ❌ | None (CPU optimization) |
| Core ML | ❌ | ✅ | iOS 11+ (ANE on A11+) |
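The matrix can also be mirrored in code as a quick sanity check on probe results. A sketch, with the table's rows encoded as a lookup (the helper name and shape here are my own, not part of the SDK):

```typescript
// Platform availability per the support matrix above.
const PROVIDER_PLATFORMS: Record<string, { android: boolean; ios: boolean }> = {
  CPU:     { android: true,  ios: true  },
  NNAPI:   { android: true,  ios: false },
  QNN:     { android: true,  ios: false },
  XNNPACK: { android: true,  ios: false },
  CoreML:  { android: false, ios: true  },
};

// True when a provider can possibly be available on the given OS.
function isExpectedOn(provider: string, os: "android" | "ios"): boolean {
  return PROVIDER_PLATFORMS[provider]?.[os] ?? false;
}

isExpectedOn("CoreML", "ios");    // → true
isExpectedOn("NNAPI", "ios");     // → false
```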