Execution Providers in C/C++

Execution Providers (EPs) enable ONNX Runtime to execute models on different hardware accelerators like GPUs, NPUs, and other specialized devices.

Available Providers
GetAvailableProviders
out_ptr: Array of provider name strings (must be freed with ReleaseAvailableProviders)
provider_length: Number of providers
Returns NULL on success; a non-NULL OrtStatus on failure.
Example:
CUDA Execution Provider
OrtCUDAProviderOptions
SessionOptionsAppendExecutionProvider_CUDA
options: Session options
cuda_options: CUDA provider configuration
CUDA Provider V2 (Advanced)
ROCm Execution Provider
OrtROCMProviderOptions
SessionOptionsAppendExecutionProvider_ROCM
TensorRT Execution Provider
OrtTensorRTProviderOptions
SessionOptionsAppendExecutionProvider_TensorRT
TensorRT Provider V2
OpenVINO Execution Provider
OrtOpenVINOProviderOptions
SessionOptionsAppendExecutionProvider_OpenVINO
MIGraphX Execution Provider
OrtMIGraphXProviderOptions
SessionOptionsAppendExecutionProvider_MIGraphX
Generic Provider Configuration
SessionOptionsAppendExecutionProvider
options: Session options
provider_name: Name of the provider (e.g., "XNNPACK")
provider_options_keys: Array of configuration keys
provider_options_values: Array of configuration values
num_keys: Number of key-value pairs
Device Management
SetCurrentGpuDeviceId
device_id: Device ID (must be less than total device count)
GetCurrentGpuDeviceId
Memory Arena Configuration
CreateArenaCfg
Deprecated: use CreateArenaCfgV2 instead.
CreateArenaCfgV2
"max_mem": Maximum memory (0 = let ORT decide)"arena_extend_strategy": 0=kNextPowerOfTwo, 1=kSameAsRequested (-1=default)"initial_chunk_size_bytes": First allocation size (-1=default)"max_dead_bytes_per_chunk": Threshold for chunk splitting (-1=default)"initial_growth_chunk_size_bytes": Second allocation size (-1=default)"max_power_of_two_extend_bytes": Max extension size for kNextPowerOfTwo (-1=default 1GB)
ReleaseArenaCfg
Custom Operators
RegisterCustomOpsLibrary_V2
options: Session options
library_path: Path to the shared library (.dll, .so, or .dylib)