ONNX Runtime supports multiple platforms and programming languages. Choose your preferred language and platform below.
Python
ONNX Runtime for Python is available on PyPI for Windows, Linux, and macOS.
Install the CPU-only version: pip install onnxruntime
For nightly builds: pip install onnxruntime --pre
Install the GPU version with CUDA support: pip install onnxruntime-gpu
CUDA 12.x is required. The package automatically installs CUDA dependencies when you install with the [cuda] extra: pip install onnxruntime-gpu[cuda]
For cuDNN support: pip install onnxruntime-gpu[cuda,cudnn]
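Once the GPU package is installed, a session picks an execution provider from a priority list and falls back to the CPU. A minimal sketch (guarded so it also runs where onnxruntime is not installed; "model.onnx" is a placeholder path):

```python
import importlib.util

# Standard ONNX Runtime provider identifiers, in priority order.
preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]

if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime as ort
    # Keep only the providers this particular build actually supports.
    providers = [p for p in preferred if p in ort.get_available_providers()]
    # session = ort.InferenceSession("model.onnx", providers=providers)
else:
    providers = []

print("providers:", providers)
```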
Additional execution provider packages:
# OpenVINO
pip install onnxruntime-openvino
# DirectML (Windows)
pip install onnxruntime-directml
# TensorRT
pip install onnxruntime-gpu # TensorRT EP included
# ROCm (AMD GPU)
pip install onnxruntime-rocm
Requirements
Python 3.11 or later (3.11, 3.12, 3.13, 3.14 supported)
Compatible with Windows, Linux, and macOS
Optional Dependencies
# For symbolic shape inference
pip install onnxruntime[symbolic]
# For quantization utilities
pip install onnxruntime[quantization]
C/C++
ONNX Runtime provides pre-built C/C++ libraries for multiple platforms.
Download the Release Package
Download the appropriate package from GitHub Releases:
Windows: onnxruntime-win-x64-[version].zip
Linux: onnxruntime-linux-x64-[version].tgz
macOS: onnxruntime-osx-[arch]-[version].tgz
Mobile: onnxruntime-android-[version].aar, onnxruntime-ios-[version].xcframework
Extract the archive to get the include/ and lib/ directories.
Set Up Your Build System
Configure your CMakeLists.txt:
cmake_minimum_required(VERSION 3.28)
project(MyProject)

# Paths to ONNX Runtime (CACHE so the -D flags on the command line take effect)
set(ORT_HEADER_DIR "path/to/onnxruntime/include" CACHE PATH "ONNX Runtime include dir")
set(ORT_LIBRARY_DIR "path/to/onnxruntime/lib" CACHE PATH "ONNX Runtime lib dir")

# Link against ONNX Runtime
include_directories(${ORT_HEADER_DIR})
link_directories(${ORT_LIBRARY_DIR})

add_executable(myapp main.cpp)
target_link_libraries(myapp onnxruntime)
Build your project: cmake -S . -B build -DORT_HEADER_DIR=/path/to/include -DORT_LIBRARY_DIR=/path/to/lib
cmake --build build --config Release
Compile and link manually:
# Linux/macOS
g++ -std=c++17 main.cpp -I/path/to/onnxruntime/include \
    -L/path/to/onnxruntime/lib -lonnxruntime -o myapp

# Windows (MSVC, from a Developer Command Prompt; line continuation is ^)
cl.exe main.cpp /I"path\to\onnxruntime\include" ^
    /link /LIBPATH:"path\to\onnxruntime\lib" onnxruntime.lib
Include the Header
In your C++ code: #include "onnxruntime_cxx_api.h"
Make sure the ONNX Runtime shared library (.so, .dll, or .dylib) is in your system’s library path at runtime.
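As a minimal sketch of the C++ API (assumes the include and library paths above are configured; "model.onnx" is a placeholder path, and on Windows the model path must be a wide string):

```cpp
#include <iostream>
#include "onnxruntime_cxx_api.h"

int main() {
    // The environment holds process-wide state (logging, threading) for all sessions.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions options;
    // Load a model; "model.onnx" is a placeholder path.
    Ort::Session session(env, "model.onnx", options);
    std::cout << "model loaded\n";
    return 0;
}
```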
Build from Source
For advanced users who need custom builds:
# Clone the repository
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
# Build (Linux/macOS)
./build.sh --config Release --build_shared_lib --parallel
# Build (Windows)
.\build.bat --config Release --build_shared_lib --parallel
See the build documentation for detailed instructions.
C#
ONNX Runtime is available as NuGet packages for .NET applications.
Install the managed and native packages:
# Managed API
dotnet add package Microsoft.ML.OnnxRuntime
# Or via NuGet Package Manager
Install-Package Microsoft.ML.OnnxRuntime
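Equivalently, the reference can go straight into your project file (the version shown matches the one used in the Java section of this guide; adjust to the release you want):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.ML.OnnxRuntime" Version="1.25.0" />
</ItemGroup>
```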
The native libraries are automatically included. For GPU support: dotnet add package Microsoft.ML.OnnxRuntime.Gpu
For DirectML execution provider: dotnet add package Microsoft.ML.OnnxRuntime.DirectML
For Xamarin and MAUI applications:
# iOS
dotnet add package Microsoft.ML.OnnxRuntime
# Android
dotnet add package Microsoft.ML.OnnxRuntime
The package includes targets for iOS, Android, and other mobile platforms.
Requirements
.NET 6.0 or later
Compatible with .NET Framework 4.6.2+, .NET Core 3.1+, .NET 5+
Supports Windows, Linux, macOS, iOS, and Android
Local Build
To build the NuGet managed package locally:
# Restore dependencies
msbuild -t:restore . \\ src \\ Microsoft.ML.OnnxRuntime \\ Microsoft.ML.OnnxRuntime.csproj
# Build
msbuild -t:build . \\ src \\ Microsoft.ML.OnnxRuntime \\ Microsoft.ML.OnnxRuntime.csproj
# Create package
msbuild . \\ OnnxRuntime.CSharp.proj -t:CreatePackage -p:Configuration=Release
# Or, on Linux/macOS, build everything including the NuGet package
./build.sh --config Release --build_nuget
The .nupkg file will be in build/Release.
Java
ONNX Runtime for Java is available on Maven Central.
Maven
Add to your pom.xml:
<dependency>
  <groupId>com.microsoft.onnxruntime</groupId>
  <artifactId>onnxruntime</artifactId>
  <version>1.25.0</version>
</dependency>
For GPU support:
<dependency>
  <groupId>com.microsoft.onnxruntime</groupId>
  <artifactId>onnxruntime_gpu</artifactId>
  <version>1.25.0</version>
</dependency>
Gradle
Add to your build.gradle:
dependencies {
implementation 'com.microsoft.onnxruntime:onnxruntime:1.25.0'
}
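To confirm the dependency is actually wired onto your classpath before writing inference code, a small stdlib-only check can help (it looks up ai.onnxruntime.OrtEnvironment, the entry point of the Java API, via reflection so it compiles even when the jar is absent):

```java
public class OrtCheck {
    public static void main(String[] args) {
        System.out.println("onnxruntime on classpath: " + isPresent());
    }

    // Reflection avoids a compile-time dependency on the onnxruntime jar.
    static boolean isPresent() {
        try {
            Class.forName("ai.onnxruntime.OrtEnvironment");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```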
Requirements
Java 8 or later (Java 11+ required for building)
Compatible with Windows, Linux, and macOS
Build from Source
To build the Java binding:
# From the repository root
./build.sh --build_java --config Release
# The JAR will be in build/[OS]/Release/java/build/libs/
See the Java API build instructions for more details.
JavaScript
ONNX Runtime provides packages for Node.js and web browsers.
Install for Node.js applications: npm install onnxruntime-node
For development/nightly builds: npm install onnxruntime-node@dev
The package includes pre-built binaries for:
Windows (x64, arm64)
Linux (x64, arm64)
macOS (x64, arm64)
CUDA binaries are automatically downloaded for Linux x64.
Skip CUDA Installation
To skip automatic CUDA EP installation: npm install onnxruntime-node --onnxruntime-node-install=skip
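A quick way to confirm the package resolved correctly is a guarded require (guarded so the snippet also runs where the package is absent):

```javascript
// Try to load the native binding; require() throws if it is not installed.
let ortAvailable = true;
try {
  require("onnxruntime-node");
} catch (err) {
  ortAvailable = false;
}
console.log("onnxruntime-node available:", ortAvailable);
```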
Install for browser applications: npm install onnxruntime-web
WebGPU support is included in the default onnxruntime-web package; no separate install is needed.
Bundle Options
ONNX Runtime Web provides multiple bundle files for different use cases:

| Bundle            | Size   | WebGL | WASM | WebGPU |
|-------------------|--------|-------|------|--------|
| ort.all.min.js    | 682 KB | ✓     | ✓    | ✓      |
| ort.min.js        | 434 KB | ✓     | ✓    | ✗      |
| ort.webgl.min.js  | 411 KB | ✓     | ✗    | ✗      |
| ort.webgpu.min.js | 293 KB | ✗     | ✓    | ✓      |
| ort.wasm.min.js   | 46 KB  | ✗     | ✓    | ✗      |
CDN Usage
You can also use ONNX Runtime Web from a CDN:
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
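A minimal page sketch using the CDN bundle ("model.onnx" is a placeholder for a model served next to the page; ort is the global the bundle exposes):

```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
<script>
  // Create a session from the model and list its input names.
  ort.InferenceSession.create("model.onnx")
    .then((session) => console.log("inputs:", session.inputNames))
    .catch((err) => console.error(err));
</script>
```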
Install for React Native applications: npm install onnxruntime-react-native
React Native requires additional setup for iOS and Android. See the React Native documentation for platform-specific configuration.
Requirements
Node.js: v20.x recommended (v16+ supported)
Electron: v28.x recommended (v15+ supported)
Browsers: modern browsers with WebAssembly support
Windows
System Requirements
Windows 10 version 1809 or later
Visual Studio 2019 or later (for building from source)
Windows SDK 10.0.17763.0 or later
GPU Support
CUDA: NVIDIA GPU with CUDA 12.x
DirectML: Windows 10 version 1903 or later, DirectX 12 capable GPU
Linux
System Requirements
Ubuntu 20.04+ or equivalent
glibc 2.28 or later
GCC 9+ or Clang 10+ (for building from source)
GPU Support
CUDA: NVIDIA GPU with CUDA 12.x and cuDNN 9.x
ROCm: AMD GPU with ROCm 5.4+
Install Dependencies
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y libgomp1
macOS
System Requirements
macOS 10.15 (Catalina) or later
Xcode 12.0+ (for building from source)
Apple Silicon
Native ARM64 builds are available for Apple Silicon (M1/M2/M3) Macs.
CoreML Support
The CoreML execution provider is available on both Intel and Apple Silicon Macs:
# Python
import onnxruntime as ort
session = ort.InferenceSession("model.onnx",
                               providers=["CoreMLExecutionProvider"])
Mobile
Android
Android API Level 24 (Android 7.0) or higher
NDK r23c or later
Gradle 7.0+
Available via:
Maven: com.microsoft.onnxruntime:onnxruntime-android
AAR files from GitHub releases
iOS
iOS 12.0 or later
Xcode 14.0+
CocoaPods 1.12+
Available via:
CocoaPods: pod 'onnxruntime-c'
XCFramework from GitHub releases
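In a Podfile, the CocoaPods option looks like this (the target name "MyApp" is a placeholder for your app target):

```ruby
platform :ios, '12.0'

target 'MyApp' do
  # Pull in the ONNX Runtime C/C++ pod.
  pod 'onnxruntime-c'
end
```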
Verify Installation
After installation, verify ONNX Runtime is working:
Python:
import onnxruntime as ort

print(f"ONNX Runtime version: {ort.__version__}")
print(f"Available providers: {ort.get_available_providers()}")
Next Steps
Now that you have ONNX Runtime installed, check out the Quickstart Guide to learn how to run inference with your first model.