ONNX Runtime supports multiple platforms and programming languages. Choose your preferred language and platform below.

Python

ONNX Runtime for Python is available on PyPI for Windows, Linux, and macOS.
Install the CPU-only version:
pip install onnxruntime
For nightly builds:
pip install onnxruntime --pre

Requirements

  • Python 3.11 or later (3.11, 3.12, 3.13, 3.14 supported)
  • Compatible with Windows, Linux, and macOS

Optional Dependencies

# For symbolic shape inference
pip install onnxruntime[symbolic]

# For quantization utilities
pip install onnxruntime[quantization]

C/C++

ONNX Runtime provides pre-built C/C++ libraries for multiple platforms.
1. Download the Release Package

Download the appropriate package from GitHub Releases:
  • Windows: onnxruntime-win-x64-[version].zip
  • Linux: onnxruntime-linux-x64-[version].tgz
  • macOS: onnxruntime-osx-[arch]-[version].tgz
  • Mobile: onnxruntime-android-[version].aar, onnxruntime-ios-[version].xcframework
Extract the archive to get the include/ and lib/ directories.
2. Set Up Your Build System

Configure your CMakeLists.txt:
cmake_minimum_required(VERSION 3.28)
project(MyProject)

# Paths to the extracted ONNX Runtime package. CACHE lets the -D flags
# passed on the command line take precedence over these defaults.
set(ORT_HEADER_DIR "path/to/onnxruntime/include" CACHE PATH "ONNX Runtime include directory")
set(ORT_LIBRARY_DIR "path/to/onnxruntime/lib" CACHE PATH "ONNX Runtime library directory")

add_executable(myapp main.cpp)

# Link against ONNX Runtime
target_include_directories(myapp PRIVATE ${ORT_HEADER_DIR})
target_link_directories(myapp PRIVATE ${ORT_LIBRARY_DIR})
target_link_libraries(myapp onnxruntime)
Build your project:
cmake -S . -B build -DORT_HEADER_DIR=/path/to/include -DORT_LIBRARY_DIR=/path/to/lib
cmake --build build --config Release
3. Include the Header

In your C++ code:
#include "onnxruntime_cxx_api.h"
Make sure the ONNX Runtime shared library (.so, .dll, or .dylib) is in your system’s library path at runtime.

Build from Source

For advanced users who need custom builds:
# Clone the repository
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime

# Build (Linux/macOS)
./build.sh --config Release --build_shared_lib --parallel

# Build (Windows)
.\build.bat --config Release --build_shared_lib --parallel
See the build documentation for detailed instructions.

C#

ONNX Runtime is available as NuGet packages for .NET applications.
Install the managed and native packages:
# Managed API
dotnet add package Microsoft.ML.OnnxRuntime

# Or via NuGet Package Manager
Install-Package Microsoft.ML.OnnxRuntime
The native libraries are automatically included.

Requirements

  • .NET Framework 4.6.2+, .NET Core 3.1+, or .NET 5 and later
  • Supports Windows, Linux, macOS, iOS, and Android

Local Build

To build the NuGet managed package locally:
# Restore dependencies
msbuild -t:restore .\src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj

# Build
msbuild -t:build .\src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj

# Create package
msbuild .\OnnxRuntime.CSharp.proj -t:CreatePackage -p:Configuration=Release

Java

ONNX Runtime for Java is available on Maven Central.

Maven

Add to your pom.xml:
<dependency>
    <groupId>com.microsoft.onnxruntime</groupId>
    <artifactId>onnxruntime</artifactId>
    <version>1.25.0</version>
</dependency>
For GPU support:
<dependency>
    <groupId>com.microsoft.onnxruntime</groupId>
    <artifactId>onnxruntime_gpu</artifactId>
    <version>1.25.0</version>
</dependency>

Gradle

Add to your build.gradle:
dependencies {
    implementation 'com.microsoft.onnxruntime:onnxruntime:1.25.0'
}

Requirements

  • Java 8 or later (Java 11+ required for building)
  • Compatible with Windows, Linux, and macOS

Build from Source

To build the Java binding:
# From the repository root
./build.sh --build_java --config Release

# The JAR will be in build/[OS]/Release/java/build/libs/
See the Java API build instructions for more details.

JavaScript

ONNX Runtime provides packages for Node.js and web browsers.
Install for Node.js applications:
npm install onnxruntime-node
For development/nightly builds:
npm install onnxruntime-node@dev
The package includes pre-built binaries for:
  • Windows (x64, arm64)
  • Linux (x64, arm64)
  • macOS (x64, arm64)
CUDA binaries are automatically downloaded for Linux x64.

Skip CUDA Installation

To skip automatic CUDA EP installation:
npm install onnxruntime-node --onnxruntime-node-install=skip

Requirements

  • Node.js: v20.x or later recommended; v16+ supported
  • Electron: v28.x or later recommended; v15+ supported
  • Browsers: Modern browsers with WebAssembly support

Platform-Specific Notes

System Requirements (Windows)

  • Windows 10 version 1809 or later
  • Visual Studio 2019 or later (for building from source)
  • Windows SDK 10.0.17763.0 or later

GPU Support

  • CUDA: NVIDIA GPU with CUDA 12.x
  • DirectML: Windows 10 version 1903 or later, DirectX 12 capable GPU

Common Issues

Verify Installation

After installation, verify ONNX Runtime is working:
import onnxruntime as ort
print(f"ONNX Runtime version: {ort.__version__}")
print(f"Available providers: {ort.get_available_providers()}")

Next Steps

Now that you have ONNX Runtime installed, check out the Quickstart Guide to learn how to run inference with your first model.