Installation
Install the ONNX export dependencies.

Basic export

ONNX export is supported for both model types:

- Object detection
- Image segmentation

The exported model is written to the output directory by default.
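A minimal sketch of the default export. The RFDETRBase class and the rfdetr import path are assumptions based on the rfdetr package; substitute the variant you actually trained (for example, a segmentation model for mask export).

```python
# Sketch: default ONNX export. Names are assumptions from the rfdetr package.
def export_default():
    from rfdetr import RFDETRBase  # assumed import path

    model = RFDETRBase()  # loads the pretrained detection checkpoint
    model.export()        # traces the model and writes inference_model.onnx


if __name__ == "__main__":
    export_default()
```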
Export parameters

export() accepts the following parameters:

- Directory where the exported ONNX model will be saved.
- Path to an image file to use for tracing. If not provided, a random dummy image is generated.
- Simplification flag: deprecated and ignored. ONNX simplification is no longer run by export().
- Whether to export only the backbone feature extractor instead of the full model.
- ONNX opset version to use for export. Higher versions support more operations.
- Whether to print verbose export information.
- A second deprecated flag, also ignored.
- Input shape as a (height, width) tuple. Both dimensions must be divisible by patch_size × num_windows (varies by model variant; typically 14, 16, 24, or 32). If not provided, the model's default square resolution is used.
- Static batch size to bake into the exported ONNX graph.
- Dynamic batch flag: when True, exports with a dynamic batch dimension so the ONNX model accepts variable batch sizes at runtime instead of a fixed static size.
- Backbone patch size used for shape-divisibility validation. Defaults to the model's configured patch_size. When provided, it must match the instantiated model's patch size exactly.

Output files
After export, you will find one of the following files in your output directory:

- inference_model.onnx: the exported ONNX model
- backbone_model.onnx: exported instead when backbone_only=True
Advanced export examples
Custom output directory
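A sketch of exporting to a custom directory. The output_dir keyword name is an assumption; match it to the parameter list above and your installed rfdetr version.

```python
# Sketch: export to a custom directory. `output_dir` kwarg name is assumed.
def export_to(output_dir):
    from rfdetr import RFDETRBase  # assumed import path

    RFDETRBase().export(output_dir=output_dir)


if __name__ == "__main__":
    export_to("exports/my_model")  # hypothetical path
```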
Custom input resolution
Export with a specific input resolution. Both dimensions must be divisible by patch_size × num_windows for the target model (check the model's config for the exact value).
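A sketch that checks the divisibility rule before exporting. The shape keyword name and the illustrative patch_size/num_windows defaults are assumptions; read the real values from your model's config.

```python
# Sketch: validate a custom resolution, then export with it.
def shape_is_valid(height, width, patch_size=14, num_windows=4):
    # Both dimensions must be divisible by patch_size * num_windows.
    # Defaults here are illustrative; use your model's configured values.
    stride = patch_size * num_windows
    return height % stride == 0 and width % stride == 0


if __name__ == "__main__":
    from rfdetr import RFDETRBase  # assumed import path

    assert shape_is_valid(728, 728)        # 728 = 14 * 4 * 13
    RFDETRBase().export(shape=(728, 728))  # `shape` kwarg name is assumed
```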
Backbone only
Export only the backbone feature extractor for use in custom pipelines:
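A sketch of the call, using the backbone_only flag described in the parameter list above (the rfdetr import path is an assumption):

```python
# Sketch: backbone-only export; writes backbone_model.onnx instead of the
# full inference_model.onnx.
def export_backbone():
    from rfdetr import RFDETRBase  # assumed import path

    RFDETRBase().export(backbone_only=True)


if __name__ == "__main__":
    export_backbone()
```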
Convert to TensorRT

If you want lower latency on NVIDIA GPUs, convert the exported ONNX model to a TensorRT engine using the Python API.

Prerequisites:

- TensorRT installed, with trtexec available in your PATH
- An exported ONNX model (for example, output/inference_model.onnx)
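As a stand-in sketch, the conversion can be driven from Python by invoking trtexec (already required above). The --onnx, --saveEngine, and --fp16 flags are standard trtexec options; the fp16 choice is illustrative.

```python
import shutil
import subprocess


def build_trtexec_cmd(onnx_path, engine_path, fp16=True):
    # Assemble a trtexec command line from standard trtexec flags.
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # illustrative precision choice
    return cmd


if __name__ == "__main__":
    assert shutil.which("trtexec"), "trtexec must be on PATH (see prerequisites)"
    cmd = build_trtexec_cmd(
        "output/inference_model.onnx", "output/inference_model.engine"
    )
    subprocess.run(cmd, check=True)
```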
The conversion writes the engine to output/inference_model.engine. If profile=True, it also writes an Nsight Systems report (.nsys-rep).
ONNX Runtime inference
Once exported, run inference with ONNX Runtime:
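A minimal sketch, assuming the exported graph takes a normalized NCHW float32 image (ImageNet statistics assumed for RF-DETR preprocessing) and a static input shape. Input and output names are queried from the session rather than guessed.

```python
import numpy as np


def preprocess(image_hwc_uint8):
    # Scale to [0, 1], normalize with ImageNet statistics (an assumption),
    # and reorder HWC -> NCHW with a leading batch dimension.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[None]


if __name__ == "__main__":
    import onnxruntime as ort  # imported here so the helper stays dependency-light

    sess = ort.InferenceSession(
        "output/inference_model.onnx", providers=["CPUExecutionProvider"]
    )
    inp = sess.get_inputs()[0]                  # query name/shape instead of guessing
    height, width = inp.shape[2], inp.shape[3]  # assumes a static exported shape
    image = np.zeros((height, width, 3), dtype=np.uint8)  # replace with a real image
    outputs = sess.run(None, {inp.name: preprocess(image)})
    for meta, out in zip(sess.get_outputs(), outputs):
        print(meta.name, out.shape)
```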
Next steps

Deploy to Roboflow
Deploy your fine-tuned model to Roboflow for cloud inference.
Training overview
Learn how to fine-tune RF-DETR on your own dataset.