Installation
Retto can be used in three different ways depending on your needs. This guide covers installation for each platform.
Prerequisites
Before installing Retto, ensure you have:
- Rust 1.75+ - Required for building from source
- ONNX Runtime - Automatically handled by the `ort` crate
- Operating System - Linux, macOS, or Windows
For GPU acceleration, you’ll need additional setup for CUDA (NVIDIA) or DirectML (Windows).
Rust Library (retto-core)
The core library provides OCR functionality for Rust applications.
Add to Cargo.toml
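A minimal dependency entry, assuming the crate is published as `retto-core` (the version below is a placeholder; check the latest release):

```toml
[dependencies]
retto-core = "0.1"
```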
Feature Flags
| Feature | Description |
|---|---|
| `serde` | Enable JSON serialization/deserialization |
| `backend-ort` | Basic ONNX Runtime backend (CPU) |
| `backend-ort-cuda` | CUDA execution provider for NVIDIA GPUs |
| `backend-ort-directml` | DirectML execution provider for Windows |
| `backend-ort-wasm` | WebAssembly backend |
| `hf-hub` | Automatic model download from Hugging Face |
| `download-models` | Embed models in the binary |
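Features from the table combine as usual in Cargo.toml. For example, to enable the CUDA backend plus automatic model download (crate name and version are assumptions):

```toml
[dependencies]
retto-core = { version = "0.1", features = ["backend-ort-cuda", "hf-hub", "serde"] }
```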
Verify Installation
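A quick check, assuming the dependency was added to an existing project:

```shell
# Fails if the dependency cannot be resolved or compiled
cargo check
```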
Command-Line Tool (retto-cli)
The CLI tool is perfect for batch processing images from the command line.
Build from Source
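A typical sequence (the repository URL is a placeholder; substitute the actual Retto repository):

```shell
git clone https://github.com/<owner>/retto.git
cd retto
cargo build --release -p retto-cli
```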
Build with GPU Support
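A sketch, assuming the CLI exposes the same feature flags as the core library:

```shell
# NVIDIA GPUs
cargo build --release -p retto-cli --features backend-ort-cuda
# Windows with DirectML
cargo build --release -p retto-cli --features backend-ort-directml
```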
Install Globally
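One way to do this with `cargo install` (the crate path within the repository is an assumption):

```shell
# Run from the repository checkout
cargo install --path ./retto-cli
```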
This installs the `retto-cli` binary to your Cargo bin directory (usually `~/.cargo/bin`).
Verify Installation
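A quick smoke test (assuming the installed binary is named `retto-cli`):

```shell
retto-cli --help
```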
Obtain Models
The CLI requires ONNX model files. You have two options:
- Hugging Face Hub (Recommended)
- Manual Download
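For the manual route, the `huggingface_hub` CLI can fetch the repository named below (the local directory is an example):

```shell
pip install huggingface_hub
huggingface-cli download pk5ls20/PaddleModel --local-dir ./models
```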
If you built with the `hf-hub` feature, models will be automatically downloaded on first run. Models are downloaded from the pk5ls20/PaddleModel repository on Hugging Face.
WebAssembly Package (retto-wasm)
The WebAssembly package enables OCR directly in web browsers.
Install from npm
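Using the package name listed under Package Details:

```shell
npm install @nekoimageland/retto-wasm
```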
Package Details
- Package Name: `@nekoimageland/retto-wasm`
- Current Version: 0.1.5
- Bundle Size: Varies depending on whether models are embedded
TypeScript Support
The package includes TypeScript definitions out of the box.
Build from Source (Optional)
To build the WebAssembly package yourself:
GPU Acceleration Setup
CUDA (NVIDIA GPUs)
For CUDA support, install:
- NVIDIA Driver - Latest version recommended
- CUDA Toolkit - Version 11.x or 12.x
- cuDNN - A version compatible with your CUDA installation
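Quick sanity checks for the three prerequisites above (standard NVIDIA tooling; the cuDNN check is Linux-specific):

```shell
nvidia-smi                 # driver is loaded
nvcc --version             # CUDA Toolkit is installed
ldconfig -p | grep cudnn   # cuDNN libraries are registered
```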
DirectML (Windows)
DirectML support is built into Windows 10/11. Ensure you have:
- Windows 10 version 1903 or later
- Updated graphics drivers
Troubleshooting
Error: Cannot find ONNX Runtime
The `ort` crate should automatically download ONNX Runtime. If it fails:
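One common workaround with the `ort` crate is to build with its `load-dynamic` feature and point it at a locally installed ONNX Runtime library (the path below is an example; adjust for your system):

```shell
# Tell ort where to find an existing libonnxruntime (requires the load-dynamic feature)
export ORT_DYLIB_PATH=/usr/local/lib/libonnxruntime.so
```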
CUDA build fails
Ensure CUDA toolkit is in your PATH:
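For example, on Linux with the toolkit in its default location (adjust the paths for your install):

```shell
# Make the CUDA compiler and libraries visible to the build
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH:-}"
command -v nvcc || echo "nvcc not found - check the CUDA Toolkit install"
```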
WebAssembly module fails to load
Ensure your web server serves `.wasm` files with the correct MIME type (`application/wasm`). For development servers, this is usually configured automatically.
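As one example, an nginx snippet for the mapping (most servers ship this by default):

```nginx
# Serve .wasm with the registered MIME type (add inside the http block)
types {
    application/wasm wasm;
}
```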
Models not downloading from Hugging Face
Check your network connection and try setting a mirror:
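A sketch, assuming the hf-hub client honors the `HF_ENDPOINT` environment variable (hf-mirror.com is one public mirror):

```shell
# Route Hugging Face downloads through a mirror
export HF_ENDPOINT=https://hf-mirror.com
```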
Next Steps
Now that Retto is installed, learn how to use it:
Quickstart Guide
Run your first OCR with complete examples for Rust, CLI, and WebAssembly
