Installation
Install the Retto CLI using Cargo.

Building from Source

Clone the repository and build it in release mode; the compiled binary is placed at target/release/retto-cli.
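A sketch of both installation paths. The crate name retto-cli and the repository URL are assumptions, so check the project's README for the real ones; the feature names are the ones documented on this page:

```shell
# Install from crates.io (crate name is an assumption)
cargo install retto-cli

# Optional features can be enabled at install time, e.g.:
cargo install retto-cli --features backend-ort-cuda,hf-hub

# Or build from source (repository URL is a placeholder)
git clone https://github.com/pk5ls20/retto.git
cd retto
cargo build --release
# The binary is produced at target/release/retto-cli
```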
Feature Flags
Enable optional features during installation with Cargo's --features flag.

Command-Line Options

The CLI accepts the following arguments.

Model Paths
- Path to the text detection model
- Path to the text classification model
- Path to the text recognition model
- Path to the character dictionary file
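A hedged invocation passing all four model paths. The flag names below are illustrative assumptions, not confirmed option names; run retto-cli --help for the real ones:

```shell
# Flag names are hypothetical; consult `retto-cli --help`
retto-cli \
  --det-model models/det.onnx \
  --cls-model models/cls.onnx \
  --rec-model models/rec.onnx \
  --dict models/dict.txt \
  image.png
```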
Input Options
Path to an image file or directory. When a directory is provided, all image files will be processed recursively.
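For example, with the invocation shape assumed:

```shell
# Process a single image
retto-cli photo.png

# Process every image under a directory, recursively
retto-cli photos/
```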
Device Selection
Execution device for inference:
- cpu: CPU-only execution
- cuda: NVIDIA GPU (requires the backend-ort-cuda feature)
- directml: DirectML (Windows; requires the backend-ort-directml feature)
GPU device ID (for CUDA/DirectML backends)
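A sketch of device selection; --device and --device-id are assumed flag names, not confirmed ones:

```shell
# CPU execution
retto-cli --device cpu image.png

# Second CUDA GPU (requires a build with the backend-ort-cuda feature)
retto-cli --device cuda --device-id 1 image.png
```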
HuggingFace Hub
Automatically download models from HuggingFace Hub (requires the hf-hub feature).

Basic Usage
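In its simplest form, assuming the binary name and argument shape:

```shell
retto-cli image.png
```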
Examples
Using Local Models
Specify custom model paths on the command line.

Using HuggingFace Hub
With the hf-hub feature enabled, models are downloaded automatically.
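A sketch of the HuggingFace Hub workflow; the crate name is an assumption, and the feature name is the one documented on this page:

```shell
# Build with the hf-hub feature
cargo install retto-cli --features hf-hub

# Run without specifying any model paths; the models are
# fetched from HuggingFace Hub and cached locally
retto-cli image.png
```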
On first run, models are downloaded from the pk5ls20/PaddleModel repository on HuggingFace Hub and cached locally.

GPU Acceleration
- CUDA
- DirectML (Windows)
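The two backends above can be exercised roughly like this; the feature names come from this page, while the crate name and the --device flag are assumptions:

```shell
# CUDA build and run
cargo install retto-cli --features backend-ort-cuda
retto-cli --device cuda image.png

# DirectML build and run (Windows only)
cargo install retto-cli --features backend-ort-directml
retto-cli --device directml image.png
```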
Batch Processing
Process multiple directories, or use shell globbing to select input files.

Understanding the Output
The CLI uses structured logging to surface debug information while it runs.

Implementation Details
At main.rs:71, the CLI initializes the inference session.
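The snippet itself is not reproduced here. As a purely hypothetical illustration (none of these type or function names are taken from retto's actual API), session setup in a CLI like this typically builds one session up front and reuses it for every input:

```rust
// Hypothetical sketch -- not retto's actual API.
struct Session {
    // Would hold the loaded detection, classification, and
    // recognition models plus the character dictionary.
}

impl Session {
    // Load the models once and bind them to the selected device.
    fn new(_det: &str, _cls: &str, _rec: &str, _dict: &str) -> Session {
        Session {}
    }
}

fn main() {
    // Roughly the shape of main.rs:71: one session, built up front,
    // then reused for every image that follows.
    let _session = Session::new("det.onnx", "cls.onnx", "rec.onnx", "dict.txt");
}
```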
Recursive directory traversal is handled by the walkdir crate (main.rs:72).
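A sketch of recursive image collection with walkdir (requires the walkdir crate as a dependency; the extension filter is an assumption, and the exact logic in main.rs may differ):

```rust
use std::path::PathBuf;
use walkdir::WalkDir;

/// Recursively collect image files under `root`.
/// (The extension list here is an assumption; retto may accept more formats.)
fn collect_images(root: &str) -> Vec<PathBuf> {
    WalkDir::new(root)
        .into_iter()
        .filter_map(Result::ok)               // skip unreadable entries
        .filter(|e| e.file_type().is_file())  // ignore directories
        .map(|e| e.into_path())
        .filter(|p| {
            matches!(
                p.extension().and_then(|s| s.to_str()),
                Some("png" | "jpg" | "jpeg" | "bmp")
            )
        })
        .collect()
}
```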
Performance Tips
- Use GPU backends for large batch processing
- Process directories instead of individual files to amortize initialization overhead
- Enable HuggingFace Hub to automatically use optimized models
- Adjust logging levels in production to reduce I/O overhead
Next Steps
Backends
Learn about GPU acceleration options
Model Loading
Understand model sources and configuration
