Overview
React Native ExecuTorch is a bridge that brings Meta’s ExecuTorch runtime to React Native applications. It enables on-device AI inference by providing a JavaScript/TypeScript API layer on top of ExecuTorch’s native C++ runtime.
What is ExecuTorch?
ExecuTorch is PyTorch’s solution for running ML models on edge devices. It provides:
- Lightweight runtime: Optimized for mobile and embedded devices
- Edge-optimized models: Compiled .pte (PyTorch ExecuTorch) model files
- Efficient execution: Minimal memory footprint and fast inference
- Hardware acceleration: Support for device-specific backends (CPU, GPU, NPU)
Architecture Layers
React Native ExecuTorch is built in four main layers:
1. JavaScript/TypeScript Layer
The top layer provides a developer-friendly API with high-level module classes for specific AI tasks:
- LLMModule - Large Language Models
- ClassificationModule - Image classification
- ObjectDetectionModule - Object detection
- OCRModule - Optical character recognition
- ExecutorchModule - Generic model execution
- And many more…
2. JSI Bridge Layer
JavaScript Interface (JSI) enables direct synchronous communication between JavaScript and C++:
- Zero-copy data transfer: Native buffers shared between JS and C++
- Synchronous calls: No serialization overhead
- Worklet support: Frame processing on the VisionCamera thread
- Global functions: Direct access to native module loaders
~/workspace/source/packages/react-native-executorch/src/index.ts:36-92
3. Native C++ Layer
The native layer handles:
- Model loading: Loading .pte files into memory
- Input preprocessing: Image normalization, tensor creation
- Inference execution: Running forward passes through ExecuTorch
- Output postprocessing: Converting raw tensors to structured results
- Memory management: Resource cleanup and lifecycle management
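The preprocessing and postprocessing stages above can be sketched in plain TypeScript. This is an illustrative sketch only; the function names, tensor representation, and normalization scheme are assumptions, not the library’s actual internals:

```typescript
// Illustrative sketch of two native-layer pipeline stages (hypothetical
// names; the real work happens in C++ inside the library).
type Tensor = Float32Array;

// Input preprocessing: normalize 0-255 pixel values into the 0-1 range
// that many vision models expect.
function preprocess(pixels: Uint8Array): Tensor {
  const out = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    out[i] = pixels[i] / 255;
  }
  return out;
}

// Output postprocessing: convert a raw logits tensor into a structured
// result, here a single argmax class index.
function postprocess(logits: Tensor): number {
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return best;
}
```

The actual backends may apply per-channel mean/std normalization rather than a plain 0-1 scale, so treat this only as a picture of where each stage sits in the flow.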
4. ExecuTorch Runtime
Meta’s ExecuTorch runtime provides:
- Model execution engine
- Operator implementations
- Backend delegates (CPU, GPU, etc.)
- Memory allocators
Module System
All modules extend from BaseModule, which provides core functionality:
~/workspace/source/packages/react-native-executorch/src/modules/BaseModule.ts:12-105
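A minimal sketch of the inheritance pattern, assuming a load/delete lifecycle as the docs describe. The class and method bodies here are hypothetical stand-ins; the real implementation is in BaseModule.ts:

```typescript
// Hypothetical sketch of the module inheritance pattern. Only the
// load()/delete() lifecycle is taken from the docs; everything else
// is illustrative.
abstract class BaseModuleSketch {
  protected loaded = false;

  async load(modelSource: string): Promise<void> {
    // The real implementation resolves modelSource (e.g. a URL to a
    // .pte file) and hands it to the native C++ layer.
    this.loaded = true;
  }

  delete(): void {
    // Releases model memory held on the native heap.
    this.loaded = false;
  }
}

class ClassificationSketch extends BaseModuleSketch {
  async forward(input: Float32Array): Promise<number[]> {
    if (!this.loaded) throw new Error("Model not loaded");
    return []; // native inference would run here
  }
}
```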
Specialized Modules
Vision Modules extend VisionModule for image processing:
~/workspace/source/packages/react-native-executorch/src/modules/computer_vision/VisionModule.ts:32-143
LLM Module uses a controller pattern for conversation management:
~/workspace/source/packages/react-native-executorch/src/modules/natural_language_processing/LLMModule.ts:10-187
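The controller pattern mentioned above can be sketched as follows. ConversationControllerSketch and its methods are hypothetical illustrations of the pattern, not the actual LLMModule API:

```typescript
// Illustrative controller pattern for conversation management
// (hypothetical names; see LLMModule.ts for the real controller).
type Message = { role: "user" | "assistant"; content: string };

class ConversationControllerSketch {
  private history: Message[] = [];

  // The controller owns the conversation state; the generate callback
  // stands in for native token generation.
  sendMessage(
    content: string,
    generate: (history: Message[]) => string
  ): string {
    this.history.push({ role: "user", content });
    const reply = generate(this.history);
    this.history.push({ role: "assistant", content: reply });
    return reply;
  }

  getHistory(): Message[] {
    return [...this.history];
  }
}
```

The point of the pattern is that callers never mutate the message history directly; the controller keeps it consistent across turns.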
Data Flow
Here’s how data flows through a typical inference call:
Frame Processing Pipeline
For real-time vision applications with VisionCamera, the zero-copy path is recommended:
Initialization
The library must be initialized with a ResourceFetcher adapter:
~/workspace/source/packages/react-native-executorch/src/index.ts:94-117
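To make the adapter idea concrete, here is a hedged sketch of what a caching ResourceFetcher might look like. The interface shape and method names are assumptions, so check the actual contract in src/index.ts before relying on them:

```typescript
// Hypothetical ResourceFetcher shape: resolves a model URL to a local
// file path, reporting download progress along the way.
interface ResourceFetcherSketch {
  fetch(url: string, onProgress?: (fraction: number) => void): Promise<string>;
}

// Simple in-memory cache standing in for the app document directory,
// where the docs say downloads are cached.
const cache = new Map<string, string>();

const cachingFetcher: ResourceFetcherSketch = {
  async fetch(url, onProgress) {
    const hit = cache.get(url);
    if (hit) return hit; // second load of the same model is free
    onProgress?.(1); // a real adapter would report incremental progress
    const localPath = `/documents/models/${url.split("/").pop()}`;
    cache.set(url, localPath);
    return localPath;
  },
};
```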
Platform Support
React Native ExecuTorch supports:
- iOS: Native C++ integration via CocoaPods
- Android: Native C++ integration via Gradle
- Expo: Via development builds with config plugins
- Bare React Native: Direct native module linking
Performance Characteristics
Model Loading:
- One-time operation per model
- Downloads cached in app document directory
- Progress tracking available
Inference:
- Synchronous execution (blocks thread)
- Typically 10-500ms depending on model size
- Can run on background thread via JSI worklets
Memory:
- Models loaded into native heap
- Must call module.delete() to release
- Large models (1GB+) require careful lifecycle management
Next Steps
- Resource Fetching - Learn how models are downloaded and cached
- Model Loading - Understand the model loading process
- Error Handling - Handle errors gracefully in your app