On-device AI for React Native
Run LLMs, computer vision, and speech models directly on mobile devices with a declarative React Native API, powered by ExecuTorch.
// Run Llama 3.2 on device
const llm = useLLM({ model: LLAMA3_2_1B });
await llm.generate(chat);
Quick start
Get up and running in minutes with our step-by-step guide
Install the package
Add React Native ExecuTorch to your project with your preferred package manager.
Add resource fetcher
Install the appropriate resource fetcher for your setup (Expo or bare React Native).
Explore capabilities
Choose the AI capability that fits your use case
Large Language Models
Run Llama and Qwen models on-device with tool calling, chat history, and context strategies
Computer Vision
Image classification, object detection, segmentation, OCR, style transfer, and embeddings
Speech Processing
Speech-to-text, text-to-speech, and voice activity detection for audio applications
Text Embeddings
Generate semantic embeddings for search, similarity, and recommendations
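Embedding vectors like the ones the text-embedding capability produces are typically compared with cosine similarity. A minimal sketch in plain TypeScript (the helper names and ranking shape are illustrative, not part of the library's API):

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Embeddings must have the same dimensionality");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate embeddings against a query embedding,
// highest similarity first.
function rankBySimilarity(
  query: number[],
  candidates: { id: string; embedding: number[] }[],
): { id: string; score: number }[] {
  return candidates
    .map((c) => ({ id: c.id, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```

Pairing a function like this with on-device embeddings gives you semantic search and recommendations without sending any text to a server.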
Key features
Everything you need to build on-device AI applications
Declarative React API
Use familiar React hooks to integrate AI models without native code expertise
100% On-device
All inference runs locally on iOS and Android devices for privacy and offline use
Ready-made models
Pre-optimized models from HuggingFace ready to use with simple constants
Optimized loading
Automatic model downloading, caching, and progress tracking built-in
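The loading pipeline described above (download a model, cache it, report progress) can be sketched as follows. This is a simplified stand-in, not the library's actual internals: the chunk fetcher is injected, and the cache is in-memory where a real implementation would persist to the device's file system:

```typescript
type ProgressCallback = (fraction: number) => void;

// Stand-in for a chunked network fetcher; a real loader would
// stream model binaries from a URL.
type ChunkFetcher = (url: string) => AsyncIterable<Uint8Array>;

// In-memory cache keyed by URL (illustrative only).
const cache = new Map<string, Uint8Array>();

async function loadModel(
  url: string,
  totalBytes: number,
  fetchChunks: ChunkFetcher,
  onProgress: ProgressCallback,
): Promise<Uint8Array> {
  const cached = cache.get(url);
  if (cached) {
    onProgress(1); // already downloaded: report completion immediately
    return cached;
  }
  const parts: Uint8Array[] = [];
  let received = 0;
  for await (const chunk of fetchChunks(url)) {
    parts.push(chunk);
    received += chunk.length;
    onProgress(Math.min(received / totalBytes, 1));
  }
  // Concatenate chunks into one buffer and cache it.
  const data = new Uint8Array(received);
  let offset = 0;
  for (const p of parts) {
    data.set(p, offset);
    offset += p.length;
  }
  cache.set(url, data);
  return data;
}
```

The progress callback is what lets a hook surface a download percentage to the UI, and the cache check is why a second render (or app launch, with persistent storage) skips the download entirely.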
Resources & community
Get help and connect with other developers
GitHub Repository
View source code, report issues, and contribute to the project
Discord Community
Join our Discord server to ask questions and share your projects
HuggingFace Models
Browse our collection of optimized models ready for on-device use
Private Mind App
See React Native ExecuTorch in action in a production privacy-first AI app
Ready to build with on-device AI?
Start building privacy-first mobile AI applications with React Native ExecuTorch today.
Get Started Now