NAVAI uses OpenAI’s Realtime API to provide low-latency voice interactions with automatic function execution.
What is NAVAI?
NAVAI transforms your web and mobile applications into voice-controlled interfaces. Users can navigate between screens, trigger actions, and interact with your app using natural language commands. Example interactions:
- “Take me to the profile page”
- “Open settings”
- “Log me out”
- “Show my billing information”
Key features
Voice navigation
AI agents automatically navigate to any route in your app based on natural language commands
Function execution
Execute frontend and backend functions through voice commands with automatic tool calling
Real-time interaction
Low-latency voice responses powered by OpenAI’s Realtime API
Multi-platform
Support for web (React), mobile (React Native/Expo), and custom backends
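Voice navigation and function execution both reduce to tool calling: the model emits a structured tool call, and the client runtime maps it onto an app action. A minimal sketch of that dispatch loop, assuming hypothetical tool names (`navigate`, `logout`) that are not taken from the NAVAI API:

```typescript
// Sketch of voice-driven navigation via tool calling.
// Tool names and shapes here are illustrative, not the NAVAI API.
interface ToolCall {
  name: string;
  arguments: Record<string, string>;
}

// The runtime maps each tool call the model emits onto an app action.
function dispatchToolCall(
  call: ToolCall,
  navigate: (path: string) => void,
): string {
  switch (call.name) {
    case "navigate":
      navigate(call.arguments.path); // e.g. "/profile"
      return `Navigated to ${call.arguments.path}`;
    case "logout":
      navigate("/login");
      return "Logged out";
    default:
      return `Unknown tool: ${call.name}`;
  }
}

// Example: "Take me to the profile page" becomes a navigate call.
const visited: string[] = [];
const result = dispatchToolCall(
  { name: "navigate", arguments: { path: "/profile" } },
  (path) => visited.push(path),
);
```

The real runtime receives these calls over the Realtime session rather than constructing them by hand; the dispatch pattern is the same.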
Architecture
NAVAI consists of three core packages that work together.
Backend package
@navai/voice-backend creates ephemeral client secrets and exposes backend tool routes:
- POST /navai/realtime/client-secret - Generate OpenAI Realtime API credentials
- GET /navai/functions - List available backend functions
- POST /navai/functions/execute - Execute backend functions via voice commands
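These three routes form a small contract. A sketch of that contract as a pure request handler, with invented response shapes for illustration (the real package wires its own handlers into Express and defines its own payloads):

```typescript
// Sketch of the NAVAI backend route contract as a pure handler.
// Response bodies are illustrative; the real package defines its own.
type NavaiResponse = { status: number; body: unknown };

const backendFunctions = ["getBillingInfo", "logoutUser"]; // hypothetical

function handleNavaiRequest(method: string, path: string): NavaiResponse {
  if (method === "POST" && path === "/navai/realtime/client-secret") {
    // Real implementation: call OpenAI with the server-side API key and
    // return the short-lived client secret it issues.
    return { status: 200, body: { clientSecret: "ek_(ephemeral)" } };
  }
  if (method === "GET" && path === "/navai/functions") {
    return { status: 200, body: { functions: backendFunctions } };
  }
  if (method === "POST" && path === "/navai/functions/execute") {
    return { status: 200, body: { result: "ok" } };
  }
  return { status: 404, body: { error: "Unknown NAVAI route" } };
}
```

Any backend that answers these three routes with compatible payloads can stand in for the Express package (see “Backend contract integration” below).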
Frontend package
@navai/voice-frontend builds a voice runtime for web applications:
- React hooks for voice agent management
- Navigation integration with React Router
- Local and backend function execution
- Automatic module loading for voice-enabled functions
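“Local and backend function execution” can be pictured as a registry with a fallback: functions registered in the client run in the browser, and anything else is forwarded to the backend’s execute route. A sketch, with hypothetical function names and the backend call represented by a plain callback:

```typescript
// Hypothetical sketch of local-vs-backend function execution.
// Function names and the fallback strategy are illustrative only.
type Executor = (name: string, args?: unknown) => unknown;

// Functions the frontend can run without a network round trip.
const localFunctions: Record<string, (args?: unknown) => unknown> = {
  toggleTheme: () => "theme-toggled", // hypothetical local function
};

// Anything not registered locally is forwarded to the backend's
// POST /navai/functions/execute route (a callback stands in for it here).
function makeExecutor(callBackend: Executor): Executor {
  return (name, args) => {
    const local = localFunctions[name];
    return local ? local(args) : callBackend(name, args);
  };
}

const execute = makeExecutor((name) => `backend:${name}`);
```

This split is why purely client-side actions (theme toggles, route changes) stay fast while data-touching commands still reach your server.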
Mobile package
@navai/voice-mobile provides React Native voice runtime:
- React Native WebRTC transport support
- Navigation integration for mobile apps
- Works with Expo and bare React Native projects
- Android and iOS support
All three packages use OpenAI’s Realtime API. The backend package secures your API key and issues temporary credentials to client applications.
Platform support
Direct support
- Web: React with Vite/SPA
- Mobile: React Native, Expo
- Backend: Node.js 20+ with Express
Compatible via adapter
These frameworks can integrate NAVAI with some additional configuration:
- Next.js, Remix, Astro
- Vue, Nuxt
- Angular
- Svelte, SvelteKit
- Ionic (React/Vue/Angular)
Backend contract integration
You can implement the NAVAI HTTP routes in any backend framework:
- Laravel, CodeIgniter, Symfony
- Django, FastAPI, Flask
- Rails
- Spring Boot
- ASP.NET Core
Device support
| Platform | Requirements |
|---|---|
| Web | Modern desktop/mobile browsers with microphone + WebRTC capabilities |
| Android | Device or emulator (use http://10.0.2.2:3000 for emulator backend) |
| iOS | Simulator or device (requires development build for microphone access) |
| Backend | Node.js 20+ or any backend that implements the NAVAI route contract |
How it works
Backend generates credentials
Your backend securely requests ephemeral credentials from OpenAI’s Realtime API using your API key.
Client connects to voice session
Your frontend/mobile app receives the temporary credentials and establishes a WebRTC connection to OpenAI.
User speaks commands
The AI agent processes natural language input and determines the appropriate action (navigation or function execution).
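The credential-minting step can be sketched as building a single authenticated request from the backend to OpenAI. The endpoint and payload shape below follow OpenAI’s Realtime ephemeral-token flow but should be treated as an assumption; check the current Realtime API documentation for the exact contract:

```typescript
// Sketch of step 1: the backend mints an ephemeral client credential.
// Endpoint path and body shape are assumptions; verify against OpenAI's
// current Realtime API docs before relying on them.
function buildClientSecretRequest(apiKey: string, model: string) {
  return {
    url: "https://api.openai.com/v1/realtime/client_secrets",
    method: "POST" as const,
    headers: {
      // Server-side API key: this request is made by your backend only
      // and the key is never shipped to client applications.
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ session: { type: "realtime", model } }),
  };
}
```

The short-lived secret in the response is what the frontend or mobile app uses to open its WebRTC session, so the long-lived API key never leaves the server.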
Next steps
Quickstart
Get started with a complete working example in 5 minutes
Backend setup
Configure your Express backend with NAVAI routes
Frontend setup
Integrate the voice agent into your React application
Mobile setup
Add voice navigation to React Native apps
