Voice-First AI Navigation
Build voice-first experiences where an AI agent understands natural speech, navigates your app's UI automatically, and executes frontend or backend functions in real time, all through voice, with no clicks required.
Quick start
Get your voice-enabled application up and running in minutes
Set up your backend
Configure your Express server with NAVAI routes to handle client secrets and function execution.
Set your OPENAI_API_KEY environment variable to enable the Realtime API integration.
Add voice to your React app
Use the useWebVoiceAgent hook to add voice interaction to your frontend.
Define voice-triggered functions
Create functions that can be executed via voice commands. Learn more in the function execution guide.
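The quickstart above ends with defining voice-triggered functions. NAVAI's actual definition format is not shown on this page, so the following is only a sketch of what such a definition typically pairs together: a name, a description the model can match against speech, a JSON-Schema parameter spec, and a handler. Every name here (VoiceFunction, addToCart, add_to_cart) is an illustrative assumption, not NAVAI's API.

```typescript
// Hypothetical sketch of a voice-triggered function definition.
// None of these names come from NAVAI's actual API.
type VoiceFunction = {
  name: string;                 // identifier the agent calls
  description: string;          // matched against the user's spoken intent
  parameters: object;           // JSON Schema describing the arguments
  handler: (args: Record<string, unknown>) => unknown;
};

const addToCart: VoiceFunction = {
  name: "add_to_cart",
  description: "Add a product to the shopping cart by name and quantity",
  parameters: {
    type: "object",
    properties: {
      product: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["product"],
  },
  // The handler runs when the agent decides the spoken request matches.
  handler: ({ product, quantity }) => ({ added: product, quantity: quantity ?? 1 }),
};

// Simulate the agent invoking the function with parsed arguments:
console.log(addToCart.handler({ product: "espresso beans", quantity: 2 }));
// → { added: "espresso beans", quantity: 2 }
```

The description field matters more here than in ordinary code: it is the text the agent uses to decide which function a spoken request maps to.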
Explore by platform
Choose your development platform to get started
Web applications
Build voice-enabled React web apps with automatic UI navigation
Mobile apps
Add voice to React Native and Expo apps with WebRTC transport
Backend services
Set up secure backend routes for client secrets and function execution
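The backend's job, per the cards above, is to hand short-lived client secrets to the browser so the long-lived OpenAI API key never leaves the server. A minimal sketch of that helper follows; the endpoint URL, request body, model name, and response shape are assumptions about the OpenAI Realtime API, and the route path is hypothetical, so verify all of them against the current API reference and NAVAI's docs.

```typescript
// Sketch of a backend helper that mints an ephemeral Realtime client secret.
// The endpoint URL, body, and response shape are assumptions -- check the
// current OpenAI Realtime API reference before relying on them.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json(): Promise<any> }>;

async function mintClientSecret(apiKey: string, fetchImpl: FetchLike) {
  const res = await fetchImpl("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // server-side key never reaches the browser
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-realtime" }), // model name is an assumption
  });
  const session = await res.json();
  // Only the short-lived secret is returned to the client, not the API key.
  return { clientSecret: session.client_secret };
}

// An Express route (path is hypothetical) would simply forward the result:
// app.post("/api/navai/client-secret", async (_req, res) => {
//   res.json(await mintClientSecret(process.env.OPENAI_API_KEY!, fetch));
// });
```

Injecting the fetch implementation keeps the helper testable without real network calls, which is also why the Express wiring is shown separately.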
Core features
Everything you need for voice-first experiences
Real-time voice interaction
Powered by OpenAI Realtime API for natural, low-latency conversations
UI navigation
Let users navigate your app with voice commands — no clicks required
Function execution
Execute frontend and backend functions dynamically via voice
Multi-platform support
Works with React, React Native, Expo, and any backend framework
Secure credentials
Ephemeral client secrets for secure OpenAI Realtime API access
Multilingual
Support multiple languages with customizable accents and tones
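The UI-navigation feature listed above ultimately comes down to exposing a navigation function the agent can call with a route. As a sketch, assuming a router with a push method in the style of common React routers (the names makeNavigateFunction and navigate are hypothetical, not NAVAI's API):

```typescript
// Hypothetical sketch of a voice-driven navigation function. A real setup
// would pass the app router's own push/navigate method; here a minimal fake
// router records where the user was sent.
type Router = { push: (path: string) => void };

function makeNavigateFunction(router: Router, knownRoutes: string[]) {
  return {
    name: "navigate",
    description: "Navigate to a page in the app, e.g. 'go to my orders'",
    handler: ({ path }: { path: string }) => {
      if (!knownRoutes.includes(path)) {
        // Refusing unknown paths keeps the agent from inventing routes.
        return { ok: false, error: `Unknown route: ${path}` };
      }
      router.push(path);
      return { ok: true, navigatedTo: path };
    },
  };
}

// Usage with a fake router:
const visited: string[] = [];
const nav = makeNavigateFunction(
  { push: (p) => { visited.push(p); } },
  ["/", "/orders", "/settings"],
);
console.log(nav.handler({ path: "/orders" })); // ok: true, and visited is ["/orders"]
```

Whitelisting routes matters because the agent chooses arguments from speech; the structured error return gives it something to relay back to the user when a page does not exist.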
Ready to build voice-first?
Get started with the quickstart guide or explore the API reference to integrate NAVAI into your application.
