React Native Integration
The useMobileVoiceAgent hook provides a complete voice navigation solution for React Native applications. It handles WebRTC connections, microphone permissions, tool execution, and session management.
useMobileVoiceAgent Hook
The primary interface for voice navigation in React Native apps.
Type Signature
type UseMobileVoiceAgentOptions = {
  runtime: ResolveNavaiMobileApplicationRuntimeConfigResult | null;
  runtimeLoading: boolean;
  runtimeError: string | null;
  navigate: (path: string) => void;
};

type UseMobileVoiceAgentResult = {
  status: "idle" | "connecting" | "connected" | "error";
  error: string | null;
  isConnecting: boolean;
  isConnected: boolean;
  start: () => Promise<void>;
  stop: () => Promise<void>;
};

function useMobileVoiceAgent(
  options: UseMobileVoiceAgentOptions
): UseMobileVoiceAgentResult;
Parameters
runtime
ResolveNavaiMobileApplicationRuntimeConfigResult | null
required
Runtime configuration containing routes, function loaders, and backend settings. Set to null while loading.
runtimeLoading
boolean
required
Indicates whether the runtime configuration is still loading. Set to true during async initialization.
runtimeError
string | null
required
Error message if runtime configuration failed to load, otherwise null.
navigate
(path: string) => void
required
Navigation function that applies route changes. With React Navigation this is typically (path) => navigation.navigate(path).
Return Values
status
'idle' | 'connecting' | 'connected' | 'error'
Current session state:
idle: No active session
connecting: Session is being established
connected: Active voice session
error: Session failed
error
string | null
Error message if status is error, otherwise null.
isConnecting
boolean
Convenience flag: status === 'connecting'
isConnected
boolean
Convenience flag: status === 'connected'
start
() => Promise<void>
Starts the voice session. Automatically:
Requests microphone permissions (Android)
Loads the WebRTC runtime
Connects to the backend
Establishes the WebRTC connection
Configures the agent with routes and functions
stop
() => Promise<void>
Stops the voice session and cleans up resources.
Complete React Native Example
Here’s a full working example with runtime configuration, navigation setup, and voice controls:
Step 1: Generate Module Loaders
First, generate the module loaders for your functions:
npx navai-generate-mobile-loaders
This creates a src/ai/generated-loaders.ts file with dynamic imports for your function modules.
Step 2: Define Routes
Create your route definitions in src/ai/routes.ts:
import type { NavaiRoute } from '@navai/voice-mobile';

export const APP_ROUTES: NavaiRoute[] = [
  {
    name: 'home',
    path: 'Home',
    description: 'Main home screen',
    synonyms: ['inicio', 'principal'],
  },
  {
    name: 'profile',
    path: 'Profile',
    description: 'User profile settings',
    synonyms: ['perfil', 'settings', 'ajustes'],
  },
  {
    name: 'notifications',
    path: 'Notifications',
    description: 'View notifications',
    synonyms: ['notificaciones', 'alerts'],
  },
];
Step 3: Setup Runtime Configuration
Create a hook to load runtime configuration in src/hooks/useNavaiRuntime.ts:
import { useState, useEffect } from 'react';
import {
  resolveNavaiMobileApplicationRuntimeConfig,
  type ResolveNavaiMobileApplicationRuntimeConfigResult,
} from '@navai/voice-mobile';
import { MODULE_LOADERS } from '../ai/generated-loaders';
import { APP_ROUTES } from '../ai/routes';

type UseNavaiRuntimeResult = {
  runtime: ResolveNavaiMobileApplicationRuntimeConfigResult | null;
  loading: boolean;
  error: string | null;
};

export function useNavaiRuntime(): UseNavaiRuntimeResult {
  const [runtime, setRuntime] =
    useState<ResolveNavaiMobileApplicationRuntimeConfigResult | null>(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    resolveNavaiMobileApplicationRuntimeConfig({
      moduleLoaders: MODULE_LOADERS,
      defaultRoutes: APP_ROUTES,
      env: {
        NAVAI_API_URL: 'http://localhost:3000',
        NAVAI_REALTIME_MODEL: 'gpt-realtime',
      },
    })
      .then((config) => {
        if (!cancelled) {
          setRuntime(config);
          setLoading(false);
          // Log warnings from runtime resolution
          if (config.warnings.length > 0) {
            console.warn('[Navai Runtime]', config.warnings);
          }
        }
      })
      .catch((err) => {
        if (!cancelled) {
          setError(err instanceof Error ? err.message : String(err));
          setLoading(false);
        }
      });

    return () => {
      cancelled = true;
    };
  }, []);

  return { runtime, loading, error };
}
Step 4: Create Voice Control Component
Implement the voice control UI in src/components/VoiceControl.tsx:
import React from 'react';
import { View, TouchableOpacity, Text, StyleSheet, ActivityIndicator } from 'react-native';
import { useMobileVoiceAgent } from '@navai/voice-mobile';
import { useNavigation } from '@react-navigation/native';
import type { NativeStackNavigationProp } from '@react-navigation/native-stack';
import { useNavaiRuntime } from '../hooks/useNavaiRuntime';

type RootStackParamList = {
  Home: undefined;
  Profile: undefined;
  Notifications: undefined;
};

export function VoiceControl() {
  const navigation = useNavigation<NativeStackNavigationProp<RootStackParamList>>();
  const { runtime, loading: runtimeLoading, error: runtimeError } = useNavaiRuntime();

  const {
    status,
    error: sessionError,
    isConnecting,
    isConnected,
    start,
    stop,
  } = useMobileVoiceAgent({
    runtime,
    runtimeLoading,
    runtimeError,
    navigate: (path: string) => {
      // Handle navigation from voice commands
      navigation.navigate(path as keyof RootStackParamList);
    },
  });

  const handlePress = async () => {
    if (isConnected) {
      await stop();
    } else {
      await start();
    }
  };

  const error = sessionError || runtimeError;

  return (
    <View style={styles.container}>
      <TouchableOpacity
        style={[
          styles.button,
          isConnected && styles.buttonActive,
          error && styles.buttonError,
        ]}
        onPress={handlePress}
        disabled={isConnecting || runtimeLoading}
      >
        {(isConnecting || runtimeLoading) ? (
          <ActivityIndicator color="#fff" />
        ) : (
          <Text style={styles.buttonText}>
            {isConnected ? '🎤 Stop Voice' : '🎙️ Start Voice'}
          </Text>
        )}
      </TouchableOpacity>
      <Text style={styles.status}>Status: {status}</Text>
      {error && (
        <View style={styles.errorContainer}>
          <Text style={styles.errorText}>{error}</Text>
        </View>
      )}
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    alignItems: 'center',
    padding: 20,
  },
  button: {
    backgroundColor: '#007AFF',
    paddingHorizontal: 32,
    paddingVertical: 16,
    borderRadius: 12,
    minWidth: 200,
    alignItems: 'center',
  },
  buttonActive: {
    backgroundColor: '#34C759',
  },
  buttonError: {
    backgroundColor: '#FF3B30',
  },
  buttonText: {
    color: '#fff',
    fontSize: 18,
    fontWeight: '600',
  },
  status: {
    marginTop: 12,
    fontSize: 14,
    color: '#8E8E93',
  },
  errorContainer: {
    marginTop: 16,
    padding: 12,
    backgroundColor: '#FFEBEE',
    borderRadius: 8,
    maxWidth: '100%',
  },
  errorText: {
    color: '#C62828',
    fontSize: 14,
  },
});
Step 5: Integrate into Your App
Add the voice control to your main navigation:
import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import { VoiceControl } from './components/VoiceControl';
import { HomeScreen } from './screens/HomeScreen';
import { ProfileScreen } from './screens/ProfileScreen';
import { NotificationsScreen } from './screens/NotificationsScreen';

const Stack = createNativeStackNavigator();

export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen
          name="Home"
          component={HomeScreen}
          options={{
            headerRight: () => <VoiceControl />,
          }}
        />
        <Stack.Screen name="Profile" component={ProfileScreen} />
        <Stack.Screen name="Notifications" component={NotificationsScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
Permission Setup
The hook automatically handles permission requests, but you need to configure your project:
iOS Permissions
Add to ios/YourApp/Info.plist:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for voice navigation</string>
Android Permissions
Add to android/app/src/main/AndroidManifest.xml:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
The hook automatically requests RECORD_AUDIO permission at runtime on Android 6.0+.
Error Handling
The hook provides comprehensive error handling:
import { Alert } from 'react-native';

const { status, error, start } = useMobileVoiceAgent(options);

try {
  await start();
} catch (err) {
  // Errors are also available in the `error` state
  console.error('Failed to start voice session:', err);
}

// Common errors:
if (error?.includes('permission')) {
  // Handle permission denial
  Alert.alert(
    'Microphone Permission Required',
    'Please enable microphone access in settings.',
  );
} else if (error?.includes('WebRTC')) {
  // Handle WebRTC issues
  Alert.alert('Connection Error', 'Failed to establish voice connection.');
}
The hook automatically cleans up resources when the component unmounts. Ensure you don’t manually manage the session lifecycle unless necessary.
Advanced Usage
Custom Backend URL
Configure a custom backend URL by overriding NAVAI_API_URL when resolving the runtime configuration:

const runtime = await resolveNavaiMobileApplicationRuntimeConfig({
  moduleLoaders: MODULE_LOADERS,
  defaultRoutes: APP_ROUTES,
  env: {
    NAVAI_API_URL: 'https://api.yourapp.com',
  },
});
Model Override
Use a specific OpenAI model:
const runtime = await resolveNavaiMobileApplicationRuntimeConfig({
  moduleLoaders: MODULE_LOADERS,
  defaultRoutes: APP_ROUTES,
  env: {
    NAVAI_REALTIME_MODEL: 'gpt-4o-realtime-preview',
  },
});
Custom Navigation Logic
Implement complex navigation patterns:
const { start } = useMobileVoiceAgent({
  runtime,
  runtimeLoading,
  runtimeError,
  navigate: (path: string) => {
    // Custom navigation logic
    if (path.startsWith('/modal/')) {
      navigation.navigate('Modal', { screen: path.replace('/modal/', '') });
    } else if (path.includes('?')) {
      // Parse the query string by hand: URLSearchParams iteration is not
      // fully implemented in all React Native runtimes.
      const [route, query] = path.split('?');
      const params = Object.fromEntries(
        query.split('&').map((pair) => pair.split('=').map(decodeURIComponent))
      );
      navigation.navigate(route, params);
    } else {
      navigation.navigate(path);
    }
  },
});
Next Steps
Expo Setup: Configure Expo projects with development builds
WebRTC Transport: Understand the transport layer
Functions: Create custom voice-activated functions
Backend Routes: Set up backend integration