The @navai/voice-backend package provides the server-side infrastructure for NAVAI voice applications. It handles two critical responsibilities:
  1. Minting secure ephemeral client secrets for OpenAI Realtime API
  2. Discovering, validating, and executing backend functions from your codebase

What it provides

Client secret generation

The package securely mints ephemeral client_secret tokens by proxying requests to OpenAI’s Realtime API. This keeps your OpenAI API key secure on the server while allowing frontend clients to establish WebRTC connections.

Backend function system

The dynamic function loading system automatically discovers callable functions in your codebase and exposes them as tools that the AI agent can invoke. Functions are loaded from configurable directories and validated at runtime.
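For illustration, a backend function module might look like the sketch below. The export convention shown (a named async function plus a `description` export) and the `getWeather` function itself are assumptions for the example, not the package's documented contract:

```typescript
// src/ai/functions-modules/get-weather.ts
// Hypothetical backend function. The export style shown here is an
// assumed convention, not the documented @navai/voice-backend contract.

// The agent sends a JSON payload; the function returns a JSON-serializable result.
export async function getWeather(payload: { city: string }) {
  // A real function would call a weather service here; this returns fixed data.
  return { city: payload.city, forecast: "sunny", temperatureC: 21 };
}

export const description = "Returns a weather forecast for a given city.";
```

Once discovered, a function like this would be listed by GET /navai/functions and become invocable by the agent.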

Express routes

Out-of-the-box Express middleware registers these HTTP endpoints:
  • POST /navai/realtime/client-secret - Generate client secrets
  • GET /navai/functions - List available backend functions
  • POST /navai/functions/execute - Execute a backend function

Architecture

The package has three internal layers:

1. Entry layer (index.ts): exposes the public API, client secret helpers, and Express route registration.
2. Discovery layer (runtime.ts): resolves NAVAI_FUNCTIONS_FOLDERS, scans files, applies path-matching rules, and builds module loaders.
3. Execution layer (functions.ts): imports matched modules, transforms exports into normalized tool definitions, and executes them safely.
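The discovery and execution layers can be approximated with the sketch below. The helper names (`parseFunctionFolders`, `normalizeExports`) and the comma-separated NAVAI_FUNCTIONS_FOLDERS format are illustrative assumptions, not the package's actual internals:

```typescript
// Illustrative sketch of the discovery and execution layers.
// parseFunctionFolders and normalizeExports are hypothetical names;
// the comma-separated NAVAI_FUNCTIONS_FOLDERS format is an assumption.

type ToolHandler = (payload: unknown) => unknown | Promise<unknown>;

interface ToolDefinition {
  name: string;
  handler: ToolHandler;
}

// Discovery: resolve the configured folders from the environment,
// falling back to the documented default location.
export function parseFunctionFolders(raw: string | undefined): string[] {
  if (!raw) return ["src/ai/functions-modules"];
  return raw
    .split(",")
    .map((folder) => folder.trim())
    .filter((folder) => folder.length > 0);
}

// Execution: turn a loaded module's exports into normalized tool
// definitions, keeping only function-valued exports.
export function normalizeExports(
  moduleExports: Record<string, unknown>
): ToolDefinition[] {
  return Object.entries(moduleExports)
    .filter(([, value]) => typeof value === "function")
    .map(([name, value]) => ({ name, handler: value as ToolHandler }));
}
```

Keeping discovery (what files to load) separate from execution (how exports become tools) is what lets the registry reject any function name it never loaded.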

Request flow

Here’s how a typical request flows through the system:
  1. Frontend/mobile calls POST /navai/realtime/client-secret
  2. Backend validates options and API key policy
  3. Backend calls OpenAI POST https://api.openai.com/v1/realtime/client_secrets
  4. Frontend/mobile calls GET /navai/functions to discover allowed tools
  5. Agent calls POST /navai/functions/execute with function_name and payload
  6. Backend executes the function only if its name is present in the loaded registry
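Step 5 of the flow above can be sketched as a small client-side helper. The helper name `buildExecuteRequest` is hypothetical; the request body shape ({ function_name, payload }) follows the flow described here:

```typescript
// Illustrative helper for calling POST /navai/functions/execute.
// buildExecuteRequest is a hypothetical name, not part of the package API.

export function buildExecuteRequest(
  baseUrl: string,
  functionName: string,
  payload: Record<string, unknown>
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/navai/functions/execute`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // The execute endpoint expects function_name and payload in the body.
      body: JSON.stringify({ function_name: functionName, payload }),
    },
  };
}

// Usage (assumes a running NAVAI backend on port 3000 and a loaded
// function named "getWeather"):
// const { url, init } = buildExecuteRequest("http://localhost:3000", "getWeather", { city: "Paris" });
// const result = await fetch(url, init).then((r) => r.json());
```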

Installation

npm install @navai/voice-backend express
express is a peer dependency and must be installed separately.

Quick example

Here’s a minimal Express server with NAVAI backend routes:
import express from "express";
import { registerNavaiExpressRoutes } from "@navai/voice-backend";

const app = express();
app.use(express.json());

registerNavaiExpressRoutes(app, {
  backendOptions: {
    openaiApiKey: process.env.OPENAI_API_KEY,
    defaultModel: "gpt-realtime",
    defaultVoice: "marin",
    clientSecretTtlSeconds: 600
  }
});

app.listen(3000, () => {
  console.log("API running on http://localhost:3000");
});
This registers all three NAVAI endpoints and automatically loads backend functions from src/ai/functions-modules.
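If your functions live somewhere other than the default folder, the discovery layer resolves NAVAI_FUNCTIONS_FOLDERS (see the Architecture section). The comma-separated value format shown below is an assumption, not documented syntax:

```shell
# Hypothetical override: point discovery at custom function folders.
# The comma-separated format is an assumption, not documented syntax.
export NAVAI_FUNCTIONS_FOLDERS="src/ai/functions-modules,src/tools"
```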

Next steps

Express setup

Complete Express integration guide with CORS and environment variables

Client secrets

Learn how client secret generation works and configure TTL

Functions

Create backend functions that the AI agent can invoke

Other frameworks

Use NAVAI with Laravel, Django, Rails, or custom backends
