Viber uses the ElevenLabs Conversational AI platform to enable voice-first interaction. The voice agent manages the conversation loop, interprets user intent, and triggers actions through client-side tool calls.

Voice agent setup

The voice agent is implemented using the @elevenlabs/react SDK:
import { useConversation } from "@elevenlabs/react";

const conversation = useConversation({
  clientTools: {
    vibe_build: async ({ prompt, action }) => { /* ... */ },
    navigate_ui: ({ panel }) => { /* ... */ },
  },
  onConnect: () => onStatusChange("connected"),
  onDisconnect: () => onStatusChange("disconnected"),
  onMessage: (message) => { /* ... */ },
  onError: (error) => { /* ... */ },
});

Connection lifecycle

The voice agent follows a standard WebSocket lifecycle:
1. Start session

await conversation.startSession({
  agentId: AGENT_ID,
  connectionType: "websocket",
});

2. Connected

The agent is ready to receive voice input and can trigger tool calls.

3. End session

conversation.endSession();

Client-side tools

The voice agent uses two primary tools to control the application:

vibe_build tool

Triggers code generation or editing:
src/components/builder/voice/voice-agent.tsx
vibe_build: async ({ prompt, action }: VibeBuildParams) => {
  console.log("[VoiceAgent] vibe_build called");
  console.log("[VoiceAgent] prompt:", prompt);
  console.log("[VoiceAgent] action:", action);

  if (!prompt) {
    return "Error: Missing prompt parameter";
  }

  // For edits, require both action="edit" AND sandbox to be ready
  const isEdit = action === "edit";

  if (isEdit && !isReadyRef.current) {
    return "Error: Sandbox is not ready yet. Please wait for the workspace to finish setting up.";
  }

  if (isEdit && !sandboxIdRef.current) {
    return "Error: No sandbox available. Please create a project first.";
  }

  try {
    await onGenerateRef.current({
      prompt,
      isEdit,
      sandboxId: sandboxIdRef.current,
    });

    if (isEdit) {
      return "Starting to make those changes now.";
    } else {
      return "Generation started successfully. I will provide updates as files are created.";
    }
  } catch (error) {
    return `Error: ${error instanceof Error ? error.message : "Unknown error"}`;
  }
}
Parameter   Type                 Description
prompt      string               User's description of what to build or change
action      "create" | "edit"    Whether to create new code or edit existing code
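The guard clauses in `vibe_build` can be factored into a pure function, which makes the validation order explicit and easy to unit-test. The sketch below mirrors the error strings above; `checkVibeBuild` is an illustrative name, not part of the actual codebase.

```typescript
interface VibeBuildParams {
  prompt: string;
  action?: "create" | "edit";
}

// Returns an error string to speak back to the user, or null if the
// request may proceed. Mirrors the checks inside the vibe_build tool.
function checkVibeBuild(
  params: VibeBuildParams,
  state: { isReady: boolean; sandboxId: string | null },
): string | null {
  if (!params.prompt) return "Error: Missing prompt parameter";
  const isEdit = params.action === "edit";
  if (isEdit && !state.isReady)
    return "Error: Sandbox is not ready yet. Please wait for the workspace to finish setting up.";
  if (isEdit && !state.sandboxId)
    return "Error: No sandbox available. Please create a project first.";
  return null;
}
```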
navigate_ui tool

Changes the active panel in the UI:
src/components/builder/voice/voice-agent.tsx
navigate_ui: ({ panel }: NavigateUiParams) => {
  console.log("[VoiceAgent] navigate_ui called", { panel });

  const result = onNavigate(panel);
  console.log("[VoiceAgent] navigate_ui result:", result);

  return result.message;
}
Parameter   Type                            Description
panel       "preview" | "code" | "files"    Target panel to navigate to
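A plausible shape for the `onNavigate` callback the tool delegates to, sketched as a factory over a panel setter. This is an assumption for illustration; the real handler lives in the host component.

```typescript
type Panel = "preview" | "code" | "files";

const PANELS: readonly Panel[] = ["preview", "code", "files"];

// Builds a navigate handler that validates the panel name before
// switching, and returns a message the agent can speak back.
function makeNavigate(setActivePanel: (p: Panel) => void) {
  // Tool arguments arrive as untyped strings from the agent, so validate first.
  return (panel: string): { success: boolean; message: string } => {
    if (!(PANELS as readonly string[]).includes(panel)) {
      return { success: false, message: `Unknown panel: ${panel}` };
    }
    setActivePanel(panel as Panel);
    return { success: true, message: `Switched to the ${panel} panel.` };
  };
}
```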

System updates

Viber sends system updates to the voice agent as code is being generated:
function sendSystemUpdate(message: string) {
  if (conversation.status === "connected") {
    // Prefix with [UPDATE] so the agent knows to repeat it verbatim
    conversation.sendUserMessage(`[UPDATE] ${message}`);
  }
}

// Usage during code generation
sendSystemUpdate("Created Hero.tsx");
sendSystemUpdate("Created Features.tsx");
sendSystemUpdate("Generation complete");
The [UPDATE] prefix is recognized by the agent’s prompt and causes it to repeat the message verbatim to the user, providing real-time narration of progress.
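The guard in `sendSystemUpdate` can be exercised against a minimal stand-in for the conversation object. `ConvLike` and `makeSystemUpdater` below are illustrative names, not SDK APIs.

```typescript
// Stand-in for the SDK conversation object (assumption for illustration).
interface ConvLike {
  status: "connected" | "disconnected";
  sendUserMessage(text: string): void;
}

// Same guard as sendSystemUpdate above: prefix with [UPDATE] and
// silently drop updates unless a session is connected.
function makeSystemUpdater(conversation: ConvLike) {
  return (message: string): void => {
    if (conversation.status === "connected") {
      conversation.sendUserMessage(`[UPDATE] ${message}`);
    }
  };
}
```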

Message handling

The voice agent receives messages from both sides of the conversation but forwards only the assistant's to the transcript:
src/components/builder/voice/voice-agent.tsx
onMessage: (message) => {
  console.log("[VoiceAgent] Message:", message);
  if (message.message && message.source !== "user") {
    onMessage({
      role: "assistant",
      content: message.message,
      timestamp: new Date(),
    });
  }
}
Messages are displayed in the chat transcript, providing a written record of the conversation.
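The filter inside `onMessage` can be factored out as a pure predicate. This is a sketch; `isAssistantTranscript` is an illustrative name, not one from the codebase.

```typescript
// Minimal shape of the incoming message (assumption for illustration).
interface AgentMessage {
  message?: string;
  source?: string;
}

// Keep only non-empty messages that did not originate from the user.
function isAssistantTranscript(m: AgentMessage): boolean {
  return Boolean(m.message) && m.source !== "user";
}
```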

Audio volume monitoring

The voice agent provides real-time audio levels:
const inputVolume = conversation.getInputVolume?.() ?? 0;
const outputVolume = conversation.getOutputVolume?.() ?? 0;
These values can be used to create visual feedback (e.g., microphone icon animation) while the user is speaking.
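One possible mapping from a 0–1 volume to a CSS scale factor for a mic icon. The mapping itself is an assumption for illustration, not something the SDK provides.

```typescript
// Clamp the volume to [0, 1] and interpolate linearly between a base
// scale and a maximum scale for the animated icon.
function volumeToScale(volume: number, min = 1, max = 1.4): number {
  const clamped = Math.min(1, Math.max(0, volume));
  return min + (max - min) * clamped;
}
```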

Error handling

onError: (error) => {
  console.error("[VoiceAgent] Error:", error);
  onStatusChange("disconnected");
}

Global voice agent instance

Viber maintains a global reference to the voice agent for use throughout the application:
src/components/builder/voice/voice-agent.tsx
let globalVoiceAgent: {
  sendSystemUpdate: (message: string) => void;
  startSession: () => Promise<void>;
  endSession: () => void;
  getInputVolume: () => number;
  getOutputVolume: () => number;
} | null = null;

export function useVoiceAgentControls() {
  return {
    sendSystemUpdate: (message: string) => {
      globalVoiceAgent?.sendSystemUpdate(message);
    },
    startSession: async () => {
      await globalVoiceAgent?.startSession();
    },
    endSession: () => {
      globalVoiceAgent?.endSession();
    },
    getInputVolume: () => globalVoiceAgent?.getInputVolume() ?? 0,
    getOutputVolume: () => globalVoiceAgent?.getOutputVolume() ?? 0,
  };
}
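How the global reference gets populated is not shown above; a typical pattern is a register function whose cleanup is returned from the component's mount effect. This is a sketch of that pattern, not the actual wiring.

```typescript
type VoiceAgentHandle = {
  sendSystemUpdate: (message: string) => void;
};

let agent: VoiceAgentHandle | null = null;

// Register the live instance; the returned function is suitable as a
// useEffect teardown and only clears its own registration.
function registerVoiceAgent(handle: VoiceAgentHandle): () => void {
  agent = handle;
  return () => {
    if (agent === handle) agent = null;
  };
}

// Module-level dispatcher: a no-op when nothing is registered.
function sendSystemUpdate(message: string): void {
  agent?.sendSystemUpdate(message);
}
```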

Best practices

Since the tool functions are defined once during component initialization, use refs to access the latest values:
const sandboxIdRef = useRef(sandboxId);
const isReadyRef = useRef(isReady);

useEffect(() => {
  sandboxIdRef.current = sandboxId;
  isReadyRef.current = isReady;
}, [sandboxId, isReady]);
Tool functions should return user-friendly error messages that the agent can speak:
if (!prompt) {
  return "Error: Missing prompt parameter";
}
Console logs help debug tool calling issues:
console.log("[VoiceAgent] vibe_build called");
console.log("[VoiceAgent] prompt:", prompt);
console.log("[VoiceAgent] action:", action);

Next steps

Code agent

Learn how code generation works with Gemini

Sandbox

Explore how generated code is executed
