The SvaraAI frontend is a React single-page application built with TypeScript and Vite. It provides a real-time voice conversation interface with emotional intelligence powered by Hume AI.

Project structure

Frontend/
├── src/
│   ├── components/       # Reusable UI components
│   │   ├── chat.tsx     # Main voice chat interface
│   │   ├── message.tsx  # Message display with emotions
│   │   ├── controls.tsx # Call controls (mute, end call)
│   │   ├── startCall.tsx # Connection initiation
│   │   ├── expressions.tsx # Emotion visualization
│   │   ├── micFFT.tsx   # Microphone FFT visualization
│   │   └── navbar.tsx   # Navigation component
│   ├── pages/           # Route components
│   │   ├── index.tsx    # Landing/hero page
│   │   ├── playground.tsx # Voice conversation page
│   │   └── insights.tsx # Post-conversation insights
│   ├── ui/              # Base UI components
│   ├── utils/           # Utility functions
│   └── App.tsx          # Main app with routing
├── package.json
└── vite.config.ts

Core dependencies

The frontend relies on these key packages:
{
  "dependencies": {
    "react": "^19.1.0",
    "react-router-dom": "^7.6.2",
    "@humeai/voice-react": "^0.2.11",
    "framer-motion": "^12.18.1",
    "tailwindcss": "^4.1.10",
    "lucide-react": "^0.522.0"
  }
}

Routing structure

SvaraAI uses React Router for client-side navigation:
import { BrowserRouter as Router, Routes, Route, useLocation } from "react-router-dom";
import Navbar from "./components/navbar";
import Hero from "./pages";
import Playground from "./pages/playground";
import Insights from "./pages/insights";

function AppContent() {
  const location = useLocation();
  // Hide navbar on playground and insights pages
  const hideNavbar = ["/playground", "/insights"].includes(location.pathname);

  return (
    <>
      {!hideNavbar && <Navbar />}
      <div className="min-h-screen bg-gradient-to-b from-[#efb1ae] via-[#FED5C7] to-[#FFE4C6]">
        <Routes>
          <Route path="/" element={<Hero />} />
          <Route path="/playground" element={<Playground />} />
          <Route path="/insights" element={<Insights />} />
        </Routes>
      </div>
    </>
  );
}
The navbar is conditionally hidden on /playground and /insights routes to provide an immersive, distraction-free experience during voice conversations.

Voice chat implementation

The voice interface is powered by Hume AI’s Voice SDK through the VoiceProvider context.

Chat interface component

The main chat interface (components/chat.tsx) wraps the entire conversation UI:
import { VoiceProvider } from "@humeai/voice-react";
import Messages from "./message";
import Controls from "./controls";
import StartCall from "./startCall";

export default function ChatInterface() {
  const apiKey = import.meta.env.VITE_HUME_API_KEY || "";
  const configId = import.meta.env.VITE_HUME_CONFIG_ID || "";

  return (
    <div className="relative grow flex flex-col mx-auto w-full overflow-hidden h-screen">
      <VoiceProvider>
        <Messages />
        <Controls />
        <StartCall apiKey={apiKey} configId={configId} />
      </VoiceProvider>
    </div>
  );
}
You must set VITE_HUME_API_KEY and VITE_HUME_CONFIG_ID environment variables for the voice interface to work. See the quickstart guide for details.
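For local development, these variables would typically live in a `.env.local` file at the Frontend root (the values below are placeholders, not real credentials):

```bash
VITE_HUME_API_KEY=your-hume-api-key
VITE_HUME_CONFIG_ID=your-hume-config-id
```

Vite exposes only variables prefixed with `VITE_` to client code via `import.meta.env`.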

Starting a conversation

The startCall.tsx component handles connection to Hume AI:
import { useState } from "react";
import { useVoice, type ConnectOptions } from "@humeai/voice-react";

interface StartCallProps {
  apiKey: string;
  configId: string;
}

export default function StartCall({ apiKey, configId }: StartCallProps) {
  const { status, connect } = useVoice();
  const [isConnecting, setIsConnecting] = useState(false);

  const handleConnect = async () => {
    setIsConnecting(true);
    const connectOptions: ConnectOptions = {
      auth: { type: "apiKey", value: apiKey },
      configId: configId,
    };

    try {
      await connect(connectOptions);
    } catch {
      alert("Unable to connect. Please check microphone permissions.");
    } finally {
      setIsConnecting(false);
    }
  };

  // Shows connection button when not connected
  return status.value !== "connected" ? (
    <CallButton onClick={handleConnect} disabled={isConnecting} />
  ) : null;
}

Message display

Messages are rendered with emotion indicators (message.tsx):
import { useVoice } from "@humeai/voice-react";
import Expressions from "./expressions";

export default function Messages() {
  const { messages } = useVoice();

  return (
    <div className="overflow-auto p-6">
      {messages.map((msg, index) => {
        if (msg.type === "user_message" || msg.type === "assistant_message") {
          const isUser = msg.type === "user_message";
          
          return (
            <div key={index} className={isUser ? "ml-auto" : "mr-auto"}>
              <div className="text-sm">{msg.message.content}</div>
              
              {/* Show emotion scores if available */}
              {msg.models.prosody?.scores && (
                <Expressions values={msg.models.prosody.scores} />
              )}
            </div>
          );
        }
        return null;
      })}
    </div>
  );
}
Emotion scores are provided by Hume AI’s prosody model and are available in real-time as the conversation progresses. Each message includes a models.prosody.scores object with emotion names and their confidence values.
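Since the scores object is a flat map from emotion name to confidence, picking out the strongest emotions for display reduces to a small helper. The following is a sketch (the emotion names shown are illustrative; the actual names come from Hume's prosody model):

```typescript
// Prosody scores are a flat map of emotion name -> confidence value.
type ProsodyScores = Record<string, number>;

// Return the top `n` emotions, strongest first.
function topEmotions(scores: ProsodyScores, n: number): [string, number][] {
  return Object.entries(scores)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n);
}

const sample: ProsodyScores = { Joy: 0.62, Calmness: 0.41, Interest: 0.55 };
topEmotions(sample, 2); // → [["Joy", 0.62], ["Interest", 0.55]]
```

The same sort-and-slice pattern appears again on the insights page when rendering emotion bars.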

Controls and session management

The controls.tsx component manages the conversation lifecycle:
const { isMuted, unmute, mute } = useVoice();

<Toggle
  pressed={!isMuted}
  onPressedChange={() => (isMuted ? unmute?.() : mute?.())}
>
  {isMuted ? <MicOff /> : <Mic />}
</Toggle>
const { micFft } = useVoice();
const safeMicFft = Array.isArray(micFft) ? micFft : [];

<MicFFT fft={safeMicFft} className="fill-current" />
The FFT (Fast Fourier Transform) data provides real-time frequency analysis of the microphone input for visualization.
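For illustration, mapping the FFT array to bar heights can be expressed as a pure function (a sketch, assuming magnitudes are normalized to the 0–1 range; the real MicFFT component renders SVG):

```typescript
// Map raw FFT magnitudes to pixel heights, clamped to [minPx, maxPx].
// A non-zero minimum keeps idle bars visible.
function fftToBarHeights(
  fft: number[],
  maxPx: number,
  minPx: number = 2
): number[] {
  return fft.map((v) => {
    const clamped = Math.min(Math.max(v, 0), 1); // magnitudes assumed 0..1
    return Math.max(minPx, clamped * maxPx);
  });
}

fftToBarHeights([0, 0.5, 1.2], 100); // → [2, 50, 100]
```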
When the user ends the call, the app:
  1. Filters valid messages (user and assistant only)
  2. Builds a transcript from message content
  3. Aggregates emotion scores across all user messages
  4. Sends data to the backend Gemini endpoint
  5. Stores the result in sessionStorage
  6. Navigates to the insights page
const handleEndCall = async () => {
  const validMessages = messages.filter(
    (msg) => msg.type === "user_message" || msg.type === "assistant_message"
  );

  // Build transcript
  const transcript = validMessages
    .map((msg) => {
      const role = msg.type === "user_message" ? "User" : "Assistant";
      return `${role}: ${msg.message.content}`;
    })
    .join("\n");

  // Aggregate emotions from user messages
  const emotions: Record<string, number> = {};
  const userMessages = validMessages.filter(msg => msg.type === "user_message");
  
  userMessages.forEach((msg) => {
    if (msg.models?.prosody?.scores) {
      Object.entries(msg.models.prosody.scores).forEach(([emotion, score]) => {
        emotions[emotion] = (emotions[emotion] || 0) + score;
      });
    }
  });

  // Average the scores (guard against division by zero when there are no user messages)
  if (userMessages.length > 0) {
    Object.keys(emotions).forEach((key) => {
      emotions[key] = emotions[key] / userMessages.length;
    });
  }

  // Send to backend for Gemini analysis
  const res = await fetch("http://localhost:5000/api/gemini", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ transcript, emoData: emotions }),
  });

  const data = await res.json();

  // Store in sessionStorage
  sessionStorage.setItem(
    "svaraInsights",
    JSON.stringify({
      transcript,
      emotions,
      analysis: data.response,
      timestamp: Date.now(),
    })
  );

  disconnect?.();
  navigate("/insights");
};

Insights page

The insights page (pages/insights.tsx) displays post-conversation analysis:
Step 1: Load data from sessionStorage

const [data, setData] = useState<InsightData | null>(null);

useEffect(() => {
  const stored = sessionStorage.getItem("svaraInsights");
  if (stored) {
    setData(JSON.parse(stored));
  }
}, []);
Step 2: Display AI-generated analysis

The Gemini-generated emotional analysis is shown in a prominent card at the top of the page.
Step 3: Visualize emotion scores

const topEmotions = Object.entries(data.emotions)
  .sort(([, a], [, b]) => b - a)
  .slice(0, 5);

// Render as animated progress bars
topEmotions.map(([emotion, score]) => (
  <div key={emotion} className="h-3 bg-gray-200 rounded-full">
    <motion.div
      initial={{ width: 0 }}
      animate={{ width: `${score * 100}%` }}
      className="h-full bg-gradient-to-r from-[#E07155] to-[#E39682]"
    />
  </div>
))
Step 4: Show conversation transcript

The full conversation is displayed with user and assistant messages differentiated by styling.
Insights data is stored in sessionStorage, which means it persists across page refreshes but is cleared when the browser tab is closed. For permanent storage, you would need to save to the backend.
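If you did want permanent storage, one approach is to POST the same insights object to a backend route. The sketch below assumes a hypothetical `/api/sessions` endpoint, which is not part of SvaraAI's current backend:

```typescript
// Hypothetical payload for persisting a session to the backend.
interface SessionPayload {
  transcript: string;
  emotions: Record<string, number>;
  analysis: string;
  timestamp: number;
}

function buildSessionPayload(
  transcript: string,
  emotions: Record<string, number>,
  analysis: string
): SessionPayload {
  return { transcript, emotions, analysis, timestamp: Date.now() };
}

// Illustrative only — /api/sessions does not exist in the current backend:
// await fetch("http://localhost:5000/api/sessions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildSessionPayload(transcript, emotions, analysis)),
// });
```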

Animation and UX

SvaraAI uses Framer Motion for smooth, polished animations:
  • Message entrance: Messages fade in and slide up when they appear
  • Connection overlay: The start call overlay fades in/out with scale animation
  • Emotion bars: Progress bars animate from 0 to their final value
  • Auto-scroll: Messages container automatically scrolls to show new messages
<motion.div
  initial={{ opacity: 0, y: 30, scale: 0.92 }}
  animate={{ opacity: 1, y: 0, scale: 1 }}
  exit={{ opacity: 0, y: -20, scale: 0.95 }}
  transition={{ type: "spring", stiffness: 200, damping: 25 }}
>
  {/* Message content */}
</motion.div>
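The auto-scroll behavior listed above can be reduced to a small predicate (a sketch; `shouldAutoScroll` and the threshold are illustrative names, not SvaraAI's actual implementation): only snap to the bottom when the user is already near it, so manual scrollback is not interrupted by incoming messages.

```typescript
// Decide whether the messages container should auto-scroll to the bottom.
// Only do so when the user is within `threshold` px of the bottom.
function shouldAutoScroll(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold: number = 80
): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= threshold;
}

// In a useEffect keyed on messages.length:
// if (shouldAutoScroll(el.scrollTop, el.clientHeight, el.scrollHeight)) {
//   el.scrollTo({ top: el.scrollHeight, behavior: "smooth" });
// }

shouldAutoScroll(900, 600, 1500); // → true (at the bottom)
shouldAutoScroll(0, 600, 1500);   // → false (user scrolled up)
```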

State management

SvaraAI uses a simple state management approach:
  • Hume Voice SDK: Manages WebSocket connection and voice conversation state via React Context
  • React Router: Handles navigation state
  • sessionStorage: Persists insights data across page navigation
  • Component state: Local useState hooks for UI-specific state (loading, errors, etc.)
No additional state management library is needed due to the straightforward data flow.

Next steps

Backend architecture

Learn about the API routes and AI service integration

Integrations

Learn how to configure Hume AI and Gemini integrations
