## Quick Start

### Open the Web UI

Navigate to gitnexus.vercel.app in your browser.
### Load a Repository

Choose one of three methods:

- Upload ZIP — drag and drop a `.zip` of your codebase
- Clone from GitHub — enter a GitHub repository URL
- Connect to local backend — use `gitnexus serve` for CLI-indexed repos
### Wait for Indexing

The pipeline will:

- Extract/clone files
- Parse code with Tree-sitter WASM
- Build a knowledge graph in KuzuDB WASM
- Detect communities and processes
- Generate embeddings (optional, runs in the background)
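The ordering above can be sketched as a simple stage plan. The stage names below are illustrative labels for this sketch, not GitNexus's actual API:

```typescript
// Illustrative stage plan for the indexing pipeline described above.
// Stage names are labels for this sketch, not GitNexus's actual API.
type Stage = "extract" | "parse" | "build-graph" | "detect-communities" | "embed";

// Extraction, parsing, graph building, and community detection run in
// order; embedding generation is optional and deferred to the background.
const foreground: Stage[] = ["extract", "parse", "build-graph", "detect-communities"];

function planStages(withEmbeddings: boolean): Stage[] {
  return withEmbeddings ? [...foreground, "embed"] : [...foreground];
}
```

Because embeddings are last and optional, the graph is usable as soon as the foreground stages complete.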
## Loading Methods

### Upload ZIP File

- Prepare your codebase as a `.zip` archive
- Drag and drop the ZIP onto the Web UI
- Wait for extraction and indexing (~30 seconds for 500 files)

The ZIP file is processed entirely in your browser. Nothing is uploaded to a server.
### Clone from GitHub

- Click “Clone from GitHub” on the landing page
- Enter a repository URL: `https://github.com/owner/repo`
- The Web UI will:
  - Download the repo as a ZIP from GitHub’s API
  - Extract and index it in-browser
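GitHub serves repository snapshots as ZIP archives from its codeload endpoint. A minimal sketch of mapping a repository URL to such an archive URL (whether GitNexus uses this exact endpoint internally is an assumption):

```typescript
// Convert https://github.com/owner/repo into GitHub's ZIP archive URL.
// codeload.github.com is GitHub's public archive host; whether GitNexus
// hits this exact endpoint internally is an assumption.
function archiveUrl(repoUrl: string, ref = "HEAD"): string {
  const match = repoUrl.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+?)(?:\.git)?\/?$/);
  if (!match) throw new Error(`Not a GitHub repository URL: ${repoUrl}`);
  const [, owner, repo] = match;
  return `https://codeload.github.com/${owner}/${repo}/zip/${ref}`;
}
```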
### Connect to Local Backend

See Local Backend Mode for connecting to CLI-indexed repositories.

## Graph Explorer Interface
### Main Canvas

The graph visualization shows your codebase as an interactive force-directed graph:

- Nodes represent files, functions, classes, methods, and communities
- Edges represent relationships (calls, imports, extends, implements)
- Colors indicate community membership (functional areas)
- Click a node to see details and references
- Drag to rearrange the layout
- Scroll to zoom in/out
- Pan by dragging the background
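A force-directed layout works by letting connected nodes pull together while all nodes push apart. A minimal sketch of one simulation step, showing the generic algorithm rather than GitNexus's actual renderer:

```typescript
// One illustrative iteration of a force-directed layout: connected nodes
// attract (spring force), all node pairs repel (inverse-square force).
// Generic algorithm sketch, not GitNexus's actual renderer.
interface GraphNode { x: number; y: number }

function step(nodes: GraphNode[], edges: [number, number][], k = 0.01, rep = 100): void {
  const fx = nodes.map(() => 0);
  const fy = nodes.map(() => 0);
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x;
      const dy = nodes[i].y - nodes[j].y;
      const d2 = dx * dx + dy * dy || 1e-6; // avoid division by zero
      const f = rep / d2;                   // inverse-square repulsion
      fx[i] += f * dx; fy[i] += f * dy;
      fx[j] -= f * dx; fy[j] -= f * dy;
    }
  }
  for (const [a, b] of edges) {
    const dx = nodes[b].x - nodes[a].x;
    const dy = nodes[b].y - nodes[a].y;
    fx[a] += k * dx; fy[a] += k * dy;       // spring pulls endpoints together
    fx[b] -= k * dx; fy[b] -= k * dy;
  }
  nodes.forEach((n, i) => { n.x += fx[i]; n.y += fy[i]; });
}
```

Running `step` repeatedly settles the graph into the clustered layout you see on the canvas: tightly connected symbols drift together, unrelated ones spread out.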
### File Tree Panel

The left sidebar shows the repository folder structure:

- Browse files and folders
- Click any file to see its symbols in the graph
- Search files by name
### Code References Panel

When you click a node, the code references panel appears:

- Symbol details — name, type, file path, line numbers
- Incoming references — what calls/imports this symbol
- Outgoing references — what this symbol calls/imports
- Process participation — execution flows this symbol is part of
- Jump to definition — view the source code
### Processes Panel

The processes panel (top toolbar) shows detected execution flows:

- Click any process to see its step-by-step trace
- Visualize call chains from entry points to terminal nodes
- Understand cross-module flows
## AI Chat Features

API key required: configure your LLM provider in Settings (gear icon) before using chat.

### Available Tools

- `query_code` — Semantic search across the codebase
- `run_cypher` — Execute graph queries
- `read_file` — View file contents
- `get_symbol_references` — Get incoming/outgoing refs
- `list_processes` — List execution flows

### Example Queries
## Configuring LLM Provider

### Choose Provider

Select from:

- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude)
- Custom (OpenAI-compatible API)
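“OpenAI-compatible” means the endpoint accepts the standard chat-completions request shape, so local servers that implement it can be used too. A sketch of building such a request; the base URL, model name, and key here are placeholders, not defaults shipped with GitNexus:

```typescript
// Build a chat-completions request for any OpenAI-compatible endpoint.
// baseUrl, model, and apiKey are placeholders you would set in Settings.
interface ChatMessage { role: "system" | "user" | "assistant"; content: string }

function chatRequest(baseUrl: string, model: string, messages: ChatMessage[], apiKey?: string) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // OpenAI-style bearer auth; omitted for keyless local servers
        ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}
```

Any provider that answers this request shape at `<base URL>/chat/completions` should work with the Custom option.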
## Search

Use the search bar in the header to find code:

- Text search — works immediately (BM25 full-text search)
- Semantic search — available after embeddings finish (shown in the status bar)
  - Combines BM25 and vector similarity
  - Uses Reciprocal Rank Fusion (RRF) for ranking
  - Groups results by execution flows and communities
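Reciprocal Rank Fusion scores each document by summing 1/(K + rank) over every ranked list it appears in, which is how the BM25 and vector result lists are merged. A minimal sketch of the standard formula; K = 60 is the conventional default, and whether GitNexus uses that constant is an assumption:

```typescript
// Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (K + rank(d)).
// K (commonly 60) damps the influence of top-ranked positions.
function rrf(rankings: string[][], K = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, idx) => {
      const rank = idx + 1; // ranks are 1-based
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (K + rank));
    });
  }
  // Highest fused score first
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([doc]) => doc);
}
```

A document that ranks well in both the text and semantic lists beats one that tops only a single list, which is exactly the behavior you want from hybrid search.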
## Embedding Status

The status bar (bottom) shows embedding progress:

- “Generating embeddings…” — transformers.js is running in the background
- “Embeddings ready” — semantic search is now available
- “WebGPU accelerated” — using your GPU for faster embeddings
- “WASM fallback” — CPU-only mode (slower)
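In the browser, WebGPU availability is exposed via `navigator.gpu`. A sketch of the fallback decision the status messages describe; the actual detection happens inside transformers.js:

```typescript
// Choose an embedding backend the way the status bar describes:
// WebGPU when the browser exposes navigator.gpu, otherwise CPU-side WASM.
// Illustrative only; transformers.js performs its own detection.
type Backend = "webgpu" | "wasm";

function pickBackend(nav: { gpu?: unknown }): Backend {
  return nav.gpu !== undefined ? "webgpu" : "wasm";
}
```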
## Running Locally

You can also run the Web UI from source; once the dev server is running, open http://localhost:5173.
## Next Steps

- Local Backend Mode — connect to CLI-indexed repos for unlimited scale
- CLI Overview — install the CLI for persistent indexing and MCP