The `wiki` command generates comprehensive documentation for a repository by analyzing the knowledge graph with an LLM.
## Usage

## Arguments

- Path to the repository to document. Defaults to the current directory's git root.
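A plausible synopsis, inferred from the arguments and options documented below (the bracketed items are optional; the exact form is an assumption, not taken from the tool's help output):

```shell
gitnexus wiki [path] [options]
```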
## Options

### `-f, --force`
Force full regeneration even if the wiki is up to date.

### `--model <model>`
LLM model name. Supports OpenAI, OpenRouter, or any OpenAI-compatible API.

Examples:
- `gpt-4o-mini` (OpenAI)
- `minimax/minimax-m2.5` (OpenRouter)
- `llama3` (local, via Ollama)

### `--base-url <url>`
LLM API base URL.

Providers:
- OpenAI: `https://api.openai.com/v1`
- OpenRouter: `https://openrouter.ai/api/v1`
- Ollama: `http://localhost:11434/v1`

### `--api-key <key>`
LLM API key. Saved to `~/.gitnexus/config.json` for future use.

Alternatively, set an environment variable:
- `GITNEXUS_API_KEY`
- `OPENAI_API_KEY`

### `--concurrency <n>`
Number of parallel LLM calls.

### `--gist`
Publish the generated wiki as a public GitHub Gist for easy viewing and sharing.

Requires: GitHub CLI (`gh`) installed and authenticated.
## Examples

### Basic Usage
Generate a wiki for the current repository.

### Custom Model

### Local LLM (Ollama)

### Force Regeneration

### Publish as Gist

### Save API Key
The key is saved to `~/.gitnexus/config.json` and reused for future runs.
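Illustrative command lines for the examples above, reconstructed from the documented flags (the key value is a placeholder):

```shell
# Basic usage: generate a wiki for the current repository
gitnexus wiki

# Custom model
gitnexus wiki --model gpt-4o-mini

# Local LLM via Ollama
gitnexus wiki --base-url http://localhost:11434/v1 --model llama3

# Force regeneration
gitnexus wiki --force

# Publish as a public GitHub Gist
gitnexus wiki --gist

# Save an API key (or set GITNEXUS_API_KEY / OPENAI_API_KEY instead)
gitnexus wiki --api-key <key>
```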
## Output

The command displays a progress bar with phase updates.

### Successful Generation

### Already Up to Date

### Generated Files
Wiki output is saved to `.gitnexus/wiki/`.

### Viewing the Wiki
Open `index.html` in your browser.
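Assuming the default output location, one way to open the viewer from the repository root:

```shell
# macOS
open .gitnexus/wiki/index.html

# Linux
xdg-open .gitnexus/wiki/index.html
```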
## How It Works

### Phase 1: Graph Analysis (0-20%)
- Loads the knowledge graph from `.gitnexus/db`
- Identifies functional modules (communities)
- Extracts entry points and execution flows (processes)

### Phase 2: Module Generation (20-90%)
For each module:
- Query the graph for symbols, relationships, and processes
- Call the LLM with structured context
- Generate documentation (purpose, components, flows)
- Save markdown to `modules/{module}.md`

### Phase 3: Architecture Overview (90-100%)
Generates high-level architecture documentation by synthesizing module relationships.

### Phase 4: HTML Viewer Generation
Compiles all markdown files into a single-file HTML viewer with:
- Sidebar navigation
- Full-text search
- Syntax highlighting
- Responsive design
## LLM Configuration

### Providers

| Provider | Base URL | Model Example |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-4o-mini |
| OpenRouter | https://openrouter.ai/api/v1 | minimax/minimax-m2.5 |
| Ollama | http://localhost:11434/v1 | llama3 |
| Custom | Your URL | Your model |
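The rows above can be combined into full invocations; a sketch assuming the flag syntax documented under Options (`<key>` is a placeholder):

```shell
# OpenAI
gitnexus wiki --base-url https://api.openai.com/v1 --model gpt-4o-mini --api-key <key>

# OpenRouter (the default provider)
gitnexus wiki --base-url https://openrouter.ai/api/v1 --model minimax/minimax-m2.5 --api-key <key>

# Ollama (local server)
gitnexus wiki --base-url http://localhost:11434/v1 --model llama3
```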
### Default Model

`minimax/minimax-m2.5` via OpenRouter — fast, cheap, and high-quality for documentation generation.
## Cost Estimates
| Repository Size | Modules | Tokens | Cost (OpenRouter) |
|---|---|---|---|
| Small (< 100 files) | 5-10 | 50k | $0.05 |
| Medium (100-1,000 files) | 10-20 | 200k | $0.20 |
| Large (1,000-10,000 files) | 20-50 | 500k | $0.50 |
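As a sanity check, the table implies a blended rate of roughly $1 per million tokens (e.g. $0.20 for 200k tokens); a quick back-of-the-envelope calculation:

```shell
# cost ≈ tokens / 1e6 × rate per million tokens
awk 'BEGIN { printf "$%.2f\n", 200000 / 1000000 * 1.00 }'
```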
`minimax/minimax-m2.5` is optimized for cost.
GitHub Gist Publishing
The--gist flag publishes the HTML viewer to a public GitHub Gist.
Requirements
- GitHub CLI (
gh) installed: https://cli.github.com - Authenticated:
gh auth login
Automatic Publishing
If--gist is omitted, you’ll be prompted after generation:
### Output
## Caching and Incremental Updates

Wiki generation tracks the last indexed commit. If the repository hasn't changed, generation is skipped; use `--force` to regenerate.
## Troubleshooting

### "No GitNexus index found"
Run `gitnexus analyze` first:
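For example, index the repository and then retry generation:

```shell
gitnexus analyze
gitnexus wiki
```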
### "No LLM API key found"
Provide an API key via `--api-key`, or set the `GITNEXUS_API_KEY` or `OPENAI_API_KEY` environment variable.

### LLM API Error (401 Unauthorized)
Your API key is invalid. Reconfigure it with `--api-key`.

### Failed Modules
### Gist Publishing Fails
Ensure the GitHub CLI is authenticated with `gh auth login`.

## Performance
| Repository Size | Modules | Time (concurrency=3) |
|---|---|---|
| Small (< 100 files) | 5-10 | 30-60s |
| Medium (100-1,000 files) | 10-20 | 1-3 min |
| Large (1,000-10,000 files) | 20-50 | 3-10 min |
## See Also

- `gitnexus analyze` — index the repository first
- Web UI Overview — an alternative documentation viewer