Why Self-Host?
Complete Privacy
Your data never leaves your network. Use Khoj entirely offline if needed.
Full Customization
Choose any AI model, customize features, and configure everything to your needs.
No Limits
Unlimited usage with no rate limits or subscription costs.
Local AI Models
Run completely offline with local models like Llama, Qwen, or Mistral.
First time self-hosting? Restart your Khoj server after the first run to ensure all settings are applied correctly.
Choose Your Installation Method
- Docker (Recommended)
- Pip Install
Docker provides the easiest setup with all dependencies included. Perfect for most users.
Prerequisites
MacOS

Install Docker Desktop or use Homebrew:
- Option 1: Docker Desktop. Download from Docker's website.
- Option 2: Homebrew: brew install --cask docker
Windows
Linux
Installation Steps
Download Docker Compose File
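One way to fetch the compose file, assuming it still lives at the root of the khoj-ai/khoj repository:

```shell
# Create a working directory and download the sample compose file
mkdir -p ~/.khoj && cd ~/.khoj
wget https://raw.githubusercontent.com/khoj-ai/khoj/master/docker-compose.yml
```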
Configure Environment Variables
Open docker-compose.yml and set these essential variables:
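A sketch of the relevant environment section; the variable names below follow Khoj's sample compose file, but verify them against your downloaded copy:

```yaml
services:
  server:
    environment:
      # Admin login for http://localhost:42110/server/admin
      - KHOJ_ADMIN_EMAIL=you@example.com
      - KHOJ_ADMIN_PASSWORD=change-me
      # Optional: API key for hosted model providers
      - OPENAI_API_KEY=sk-...
      # Must match the credentials of the database service
      - POSTGRES_PASSWORD=postgres
```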
Start Khoj
Wait for the message: 🌖 Khoj is ready to engage

First startup takes 2-5 minutes to download models and initialize the database.
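A typical compose invocation for this step (the exact commands may differ slightly if you use the newer docker compose plugin instead of docker-compose):

```shell
cd ~/.khoj                     # directory containing docker-compose.yml
docker-compose up -d           # start all containers in the background
docker-compose logs -f server  # follow logs until the ready message appears
```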
Docker Architecture
The Docker setup includes multiple containers:

| Container | Purpose | Port |
|---|---|---|
| server | Main Khoj application | 42110 |
| database | PostgreSQL with pgvector | 5432 |
| sandbox | Python code execution (Terrarium) | 8080 |
| search | SearxNG web search engine | 8080 |
| computer | Optional VNC desktop for automation | 5900 |
Advanced Docker Configuration
Use Local Ollama Models
To use Ollama running on your host machine:
- Start Ollama on your host: ollama serve
- Uncomment the relevant lines in docker-compose.yml
- Restart: docker-compose restart
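What those commented-out lines look like varies between releases; roughly, they point the server at the host's Ollama through its OpenAI-compatible endpoint. The environment variable name here is an assumption, so check it against your compose file:

```yaml
services:
  server:
    environment:
      # Assumed variable name; points Khoj at the host machine's Ollama
      - OPENAI_BASE_URL=http://host.docker.internal:11434/v1/
    extra_hosts:
      # Lets the container resolve the Docker host, needed on Linux
      - "host.docker.internal:host-gateway"
```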
Enable Remote Access
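Remote access generally means telling Khoj which hostname clients will use, so its CSRF checks pass. A sketch, with KHOJ_NO_HTTPS as an assumed flag for setups behind a TLS-terminating reverse proxy:

```yaml
services:
  server:
    environment:
      # Hostname or IP that remote clients will use to reach Khoj
      - KHOJ_DOMAIN=khoj.example.com
      # Assumed flag: only safe behind a reverse proxy that terminates TLS
      - KHOJ_NO_HTTPS=True
```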
Disable Telemetry
Khoj collects anonymous usage data to improve the product. To opt out:
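A sketch of the opt-out; the exact variable name is an assumption, so confirm it in the sample compose file:

```yaml
services:
  server:
    environment:
      # Assumed variable name for the telemetry opt-out
      - KHOJ_TELEMETRY_DISABLE=True
```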
Configure Chat Models
After installation, configure which AI models to use.

Access Admin Panel
- Navigate to: http://localhost:42110/server/admin
- Log in with your admin credentials
Add Chat Models
Create AI Model API Configuration
Go to AI Model API → Add AI Model API
- OpenAI
- Anthropic
- Google Gemini
- Local (Ollama)

For the OpenAI provider, set:
- Name: OpenAI
- API Key: Your OpenAI API key
- API Base URL: Leave empty (or set it for a proxy or Ollama)

Create Chat Model
Go to Chat Model → Add Chat Model
- OpenAI
- Anthropic
- Google Gemini
- Local Model
For an OpenAI chat model, set:
- Chat Model: gpt-4o or gpt-4o-mini
- Model Type: OpenAI
- AI Model API: Select your OpenAI config
- Vision Enabled: ✓ (for image support)
- Max Prompt Size: 128000 (optional)

Local Model Requirements
For offline AI models:

| Component | Minimum | Recommended |
|---|---|---|
| RAM | 8GB | 16GB+ |
| VRAM | - | 8GB+ (GPU) |
| Storage | 5GB | 20GB+ |
| CPU | 4 cores | 8+ cores |
NVIDIA/AMD GPUs or Apple Silicon significantly speed up local model inference.
Sync Your Data
Connect your documents to Khoj:

Desktop App
Auto-sync local folders continuously
Obsidian Plugin
Sync your Obsidian vault seamlessly
Emacs Package
Native integration for Emacs users
Web Upload
Drag and drop files in the web interface
Configure Client Connection
Set your self-hosted server URL in client settings.

Upgrade Khoj
- Docker
- Pip
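Typical upgrade commands for each method. Note the PyPI package name has changed over time, so it may be khoj or khoj-assistant depending on when you installed:

```shell
# Docker: pull newer images and recreate the containers
docker-compose pull
docker-compose up -d

# Pip: upgrade the package in the same environment you installed into
pip install --upgrade khoj
```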
Troubleshooting
Dependency Conflicts (Pip)
Problem: Conflicting Python package versions

Solution: Use pipx or a virtual environment
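Either isolation approach looks roughly like this (again, the package name may be khoj or khoj-assistant depending on your version):

```shell
# Option 1: pipx keeps Khoj and its dependencies in their own environment
pipx install khoj

# Option 2: a dedicated virtual environment
python3 -m venv ~/.khoj-env
source ~/.khoj-env/bin/activate
pip install khoj
```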
Docker 'Killed' Error
Problem: Container exits with a "Killed" message

Solution: Increase the Docker memory limit
- Docker Desktop: Settings → Resources → Memory → 4GB minimum
CSRF Error on Admin Panel
Problem: CSRF verification failed

Solutions:
- Use localhost instead of 127.0.0.1
- Set KHOJ_DOMAIN in the environment variables
- Clear browser cookies and cache
Tokenizer Build Fails

Problem: Cannot build the tokenizers package

Solution: Install the Rust compiler
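The Rust toolchain is easiest to get via the official rustup installer:

```shell
# Install rustup (official installer), then retry the install in a fresh shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```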
Ollama Not Connecting
Problem: Cannot connect to local Ollama

Solutions:
- Ensure Ollama is running: ollama serve
- Check the URL is correct: http://localhost:11434/v1/
- For Docker, use: http://host.docker.internal:11434/v1/
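A quick connectivity check against Ollama's OpenAI-compatible API. The container name below is a placeholder, and the in-container check assumes curl is available in the image:

```shell
# From the host: list the models Ollama is serving
curl http://localhost:11434/v1/models

# From inside the Khoj server container (substitute your container name)
docker exec -it <khoj-server-container> curl http://host.docker.internal:11434/v1/models
```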
Uninstall
- Docker
- Pip
Remove all Khoj containers, volumes, and data:
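A sketch of the removal commands; the ~/.khoj data directory is an assumption for pip installs:

```shell
# Docker: stop containers and delete named volumes (destroys all data)
docker-compose down --volumes

# Pip: remove the package and, if desired, local application data
pip uninstall khoj
rm -rf ~/.khoj
```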
Next Steps
Remote Access
Access Khoj securely from anywhere
Use LiteLLM
Connect to 100+ AI models via proxy
Admin Panel
Advanced server configuration
Tailscale Setup
Secure private network access
Getting Help
Run into issues? We're here to help:
- Discord: discord.gg/BDgyabRM6e - Active community support
- GitHub Issues: github.com/khoj-ai/khoj/issues
- Documentation: Browse guides in the Advanced section
