Self-hosting Khoj gives you complete control over your data and AI infrastructure. Run it on your laptop, home server, or private cloud.

Why Self-Host?

Complete Privacy

Your data never leaves your network. Use Khoj entirely offline if needed.

Full Customization

Choose any AI model, customize features, and configure everything to your needs.

No Limits

Unlimited usage with no rate limits or subscription costs.

Local AI Models

Run completely offline with local models like Llama, Qwen, or Mistral.
First time self-hosting? Restart your Khoj server after the first run to ensure all settings are applied correctly.

Choose Your Installation Method

Configure Chat Models

After installation, configure which AI models to use.

Access Admin Panel

  1. Navigate to: http://localhost:42110/server/admin
  2. Login with your admin credentials
Use localhost, not 127.0.0.1, to avoid CSRF errors.

Add Chat Models

Step 1: Create AI Model API Configuration

Go to AI Model API → Add AI Model API
  • Name: OpenAI
  • API Key: Your OpenAI API key
  • API Base URL: Leave empty (or set for proxy/Ollama)
[Screenshot: OpenAI configuration]
Step 2: Create Chat Model

Go to Chat Model → Add Chat Model
  • Chat Model: gpt-4o or gpt-4o-mini
  • Model Type: OpenAI
  • AI Model API: Select your OpenAI config
  • Vision Enabled: ✓ (for image support)
  • Max Prompt Size: 128000 (optional)
[Screenshot: Chat model configuration]
Step 3: Set Default Models

Go to Server Chat Settings → Select your default models:
  • Default: Used for general chat
  • Advanced: Used for complex reasoning and research
Save changes.

Local Model Requirements

For offline AI models:
Component   Minimum    Recommended
RAM         8GB        16GB+
VRAM        -          8GB+ (GPU)
Storage     5GB        20GB+
CPU         4 cores    8+ cores
NVIDIA/AMD GPUs or Apple Silicon significantly speed up local model inference.
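You can sanity-check whether a machine meets these requirements from a shell. A Linux-oriented sketch (`free` and `nproc` ship with most distributions; `nvidia-smi` only exists where NVIDIA drivers are installed):

```shell
# Total system RAM, in GB
free -g | awk '/^Mem:/ {print "RAM: " $2 " GB"}'

# Number of CPU cores available
echo "CPU cores: $(nproc)"

# VRAM of an NVIDIA GPU, if one (and its driver) is present
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=memory.total --format=csv,noheader \
  || echo "No NVIDIA GPU detected"
```

On macOS, Apple Silicon shares system RAM with the GPU, so the RAM row is the figure that matters there.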

Sync Your Data

Connect your documents to Khoj:

Desktop App

Auto-sync local folders continuously

Obsidian Plugin

Sync your Obsidian vault seamlessly

Emacs Package

Native integration for Emacs users

Web Upload

Drag and drop files in the web interface

Configure Client Connection

Set your self-hosted server URL in client settings:
http://localhost:42110
Or your server’s IP for remote access:
http://192.168.1.100:42110

Upgrade Khoj

cd ~/.khoj
docker-compose down
docker-compose pull
docker-compose up

Troubleshooting

Problem: Conflicting Python package versions
Solution: Use pipx or a virtual environment
# Install pipx
python -m pip install pipx
python -m pipx ensurepath

# Install Khoj with pipx
pipx install khoj
Problem: Container exits with a “Killed” message
Solution: Increase the Docker memory limit
  • Docker Desktop: Settings → Resources → Memory → 4GB minimum
Problem: CSRF verification failed
Solutions:
  • Use localhost instead of 127.0.0.1
  • Set KHOJ_DOMAIN in environment variables
  • Clear browser cookies and cache
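For the KHOJ_DOMAIN fix, a sketch of where the variable goes (the exact service name in your docker-compose.yml may differ from this layout):

```shell
# pip/pipx install: export the domain you browse to before starting Khoj
export KHOJ_DOMAIN=localhost

# Docker install: set the same variable in ~/.khoj/docker-compose.yml,
# under the Khoj server service, e.g.
#   environment:
#     - KHOJ_DOMAIN=localhost
```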
Problem: Cannot build the tokenizers package
Solution: Install the Rust compiler
# macOS
brew install rustup
rustup-init
source ~/.cargo/env

# Linux
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Problem: Cannot connect to local Ollama
Solutions:
  • Ensure Ollama is running: ollama serve
  • Check URL is correct: http://localhost:11434/v1/
  • For Docker, use: http://host.docker.internal:11434/v1/
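The Ollama checks above can be run from a shell. A sketch, assuming Ollama's default port and its OpenAI-compatible `/v1` API:

```shell
# URL Khoj should use for a native (non-Docker) install
OLLAMA_BASE_URL="http://localhost:11434/v1/"
# Inside a Khoj Docker container, localhost is the container itself,
# so point at the host machine instead:
#   OLLAMA_BASE_URL="http://host.docker.internal:11434/v1/"

# List the models Ollama is serving; a connection error here usually
# means `ollama serve` is not running or the port is wrong
curl -s "${OLLAMA_BASE_URL}models" || echo "Ollama not reachable at ${OLLAMA_BASE_URL}"
```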

Uninstall

Remove all Khoj containers, volumes, and data:
cd ~/.khoj
docker-compose down --volumes
rm -rf ~/.khoj

Next Steps

Remote Access

Access Khoj securely from anywhere

Use LiteLLM

Connect to 100+ AI models via proxy

Admin Panel

Advanced server configuration

Tailscale Setup

Secure private network access

Getting Help

Run into issues? We’re here to help.
