This is only helpful for self-hosted users. If you’re using Khoj Cloud, you can directly use any of the pre-configured AI models.

Overview

Khoj can use Google’s Gemini and Anthropic’s Claude family of AI models from Vertex AI on Google Cloud. Explore Anthropic and Gemini AI models available on Vertex AI’s Model Garden.

Setup

Step 1: Enable Vertex AI

Enable the Vertex AI API in your Google Cloud project by following Google's Vertex AI setup instructions.
Step 2: Create Service Account

Create Service Account credentials:
  1. Download the credentials keyfile in JSON format
  2. Base64-encode the credentials JSON keyfile, for example by running:
base64 -i <service_account_credentials_keyfile.json>
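The encode step can be verified end to end with a round trip before pasting the result into Khoj. A minimal sketch; the keyfile below is a stand-in for the credentials you actually downloaded:

```shell
# Stand-in keyfile -- replace with the JSON credentials you downloaded from GCP.
printf '{"type": "service_account"}' > service_account.json

# Encode the keyfile; this string is what you paste into Khoj later.
ENCODED_KEY="$(base64 -i service_account.json)"

# Decode it back to confirm nothing was mangled -- prints the original JSON.
echo "$ENCODED_KEY" | base64 -d
```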
Step 3: Create AI Model API

Create a new AI Model API on your Khoj admin panel:
  • Name: Google Vertex (or any friendly name you prefer)
  • Api Key: the base64-encoded JSON keyfile from step 2
  • Api Base Url: https://{MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/{YOUR_GCP_PROJECT_ID}
    • MODEL_GCP_REGION: a region the AI model is available in. For example, us-east5 works for Claude
    • YOUR_GCP_PROJECT_ID: your project ID from the Google Cloud dashboard
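For instance, with the placeholder region and project below, the resulting base URL looks like this:

```shell
# Placeholder values -- substitute your own region and GCP project ID.
MODEL_GCP_REGION="us-east5"
YOUR_GCP_PROJECT_ID="my-project"

# Fill the two placeholders into the Api Base Url template.
API_BASE_URL="https://${MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/${YOUR_GCP_PROJECT_ID}"
echo "$API_BASE_URL"
# → https://us-east5-aiplatform.googleapis.com/v1/projects/my-project
```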
Step 4: Create Chat Model

Create a new Chat Model on your Khoj admin panel:
  • Name: claude-3-7-sonnet@20250219 (any Claude or Gemini model on Vertex’s Model Garden should work)
  • Model Type: Anthropic or Google
  • Ai Model API: the Google Vertex AI Model API you created in step 3
  • Max prompt size: 60000 (replace with the max prompt size of your model)
  • Tokenizer: Do not set
Step 5: Start Using

Select the chat model on your settings page and start a conversation.

Troubleshooting & Tips

  • Ensure your service account has the Vertex AI User role and that the Vertex AI API is enabled in your GCP project.
  • Double-check that the model you're trying to use is supported in your selected region. Some Claude and Gemini models are restricted to specific regions like us-east5 or us-central1.
  • Set "Max prompt size" to align with the limits in your model's documentation. Exceeding it can cause requests to silently fail or truncate inputs.
  • Before adding the key to Khoj, verify that it works by making a simple curl request to Vertex AI. This helps debug auth issues early.
  • For better security, consider using environment variables to manage sensitive keys and inject them at runtime during base64 encoding.
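The curl sanity check mentioned above can be sketched as follows, assuming the Claude rawPredict endpoint on Vertex AI and an authenticated gcloud CLI. The region, project, and model values are placeholders; substitute your own:

```shell
# Placeholder values -- substitute your own region, project, and model.
MODEL_GCP_REGION="us-east5"
YOUR_GCP_PROJECT_ID="my-project"
MODEL_ID="claude-3-7-sonnet@20250219"

# Endpoint for Anthropic models on Vertex AI.
ENDPOINT="https://${MODEL_GCP_REGION}-aiplatform.googleapis.com/v1/projects/${YOUR_GCP_PROJECT_ID}/locations/${MODEL_GCP_REGION}/publishers/anthropic/models/${MODEL_ID}:rawPredict"

# Needs an authenticated gcloud CLI; without it the request fails cleanly.
curl -s -X POST "$ENDPOINT" \
  -H "Authorization: Bearer $(gcloud auth print-access-token 2>/dev/null)" \
  -H "Content-Type: application/json" \
  -d '{"anthropic_version": "vertex-2023-10-16", "max_tokens": 64, "messages": [{"role": "user", "content": "Hello"}]}' || true
```

A JSON completion (or a 401/403 error body) tells you whether auth and region are configured correctly before involving Khoj at all.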
If you encounter any issues, the Khoj Discord is a great place to ask for help!
