
Overview

Local GPT requires the AI Providers plugin to connect to AI models. This plugin acts as a central hub for managing all AI provider configurations in Obsidian.
The AI Providers plugin must be installed separately from the Obsidian community plugin store.

Installing AI Providers Plugin

  1. Open Settings → Community plugins
  2. Search for “AI Providers”
  3. Install and enable the plugin
  4. Visit the AI Providers documentation for detailed setup instructions

Provider Types

Local GPT uses three types of AI providers:

Main Provider

The primary AI model used for text generation, completions, and general assistant actions. Location in settings: src/LocalGPTSettingTab.ts:104-119
new Setting(containerEl)
  .setHeading()
  .setName(I18n.t("settings.mainProvider"))
  .addDropdown((dropdown) =>
    dropdown
      .addOptions(providers)
      .setValue(String(this.plugin.settings.aiProviders.main))
      .onChange(async (value) => {
        this.plugin.settings.aiProviders.main = value;
        await this.plugin.saveSettings();
      })
  );

Embedding Provider

Used for Enhanced Actions (RAG) to understand and retrieve relevant context from your vault. Recommended models:
  • English: nomic-embed-text (fastest)
  • Multilingual: bge-m3 (slower, but more accurate for other languages)
Location in settings: src/LocalGPTSettingTab.ts:121-136
The embedding provider enables Local GPT to search through links, backlinks, and even PDF files to provide relevant context for your actions.
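
Under the hood, embedding-based retrieval ranks vault chunks by vector similarity to the query. A minimal sketch of the core scoring step (the function names and data here are illustrative, not the plugin's actual implementation):

```typescript
// Cosine similarity: how two embedding vectors are compared.
// Illustrative only - Local GPT's real retrieval pipeline lives in the plugin.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank note chunks against a query embedding and keep the best matches.
function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k: number,
): string[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.text);
}
```

A model like bge-m3 produces longer vectors than nomic-embed-text, but the comparison step is the same; what changes is how well the vectors capture meaning in your content's language.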

Vision Provider

Enables AI to analyze images embedded in your notes. Recommended models:
  • Ollama: bakllava, llava
  • OpenAI: gpt-4-vision-preview, gpt-4o
Location in settings: src/LocalGPTSettingTab.ts:138-153

Configuring Ollama Provider

Step 1: Install Ollama

Download and install Ollama from ollama.ai

Step 2: Pull Models

# Main model for text generation
ollama pull llama3.2

# Embedding model for RAG (English)
ollama pull nomic-embed-text

# Or for multilingual support
ollama pull bge-m3

# Vision model (optional)
ollama pull bakllava
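
After pulling, you can confirm the server is running and the models are present. This snippet assumes the default endpoint; adjust the port if you changed it:

```shell
# Check that the Ollama server answers on the default port, then list models.
if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
  ollama list
else
  echo "Ollama is not reachable on localhost:11434 - start it with 'ollama serve'"
fi
```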

Step 3: Configure in AI Providers

  1. Open Settings → AI Providers
  2. Click Add Provider
  3. Select Ollama from the provider type dropdown
  4. Configure the endpoint (default: http://localhost:11434)
  5. Select your models for each capability

Step 4: Set Providers in Local GPT

  1. Open Settings → Local GPT
  2. Select your Ollama provider from the Main Provider dropdown
  3. Select your embedding model from the Embedding Provider dropdown
  4. (Optional) Select your vision model from the Vision Provider dropdown
For best RAG results, prefer a capable model with a large context window: a larger window lets more of the retrieved context be included in the prompt.

Configuring OpenAI-Compatible Providers

Local GPT works with any OpenAI-compatible API endpoint through the AI Providers plugin.
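
An OpenAI-compatible endpoint accepts a chat-completion request at /v1/chat/completions; a minimal request body looks like this (the model name is illustrative):

```json
{
  "model": "llama3.2",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Summarize the selected note." }
  ],
  "temperature": 0.2
}
```

Any service that accepts this shape, authenticated with a bearer API key, can be added as a custom provider.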

Supported Services

  • OpenAI
  • Azure OpenAI
  • Anthropic Claude
  • Google Gemini
  • Groq
  • Together AI
  • Any custom OpenAI-compatible endpoint

Configuration Steps

  1. Open Settings → AI Providers
  2. Click Add Provider
  3. Select the provider type or choose Custom OpenAI-compatible
  4. Enter your API credentials
  5. Configure endpoint URL (if custom)
  6. Select available models

Example: OpenAI Configuration

  1. Provider Type: OpenAI
  2. API Key: Your OpenAI API key
  3. Models available:
    • Main: gpt-4, gpt-3.5-turbo
    • Embedding: text-embedding-3-small, text-embedding-ada-002
    • Vision: gpt-4-vision-preview, gpt-4o
API keys are stored locally in Obsidian. Never commit your vault’s .obsidian folder to public repositories if it contains API credentials.

Model Selection Best Practices

For Main Provider

  • Local development: Use smaller, faster models like llama3.2 or mistral
  • Production use: Use larger models like gpt-4 or claude-3-opus for better quality
  • Speed vs. quality: Balance based on your use case

For Embedding Provider

  • Must match your content language: Use multilingual models like bge-m3 for non-English content
  • Vault size matters: Larger vaults benefit from better embedding models
  • Consistency: Use the same embedding model for all indexing to maintain quality

For Vision Provider

  • Image quality: Higher resolution images require more capable models
  • Speed: Vision models are generally slower; use only when needed
  • Cost: Cloud vision APIs can be expensive; consider local alternatives like llava

Troubleshooting

Provider Not Appearing in Dropdown

  1. Ensure AI Providers plugin is installed and enabled
  2. Restart Obsidian
  3. Check that the provider is properly configured in AI Providers settings

Connection Errors

  • Ollama: Verify Ollama is running (ollama list in terminal)
  • Cloud APIs: Check API key validity and network connection
  • Custom endpoints: Verify URL format and accessibility

Model Not Available

  • Ollama: Pull the model first using ollama pull <model-name>
  • Cloud APIs: Ensure you have access to the model in your API account
  • Check AI Providers settings: Model must be configured in the provider settings

Advanced Configuration

Temperature Settings

You can set default creativity levels in Local GPT settings:
// From src/defaultSettings.ts:49-62
export const CREATIVITY: { [index: string]: any } = {
  "": {
    temperature: 0,
  },
  low: {
    temperature: 0.2,
  },
  medium: {
    temperature: 0.5,
  },
  high: {
    temperature: 1,
  },
};
  • None (0): Deterministic, consistent outputs
  • Low (0.2): Slightly varied, good for factual tasks
  • Medium (0.5): Balanced creativity and consistency
  • High (1.0): Maximum creativity, best for creative writing
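
The map above is keyed by the creativity name, so resolving an action's temperature is a plain lookup. A sketch (the map mirrors the values from defaultSettings.ts; the resolve helper is illustrative):

```typescript
// Same shape and values as the plugin's CREATIVITY map shown above.
const CREATIVITY: { [index: string]: { temperature: number } } = {
  "": { temperature: 0 },
  low: { temperature: 0.2 },
  medium: { temperature: 0.5 },
  high: { temperature: 1 },
};

// Resolve a creativity name to a temperature, falling back to deterministic.
function resolveTemperature(creativity: string): number {
  return (CREATIVITY[creativity] ?? CREATIVITY[""]).temperature;
}
```

For example, `resolveTemperature("medium")` returns 0.5, and an unknown name falls back to 0.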

Context Limits

Configure how much context to retrieve for Enhanced Actions:
// From src/interfaces.ts:8-13
defaults: {
  creativity: string;
  contextLimit?: string; // 'local' | 'cloud' | 'advanced' | 'max'
}
  • local: Optimized for local models with limited context windows
  • cloud: Balanced for cloud APIs
  • advanced: More context for capable models
  • max: Maximum context retrieval
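
In practice each tier translates to a retrieval budget. The token numbers below are purely illustrative placeholders (the plugin defines its actual budgets internally), but they show how such a tier-to-budget mapping works:

```typescript
// Hypothetical token budgets per contextLimit tier - illustrative values only,
// not the plugin's real numbers.
type ContextLimit = "local" | "cloud" | "advanced" | "max";

const CONTEXT_BUDGETS: Record<ContextLimit, number> = {
  local: 2_000,     // small budget for limited local context windows
  cloud: 8_000,     // balanced budget for typical cloud APIs
  advanced: 32_000, // larger budget for capable long-context models
  max: 128_000,     // take as much relevant context as the model allows
};

// Trim retrieved chunks so the total stays within the tier's budget,
// using a rough 4-characters-per-token heuristic.
function fitToBudget(chunks: string[], limit: ContextLimit): string[] {
  const budget = CONTEXT_BUDGETS[limit] * 4; // budget in characters
  const kept: string[] = [];
  let used = 0;
  for (const chunk of chunks) {
    if (used + chunk.length > budget) break;
    kept.push(chunk);
    used += chunk.length;
  }
  return kept;
}
```

With the same retrieved chunks, the `local` tier keeps fewer of them than `cloud` or `max`, which is why local models benefit from the smaller setting.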

Next Steps

  • Creating Custom Actions: learn how to create custom actions with specific prompts
  • Prompt Templating: use template keywords for dynamic prompts
