
Prerequisites

Before installing Local GPT, ensure you have:
  • Obsidian version 0.15.0 or higher
  • An AI provider configured (Ollama, OpenAI, or OpenAI-compatible API)
Local GPT requires the AI Providers plugin as a dependency. We’ll install this in the steps below.

Installation Steps

Step 1: Install AI Providers Plugin

The AI Providers plugin acts as a central hub for managing AI connections in Obsidian.
  1. Open Community Plugins: In Obsidian, open Settings → Community plugins → Browse.
  2. Search for AI Providers: Type AI Providers in the plugin browser search field.
  3. Install and Enable: Click Install, then Enable the plugin.

You can also install AI Providers directly from the Obsidian plugin store: AI Providers

Step 2: Install Local GPT Plugin

Install Local GPT the same way: open Settings → Community plugins → Browse, search for Local GPT, then click Install and Enable.

Step 3: Configure an AI Provider

With both plugins installed, you need to configure at least one AI provider.

Option A: OpenAI

Use OpenAI’s cloud models like GPT-4.
  1. Get API Key: Sign up at platform.openai.com and create an API key.
  2. Configure in Obsidian:
     • Open Settings → AI Providers
     • Click Add Provider → OpenAI
     • Enter your API key
     • Select a model (e.g., gpt-4o-mini)
     • Save the provider
  3. Set as Main Provider: In Settings → Local GPT, select your OpenAI provider as the Main Provider.
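
To confirm the key works before pointing Local GPT at it, you can list the models it can access from a terminal. This is a sketch, assuming curl is installed and your key is exported as the OPENAI_API_KEY environment variable:

```shell
# List the models visible to this API key; a JSON model list means the key
# is valid, while a 401 response means the key is wrong or revoked.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```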
Note: Using OpenAI’s API sends your data to their servers and incurs usage costs.

Option B: OpenAI-Compatible Endpoint

Connect to any OpenAI-compatible endpoint (LM Studio, LocalAI, etc.).
  1. Start Your Server: Ensure your OpenAI-compatible server is running (e.g., LM Studio, text-generation-webui with the OpenAI extension).
  2. Configure in Obsidian:
     • Open Settings → AI Providers
     • Click Add Provider → OpenAI Compatible
     • Enter the base URL (e.g., http://localhost:8080/v1)
     • Add an API key if required
     • Select or enter the model name
     • Save the provider
  3. Set as Main Provider: In Settings → Local GPT, select your provider as the Main Provider.
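
Before wiring the endpoint into Obsidian, it can help to confirm the server actually answers on the base URL you plan to enter. A minimal check, assuming the server runs at http://localhost:8080/v1 as in the example above:

```shell
# OpenAI-compatible servers expose a model listing at <base URL>/models.
# A JSON response here means the base URL is correct and the server is up.
curl http://localhost:8080/v1/models
```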

Optional: Configure RAG (Enhanced Actions)

To enable context-aware responses using RAG, you need an embedding model:
  1. Install an Embedding Model: For Ollama users, pull an embedding model:

# For English content
ollama pull nomic-embed-text

# For multilingual content
ollama pull bge-m3

  2. Configure Embedding Provider:
     • In Settings → AI Providers, add a new Ollama provider
     • Select your embedding model (e.g., nomic-embed-text:latest)
     • In Settings → Local GPT, set this as the Embedding Provider
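
To check that Ollama can actually serve the embedding model before relying on it for RAG, you can request a test embedding directly. A sketch, assuming Ollama is running on its default port 11434:

```shell
# Ask Ollama for an embedding of a short string; a JSON "embedding" array
# in the response means the model is pulled and working.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```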
RAG features work by analyzing your linked notes, backlinks, and PDFs to provide more contextual AI responses. Learn more in the RAG System guide.

Optional: Configure Hotkeys

Set up keyboard shortcuts for quick access to Local GPT features:
  1. Open Hotkey Settings: Go to Settings → Hotkeys.
  2. Search for Local GPT: Type Local in the search bar to filter Local GPT commands.
  3. Assign Shortcuts:
     • Local GPT: Show context menu - e.g., Cmd+M or Ctrl+M
     • Local GPT: Action Palette - e.g., Cmd+J or Ctrl+J

Verify Installation

To confirm everything is working:
  1. Open a Note: Create or open any note in your vault.
  2. Select Text: Highlight a sentence or paragraph.
  3. Open Context Menu: Right-click the selection (or use your hotkey) and look for Local GPT actions.
  4. Test an Action: Try the Summarize action to verify your AI provider is working.
If you see AI-generated content appear in your note, installation is complete!

Troubleshooting

Plugin not found or not loading: Make sure the AI Providers plugin is installed and enabled. Restart Obsidian if needed.

Can’t connect to Ollama:
  • Verify Ollama is running: ollama list in terminal
  • Check the URL is http://localhost:11434
  • Try pulling the model again: ollama pull gemma2

No Local GPT actions appear:
  • Ensure the Local GPT plugin is enabled
  • Check that you have at least one Main Provider configured
  • Try reloading Obsidian

RAG or embedding issues:
  • Embedding providers are optional for basic functionality
  • Ensure you’ve pulled an embedding model: ollama pull nomic-embed-text
  • Verify the embedding provider is configured separately from the main provider
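
When working through the Ollama items above, two quick terminal checks cover most cases, assuming Ollama is on its default port 11434:

```shell
# Is the Ollama server reachable? This returns a JSON list of installed models.
curl http://localhost:11434/api/tags

# The same list from the CLI; an empty list means no models have been pulled yet.
ollama list
```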
For more help, see the Troubleshooting guide.

Next Steps

Quickstart Guide

Learn how to use Local GPT effectively

AI Providers Setup

Advanced provider configuration options

Create Custom Actions

Build actions tailored to your workflow

Action Palette

Master the powerful Action Palette feature
