Prerequisites
Before installing Local GPT, ensure you have:
- Obsidian version 0.15.0 or higher
- An AI provider configured (Ollama, OpenAI, or OpenAI-compatible API)
Local GPT requires the AI Providers plugin as a dependency. We’ll install this in the steps below.
Installation Steps
Step 1: Install AI Providers Plugin
The AI Providers plugin acts as a central hub for managing AI connections in Obsidian. Install it from Settings → Community plugins → Browse, then enable it.
Step 2: Install Local GPT Plugin
- Obsidian Plugin Store (Recommended)
- BRAT (Beta Testing)
Step 3: Configure an AI Provider
With both plugins installed, you need to configure at least one AI provider.
Option 1: Ollama (Local, Recommended)
Ollama runs AI models completely offline on your machine.
Install Ollama
Download and install Ollama from ollama.ai
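On Linux, the download can also be done from the terminal; the commands below are a sketch assuming Ollama's standard install script and the gemma2 model used as the example later in this guide:

```shell
# Install Ollama via its official install script (Linux;
# macOS users can use the installer from ollama.ai instead)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model for Local GPT to use
ollama pull gemma2
```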
Configure in Obsidian
- Open Settings → AI Providers
- Click Add Provider → Ollama
- Set the URL to http://localhost:11434 (default)
- Select your model (e.g., gemma2:latest)
- Save the provider
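Before saving the provider, you can sanity-check that the Ollama server is reachable at the default URL (this assumes Ollama's standard HTTP API on port 11434):

```shell
# /api/tags lists the models you have pulled locally.
# A JSON response confirms the server is up at the default URL.
curl http://localhost:11434/api/tags
```

If the response lists your model (e.g., gemma2:latest), the same URL and model name will work in the AI Providers settings.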
Option 2: OpenAI API
Use OpenAI’s cloud models like GPT-4.
Get API Key
Sign up at platform.openai.com and create an API key
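You can verify the key works before entering it in Obsidian; this sketch uses OpenAI's standard /v1/models endpoint, and the key value is a placeholder for your own:

```shell
# Placeholder: replace with your actual API key
export OPENAI_API_KEY="sk-..."

# A valid key returns a JSON list of available models;
# an invalid key returns a 401 error
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```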
Configure in Obsidian
- Open Settings → AI Providers
- Click Add Provider → OpenAI
- Enter your API key
- Select a model (e.g., gpt-4o-mini)
- Save the provider
Option 3: OpenAI-Compatible API
Connect to any OpenAI-compatible endpoint (LM Studio, LocalAI, etc.).
Start Your Server
Ensure your OpenAI-compatible server is running (e.g., LM Studio, text-generation-webui with OpenAI extension)
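Most OpenAI-compatible servers expose the same /v1/models endpoint, so you can confirm the server is running before configuring it (the port below matches the example base URL in the next step; adjust it to your server's — LM Studio, for instance, defaults to 1234):

```shell
# Replace the port with whatever your server uses
curl http://localhost:8080/v1/models
```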
Configure in Obsidian
- Open Settings → AI Providers
- Click Add Provider → OpenAI Compatible
- Enter the base URL (e.g., http://localhost:8080/v1)
- Add API key if required
- Select or enter the model name
- Save the provider
Optional: Configure RAG (Enhanced Actions)
To enable context-aware responses using RAG, you need an embedding model configured as a separate provider.
RAG features work by analyzing your linked notes, backlinks, and PDFs to provide more contextual AI responses. Learn more in the RAG System guide.
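With Ollama, for example, an embedding model is pulled just like a chat model; the commands below assume the nomic-embed-text model referenced in the troubleshooting section and Ollama's embeddings API:

```shell
# Pull a dedicated embedding model for RAG
ollama pull nomic-embed-text

# Optional: confirm the model produces embeddings
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello"}'
```

Remember to add it in Settings → AI Providers as its own embedding provider, separate from your main chat provider.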
Optional: Configure Hotkeys
Set up keyboard shortcuts for quick access to Local GPT features in Settings → Hotkeys.
Verify Installation
To confirm everything is working, select some text in a note and run a Local GPT action from the context menu. If you see AI-generated content appear in your note, installation is complete!
Troubleshooting
No AI Providers showing up
Make sure the AI Providers plugin is installed and enabled. Restart Obsidian if needed.
Ollama connection failed
- Verify Ollama is running: run ollama list in a terminal
- Check the URL is http://localhost:11434
- Try pulling the model again: ollama pull gemma2
Actions not appearing in context menu
Make sure the Local GPT plugin is enabled and at least one AI provider is configured. Restart Obsidian if needed.
Embedding provider errors
- Embedding providers are optional for basic functionality
- Ensure you’ve pulled an embedding model: ollama pull nomic-embed-text
- Verify the embedding provider is configured separately from the main provider
Next Steps
Quickstart Guide
Learn how to use Local GPT effectively
AI Providers Setup
Advanced provider configuration options
Create Custom Actions
Build actions tailored to your workflow
Action Palette
Master the powerful Action Palette feature