## Overview

Local GPT requires the AI Providers plugin to connect to AI models. This plugin acts as a central hub for managing all AI provider configurations in Obsidian. The AI Providers plugin must be installed separately from the Obsidian community plugin store.
## Installing AI Providers Plugin
- Open Settings → Community plugins
- Search for “AI Providers”
- Install and enable the plugin
- Visit the AI Providers documentation for detailed setup instructions
## Provider Types

Local GPT uses three types of AI providers:

### Main Provider

The primary AI model used for text generation, completions, and general assistant actions. Location in settings: `src/LocalGPTSettingTab.ts:104-119`
### Embedding Provider

Used for Enhanced Actions (RAG) to understand and retrieve relevant context from your vault. Location in settings: `src/LocalGPTSettingTab.ts:121-136`

Recommended models:

- English: `nomic-embed-text` (fastest)
- Multilingual: `bge-m3` (slower, but more accurate for other languages)

The embedding provider enables Local GPT to search through links, backlinks, and even PDF files to provide relevant context for your actions.
### Vision Provider

Enables AI to analyze images embedded in your notes. Location in settings: `src/LocalGPTSettingTab.ts:138-153`

Recommended models:

- Ollama: `bakllava`, `llava`
- OpenAI: `gpt-4-vision-preview`, `gpt-4o`
## Configuring Ollama Provider

### Step 1: Install Ollama

Download and install Ollama from [ollama.ai](https://ollama.ai).

### Step 2: Pull Models
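Once Ollama is installed, the models recommended in this guide can be pulled from a terminal (a sketch — the model names below are the suggestions from the Provider Types section; substitute your own choices):

```shell
# Pull a main model for text generation and assistant actions
ollama pull llama3.2

# Pull an embedding model for Enhanced Actions (RAG)
ollama pull nomic-embed-text

# (Optional) Pull a vision model for image analysis
ollama pull llava
```

Each pull downloads the model weights locally, so the first run of each command can take a while.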
### Step 3: Configure in AI Providers
- Open Settings → AI Providers
- Click Add Provider
- Select Ollama from the provider type dropdown
- Configure the endpoint (default: `http://localhost:11434`)
- Select your models for each capability
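Before selecting the provider, you can sanity-check that the endpoint is reachable (a sketch assuming the default local install; `/api/tags` is Ollama's model-listing route):

```shell
# List models served by the local Ollama instance;
# a JSON response confirms the endpoint is reachable.
curl -s http://localhost:11434/api/tags
```

If this returns an error, make sure Ollama is running before continuing.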
### Step 4: Set Providers in Local GPT
- Open Settings → Local GPT
- Select your Ollama provider from the Main Provider dropdown
- Select your embedding model from the Embedding Provider dropdown
- (Optional) Select your vision model from the Vision Provider dropdown
## Configuring OpenAI-Compatible Providers

Local GPT works with any OpenAI-compatible API endpoint through the AI Providers plugin.

### Supported Services
- OpenAI
- Azure OpenAI
- Anthropic Claude
- Google Gemini
- Groq
- Together AI
- Any custom OpenAI-compatible endpoint
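As a sketch of what "OpenAI-compatible" means here: all of the services above accept a chat-completion request with the same shape. The model name and endpoint below are illustrative; substitute the values your provider documents:

```shell
# Write a minimal OpenAI-compatible chat completion request body.
# The model name is illustrative; use one your provider offers.
cat > request.json <<'EOF'
{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Hello"}],
  "temperature": 0.2
}
EOF

# Send it to your provider (replace the URL and $API_KEY):
# curl https://api.openai.com/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $API_KEY" \
#   -d @request.json
```

A custom endpoint only needs to accept this request format at its `/v1/chat/completions` path to work with the plugin.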
### Configuration Steps
- Open Settings → AI Providers
- Click Add Provider
- Select the provider type or choose Custom OpenAI-compatible
- Enter your API credentials
- Configure endpoint URL (if custom)
- Select available models
### Example: OpenAI Configuration

- Provider Type: OpenAI
- API Key: Your OpenAI API key
- Models available:
  - Main: `gpt-4`, `gpt-3.5-turbo`
  - Embedding: `text-embedding-3-small`, `text-embedding-ada-002`
  - Vision: `gpt-4-vision-preview`, `gpt-4o`
## Model Selection Best Practices

### For Main Provider

- Local development: Use smaller, faster models like `llama3.2` or `mistral`
- Production use: Use larger models like `gpt-4` or `claude-3-opus` for better quality
- Speed vs. quality: Balance based on your use case
### For Embedding Provider

- Must match your content language: Use multilingual models like `bge-m3` for non-English content
- Vault size matters: Larger vaults benefit from better embedding models
- Consistency: Use the same embedding model for all indexing to maintain quality
### For Vision Provider

- Image quality: Higher resolution images require more capable models
- Speed: Vision models are generally slower; use only when needed
- Cost: Cloud vision APIs can be expensive; consider local alternatives like `llava`
## Troubleshooting

### Provider Not Appearing in Dropdown
- Ensure AI Providers plugin is installed and enabled
- Restart Obsidian
- Check that the provider is properly configured in AI Providers settings
### Connection Errors

- Ollama: Verify Ollama is running (`ollama list` in terminal)
- Cloud APIs: Check API key validity and network connection
- Custom endpoints: Verify URL format and accessibility
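The Ollama checks above can be run together from a terminal (a sketch; `11434` is the default port, adjust it if you changed the endpoint):

```shell
# Confirm the Ollama CLI can reach the server and list installed models
ollama list

# Confirm the HTTP endpoint Local GPT will use is responding
curl -s http://localhost:11434/api/version
```

If `ollama list` works but the `curl` check fails, the server is likely bound to a different host or port than the one configured in AI Providers.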
### Model Not Available

- Ollama: Pull the model first using `ollama pull <model-name>`
- Cloud APIs: Ensure you have access to the model in your API account
- Check AI Providers settings: Model must be configured in the provider settings
## Advanced Configuration

### Temperature Settings

You can set default creativity levels in Local GPT settings:

- None (0): Deterministic, consistent outputs
- Low (0.2): Slightly varied, good for factual tasks
- Medium (0.5): Balanced creativity and consistency
- High (1.0): Maximum creativity, best for creative writing
### Context Limits

Configure how much context to retrieve for Enhanced Actions:

- local: Optimized for local models with limited context windows
- cloud: Balanced for cloud APIs
- advanced: More context for capable models
- max: Maximum context retrieval
## Related Resources

- Creating Custom Actions: Learn how to create custom actions with specific prompts
- Prompt Templating: Use template keywords for dynamic prompts