Troubleshooting
This guide covers common issues you might encounter when using Local GPT and how to resolve them.
AI Provider Connection Issues
Cannot Connect to Local AI Server
Symptom
- “Error while generating text: Failed to fetch”
- “Network request failed”
- Provider shows as unavailable in settings
Causes & Solutions
Verify Server is Running
- Open LM Studio
- Click “Local Server” tab
- Ensure server is started on port 8080 (or your configured port)
Check URL Configuration
- Ollama: `http://localhost:11434`
- LM Studio: `http://localhost:8080/v1`
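If the URLs look right, a small script can tell a configuration problem apart from a connectivity problem. This is an illustrative sketch, not part of Local GPT; the `/v1/models` path for LM Studio and the 2-second timeout are assumptions, so adjust ports and paths to match your setup.

```typescript
// Probe the default local AI endpoints mentioned above.
const endpoints: Record<string, string> = {
  Ollama: "http://localhost:11434",
  "LM Studio": "http://localhost:8080/v1/models",
};

async function probe(name: string, url: string): Promise<boolean> {
  try {
    // Any HTTP response (even 404) proves the server is listening.
    const res = await fetch(url, { signal: AbortSignal.timeout(2000) });
    console.log(`${name}: reachable (HTTP ${res.status})`);
    return true;
  } catch {
    console.log(`${name}: unreachable. Is the server running?`);
    return false;
  }
}

(async () => {
  for (const [name, url] of Object.entries(endpoints)) {
    await probe(name, url);
  }
})();
```

A refused connection here means the server isn't running or is on a different port; a response with an error status points at the URL path or the server configuration instead.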
Check Firewall
- Port 11434 (Ollama)
- Port 8080 (LM Studio, or your configured port)
Cloud API Authentication Errors
Symptom
- “Invalid API key”
- “Unauthorized” (401 error)
- “Authentication failed”
Solutions
Verify API Key
- Go to Settings → AI Providers → [Your Provider]
- Check that the API key is entered correctly (no extra spaces)
- Confirm the key is valid in your provider’s dashboard:
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com/settings/keys
Check Key Permissions
Verify the key can:
- Access the model you’re trying to use
- Make chat completions (not just embeddings)
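Keys that fail validation often carry invisible copy/paste artifacts. A minimal sketch of the checks worth running before blaming the provider; `checkApiKey` is a hypothetical helper, not part of Local GPT, and the length threshold is an arbitrary illustration.

```typescript
// Hypothetical sanity check for a pasted API key: catches the most
// common copy/paste mistakes before any network request is made.
function checkApiKey(key: string): string[] {
  const problems: string[] = [];
  if (key !== key.trim()) problems.push("leading/trailing whitespace");
  if (/\s/.test(key.trim())) problems.push("contains internal whitespace");
  if (key.trim().length < 20) problems.push("looks too short to be a full key");
  return problems;
}

console.log(checkApiKey(" sk-example ")); // flags the stray spaces
```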
Model Not Found
Symptom
- “Model not found”
- “Unknown model”
- Model doesn’t appear in dropdown
Solutions
Pull/Download the Model
- Search for the model in the “Discover” tab
- Click “Download”
- Wait for download to complete
Verify Model Name
- ✅ `llama3.2:latest`
- ❌ `llama3.2` (missing tag for Ollama)
- ✅ `gpt-4-turbo`
- ❌ `gpt4-turbo` (incorrect hyphenation)
Refresh Model List
- Go to the provider configuration
- Open the model dropdown
- The list should refresh automatically
Embedding Model Issues
Embedding Model Not Found
Symptom
- Enhanced Actions (RAG) don’t work
- “Error processing related documents”
- No embedding provider configured warning
Solutions
Configure Embedding Provider
- Go to Settings → AI Providers
- Find Embedding AI Provider
- Select a provider from the dropdown
Download an Embedding Model
- `mxbai-embed-large`
- `all-minilm`
Verify Model in Provider
- In AI Providers settings, edit your Ollama provider
- Ensure the embedding model is listed
- Save the provider configuration
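You can confirm the model actually landed in Ollama with its `/api/tags` endpoint, which lists installed models. A sketch under the assumption of a default Ollama install at `localhost:11434`; the model name check is illustrative.

```typescript
// List the models an Ollama instance reports via its /api/tags endpoint.
async function listOllamaModels(base = "http://localhost:11434"): Promise<string[]> {
  try {
    const res = await fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(2000) });
    const data = (await res.json()) as { models?: { name: string }[] };
    return (data.models ?? []).map((m) => m.name);
  } catch {
    return []; // server unreachable or response not JSON
  }
}

(async () => {
  const models = await listOllamaModels();
  console.log(
    models.some((m) => m.startsWith("mxbai-embed-large"))
      ? "Embedding model installed"
      : "Embedding model missing: run `ollama pull mxbai-embed-large`",
  );
})();
```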
Enhanced Actions Not Working
Symptom
- RAG doesn’t include linked documents
- No context from vault files
- Status bar doesn’t show “✨ Enhancing”
Checklist
Verify Links Exist
- Wiki-links: `[[Document Name]]`
- Markdown links: `[text](path/to/file.md)`
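To double-check a note really contains links in one of these formats, you can grep for them. The patterns below approximate the two link syntaxes for illustration; they are not Local GPT's actual link parser.

```typescript
// Approximate patterns for the two link formats listed above.
const wikiLink = /\[\[([^\]]+)\]\]/g;
const mdLink = /\[([^\]]*)\]\(([^)]+)\)/g;

const note = "See [[Document Name]] and [details](path/to/file.md).";
console.log([...note.matchAll(wikiLink)].map((m) => m[1])); // ["Document Name"]
console.log([...note.matchAll(mdLink)].map((m) => m[2]));   // ["path/to/file.md"]
```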
Check Embedding Provider
Check Context Limit
- Ensure it’s not set too low
- Try “Cloud models” (32K) or higher
Performance Issues
Slow Response Times
Local Model is Slow
- Model is too large for your hardware
- Insufficient RAM/VRAM
- CPU inference instead of GPU
Use a Smaller Model
- Instead of `llama3.2:70b` → try `llama3.2:8b`
- Instead of `mistral:7b` → try `mistral:7b-q4_0` (quantized)
Enable GPU Acceleration
- Ensure CUDA (NVIDIA) or ROCm (AMD) is installed
- Verify GPU is detected: `ollama ps`
- Settings → Hardware → Enable GPU offloading
- Increase GPU layers if you have VRAM
Enhanced Actions Take Too Long
- Too many linked documents
- Large PDF files
- High context limit
High Memory Usage
Symptom
- Obsidian becomes slow or unresponsive
- System memory fills up
- App crashes
Solutions
Clear IndexedDB Cache
- Open Developer Console (`Ctrl+Shift+I` / `Cmd+Option+I`)
- Go to Application tab
- Find IndexedDB → `local-gpt-file-cache`
- Right-click → Delete database
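The same steps can be done in one line from the console. A sketch assuming the database name `local-gpt-file-cache` shown above; run it in Obsidian's developer console, where `indexedDB` is available (it is not defined in plain Node).

```typescript
// Delete Local GPT's IndexedDB cache programmatically.
function clearLocalGptCache(): void {
  const idb = (globalThis as any).indexedDB;
  if (!idb) {
    console.log("indexedDB is only available inside Obsidian's developer console");
    return;
  }
  const req = idb.deleteDatabase("local-gpt-file-cache");
  req.onsuccess = () => console.log("Cache cleared");
  req.onerror = () => console.error("Could not delete cache", req.error);
}

clearLocalGptCache();
```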
Quality Issues
Poor Response Quality
Responses are Generic or Off-Topic
- Insufficient context
- Wrong creativity setting
- Model not suited for the task
Adjust Creativity
- Too creative → Lower to “Low” or “None”
- Too rigid → Increase to “Medium” or “High”
Use a Better Model
- Local: Try `llama3.2:70b` or `mixtral:8x7b`
- Cloud: Try `gpt-4-turbo` or `claude-3-opus`
Responses are Too Verbose or Unfocused
- Too much context overwhelms the model
- High creativity setting
Logging and Debugging
Enable Development Logging
To see detailed logs:
Understanding Log Output
Common Error Messages
Error: No AI provider found
Error processing related documents: [message]
Failed to extract text from PDF: [message]
- Try opening the PDF in another app
- If password-protected, remove protection
- If scanned images, use OCR to create searchable PDF
Translation missing: [key]
How to Report Bugs
If you encounter an issue not covered here:
Gather Information
- Obsidian version
- Local GPT version (Settings → Community Plugins → Local GPT)
- Operating system
- Error messages from Developer Console
- Steps to reproduce
Check Existing Issues
Create a New Issue
- Clear title: “[Component] Brief description”
- Description: What happened vs. what you expected
- Steps to reproduce: Numbered list
- Logs: Relevant console output
- Environment: OS, versions, AI provider used
Source Code References
For developers debugging issues, relevant source files:
- Logger: `src/logger.ts` - Logging system (lines 1-186)
- Error Handling: `src/main.ts:649-660` - Provider request error handling
- RAG Errors: `src/main.ts:803-817` - Context processing error handling
- PDF Processing: `src/processors/pdf.ts:38-41` - PDF extraction errors