Troubleshooting

This guide covers common issues you might encounter when using Local GPT and how to resolve them.

AI Provider Connection Issues

Cannot Connect to Local AI Server

Error messages like:
  • “Error while generating text: Failed to fetch”
  • “Network request failed”
  • Provider shows as unavailable in settings
1. Verify Server is Running

For Ollama:
# Check if Ollama is running
ollama list

# Start Ollama if needed
ollama serve
For LM Studio:
  • Open LM Studio
  • Click “Local Server” tab
  • Ensure server is started on port 8080 (or your configured port)
2. Check URL Configuration

Default URLs:
  • Ollama: http://localhost:11434
  • LM Studio: http://localhost:8080/v1
Verify in Settings → AI Providers → [Your Provider]
3. Test Connection Manually

Open a terminal and test:
# For Ollama
curl http://localhost:11434/api/tags

# For OpenAI-compatible (LM Studio, etc.)
curl http://localhost:8080/v1/models
If these fail, the server is not accessible.
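These manual checks can be wrapped in a small script that reports each endpoint's status. A sketch using the curl commands above (`check_endpoint` is an illustrative helper, not part of Local GPT; adjust the ports if you changed them):

```shell
#!/bin/sh
# Probe an AI server endpoint and report whether it is reachable.
check_endpoint() {
    url="$1"
    # --max-time keeps the probe from hanging on a filtered port
    if curl -s --max-time 3 -o /dev/null "$url"; then
        echo "OK: $url is reachable"
    else
        echo "FAIL: $url is not accessible"
    fi
}

check_endpoint "http://localhost:11434/api/tags"   # Ollama
check_endpoint "http://localhost:8080/v1/models"   # LM Studio / OpenAI-compatible
```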
4. Check Firewall

Ensure your firewall allows connections to:
  • Port 11434 (Ollama)
  • Port 8080 (LM Studio, or your configured port)
On macOS:
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /path/to/ollama
5. Verify CORS (if applicable)

If running Obsidian in a sandboxed environment, CORS may block requests. For LM Studio, enable CORS in the server settings.
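To confirm whether the server is actually sending CORS headers, request the endpoint with an Origin header and inspect the response. A sketch (`cors_ok` is an illustrative helper that reads response headers on stdin so it can be tested offline; the app://obsidian.md origin reflects what Obsidian's desktop app typically sends):

```shell
# Inspect HTTP response headers (on stdin) for a CORS allow-origin header.
# Example:
#   curl -sI -H "Origin: app://obsidian.md" http://localhost:8080/v1/models | cors_ok
cors_ok() {
    if grep -qi "^access-control-allow-origin"; then
        echo "CORS headers present"
    else
        echo "no CORS headers; enable CORS in the server settings"
    fi
}
```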

Cloud API Authentication Errors

Symptoms:
  • “Invalid API key”
  • “Unauthorized” (401 error)
  • “Authentication failed”
1. Verify API Key

  1. Go to Settings → AI Providers → [Your Provider]
  2. Check that the API key is entered correctly (no extra spaces)
  3. Confirm the key is valid in your provider’s dashboard
2. Check Key Permissions

Ensure your API key has permission to:
  • Access the model you’re trying to use
  • Make chat completions (not just embeddings)
3. Regenerate Key if Needed

If issues persist, generate a new API key in your provider’s dashboard and update Local GPT.

Model Not Found

Symptoms:
  • “Model not found”
  • “Unknown model”
  • Model doesn’t appear in dropdown
1. Pull/Download the Model

For Ollama:
# List available models
ollama list

# Pull the model if missing
ollama pull llama3.2
For LM Studio:
  • Search for the model in the “Discover” tab
  • Click “Download”
  • Wait for download to complete
2. Verify Model Name

Model names must match exactly, including version tags:
  • ✓ llama3.2:latest
  • ✗ llama3.2 (missing tag for Ollama)
  • ✓ gpt-4-turbo
  • ✗ gpt4-turbo (incorrect hyphenation)
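A simple colon check catches the missing-tag case before you hunt elsewhere. A minimal sketch (`check_tag` is an illustrative helper, not part of any tool):

```shell
# Warn when an Ollama model name lacks a version tag such as ":latest".
check_tag() {
    case "$1" in
        *:*) echo "ok: $1" ;;
        *)   echo "warning: $1 has no tag; did you mean $1:latest?" ;;
    esac
}

check_tag "llama3.2:latest"
check_tag "llama3.2"
```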
3. Refresh Model List

In Local GPT settings:
  1. Go to the provider configuration
  2. Open the model dropdown
  3. The list should refresh automatically
4. Check Provider URL

If models don’t load, verify the provider URL is correct (see “Cannot Connect” above).

Embedding Model Issues

Embedding Model Not Found

Symptoms:
  • Enhanced Actions (RAG) don’t work
  • “Error processing related documents”
  • No embedding provider configured warning
1. Configure Embedding Provider

  1. Go to Settings → AI Providers
  2. Find Embedding AI Provider
  3. Select a provider from the dropdown
2. Download an Embedding Model

For Ollama (recommended):
# Pull a popular embedding model
ollama pull nomic-embed-text
Other good options:
  • mxbai-embed-large
  • all-minilm
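You can script the "is it installed?" check by filtering ollama list output. A sketch (`has_model` is an illustrative helper; it reads the listing on stdin, which assumes `ollama list` prints the model name in the first column):

```shell
# Report whether a model appears in `ollama list` output.
# Usage: ollama list | has_model nomic-embed-text
has_model() {
    if awk -v m="$1" '$1 ~ "^" m { found = 1 } END { exit !found }'; then
        echo "$1 is installed"
    else
        echo "$1 is missing; run: ollama pull $1"
    fi
}
```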
3. Verify Model in Provider

After downloading:
  1. In AI Providers settings, edit your Ollama provider
  2. Ensure the embedding model is listed
  3. Save the provider configuration
4. Set as Embedding Provider

Return to Embedding AI Provider dropdown and select your configured provider.
Embedding models are separate from chat models. You need both for Enhanced Actions to work.

Enhanced Actions Not Working

Symptoms:
  • RAG doesn’t include linked documents
  • No context from vault files
  • Status bar doesn’t show “✨ Enhancing”
1. Verify Links Exist

Enhanced Actions require a link in your selection or prompt:
  • Wiki-links: [[Document Name]]
  • Markdown links: [text](path/to/file.md)
2. Check Embedding Provider

Confirm in Settings → AI Providers → Embedding AI Provider that a provider is selected.
3. Check Context Limit

In Settings → Advanced Settings → RAG context:
  • Ensure it’s not set too low
  • Try “Cloud models” (32K) or higher
4. Verify File Types

Only these file types are processed:
  • .md (Markdown)
  • .pdf (PDF documents)
5. Look for Errors

Check the Developer Console:
  1. Press Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (Mac)
  2. Look for errors related to “RAG” or “embedding”

Performance Issues

Slow Response Times

Causes:
  • Model is too large for your hardware
  • Insufficient RAM/VRAM
  • CPU inference instead of GPU
Solutions:
1. Use a Smaller Model

Switch to a quantized or smaller model:
  • Instead of llama3.1:70b → try llama3.1:8b
  • Instead of mistral:7b → try mistral:7b-q4_0 (quantized)
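A rough way to judge "too large for your hardware": a 4-bit quantized model needs about half a gigabyte of RAM/VRAM per billion parameters. The sketch below does that arithmetic (the +1 GB runtime overhead is a loose assumption, not a measured figure):

```shell
# Approximate memory needed for a 4-bit quantized model:
# ~0.5 GB per billion parameters, plus ~1 GB runtime overhead.
approx_gb() {
    awk -v b="$1" 'BEGIN { printf "%.0f\n", b / 2 + 1 }'
}

echo "8B model  -> ~$(approx_gb 8) GB"
echo "70B model -> ~$(approx_gb 70) GB"
```

If the estimate exceeds your free RAM/VRAM, pick a smaller model or a stronger quantization.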
2. Enable GPU Acceleration

For Ollama:
  • Ensure CUDA (NVIDIA) or ROCm (AMD) is installed
  • Verify GPU is detected: ollama ps
For LM Studio:
  • Settings → Hardware → Enable GPU offloading
  • Increase GPU layers if you have VRAM
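To confirm where inference is actually running, pipe `ollama ps` through a quick check. A sketch (`gpu_check` is an illustrative helper; it assumes the PROCESSOR column reads e.g. "100% GPU" or "100% CPU", which may vary by Ollama version):

```shell
# Report whether `ollama ps` output (on stdin) shows GPU usage.
# Usage: ollama ps | gpu_check
gpu_check() {
    if grep -q "GPU"; then
        echo "GPU offloading active"
    else
        echo "CPU inference; check your CUDA/ROCm installation"
    fi
}
```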
3. Reduce Context

Lower Settings → Advanced → RAG context to reduce token count.

Slow Enhanced Actions Processing

Causes:
  • Too many linked documents
  • Large PDF files
  • High context limit
Solutions:
1. Reduce Link Depth

Link fewer documents directly or reduce graph complexity.
2. Lower Context Limit

Change RAG context from “Advanced” to “Cloud models” or “Local models”.
3. Use Faster Embedding Model

Switch to a smaller, faster embedding model:
  • all-minilm (fast, decent quality)
  • nomic-embed-text (balanced)

High Memory Usage

Symptoms:
  • Obsidian becomes slow or unresponsive
  • System memory fills up
  • App crashes
1. Clear IndexedDB Cache

The PDF cache may grow large. To clear:
  1. Open Developer Console (Ctrl+Shift+I / Cmd+Option+I)
  2. Go to Application tab
  3. Find IndexedDB → local-gpt-file-cache
  4. Right-click → Delete database
2. Reduce Concurrent Processing

Avoid triggering multiple AI requests simultaneously.
3. Use Smaller Context Limits

Lower limits reduce memory usage during RAG processing.

Quality Issues

Poor Response Quality

Possible causes:
  • Insufficient context
  • Wrong creativity setting
  • Model not suited for the task
Solutions:
1. Increase Context

Raise RAG context to include more information from linked documents.
2. Adjust Creativity

In Settings → Creativity:
  • Too creative → Lower to “Low” or “None”
  • Too rigid → Increase to “Medium” or “High”
3. Use a Better Model

Switch to a more capable model:
  • Local: Try llama3.1:70b or mixtral:8x7b
  • Cloud: Try gpt-4-turbo or claude-3-opus
4. Improve System Prompt

Edit your action’s system prompt to be more specific about what you need.

Unfocused or Rambling Responses

Possible causes:
  • Too much context overwhelms the model
  • High creativity setting
Solutions:
1. Reduce Context Limit

Lower RAG context to focus on most relevant information only.
2. Lower Creativity

Set Creativity to “None” or “Low” for more focused responses.
3. Be More Specific in Prompts

Add constraints like:
  • “Answer in 2-3 sentences”
  • “Focus only on X”
  • “Be concise”

Logging and Debugging

Enable Development Logging

To see detailed logs:
1. Set Environment Variable

Add to your build command:
NODE_ENV=development npm run dev
2. Open Developer Console

Press Ctrl+Shift+I (Windows/Linux) or Cmd+Option+I (Mac)
3. View Logs

Look for logs with emoji prefixes:
  • 🐛 Debug information
  • ℹ️ General info
  • ⚠️ Warnings
  • 🚫 Errors
  • ⏱️ Performance timing
  • 📊 Data tables

Understanding Log Output

// Processing starts
Starting RAG processing

// Context limit info
Passed contextLimit for context: 32000

// PDF extraction
⏱️ Extracting text from PDF: timer started
📊 Extracted text from PDF
  textLength: 45231
⏱️ Extracting text from PDF: 1234.56ms

// Final context length
📊 Total length of context: 28945

Common Error Messages

Cause: No main AI provider is configured. Solution: Go to Settings → AI Providers → Main AI Provider and select a provider.
Cause: PDF file is corrupted, password-protected, or unsupported. Solution:
  1. Try opening the PDF in another app
  2. If password-protected, remove the protection
  3. If it contains scanned images, use OCR to create a searchable PDF
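Before treating an extraction failure as a plugin bug, two cheap checks help: a real PDF starts with the bytes %PDF-, and many password-protected PDFs contain an /Encrypt entry readable with grep. A heuristic sketch (`pdf_check` is illustrative, not a full PDF parser; some encrypted files will slip past it):

```shell
# Quick sanity checks on a PDF file.
pdf_check() {
    f="$1"
    if [ "$(head -c 5 "$f")" != "%PDF-" ]; then
        echo "$f: not a valid PDF (bad header)"
    elif grep -aq "/Encrypt" "$f"; then
        echo "$f: appears to be password-protected"
    else
        echo "$f: looks OK"
    fi
}
```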
Cause: A translation key doesn’t exist in your language file. Impact: You’ll see English text for that string. Solution: This is not critical, but it can be reported as an issue or fixed via a translation contribution.

How to Report Bugs

If you encounter an issue not covered here:
1. Gather Information

Collect:
  • Obsidian version
  • Local GPT version (Settings → Community Plugins → Local GPT)
  • Operating system
  • Error messages from Developer Console
  • Steps to reproduce
2. Check Existing Issues

Search GitHub Issues to see if it’s already reported.
3. Create a New Issue

If not found, open a new issue with:
  • Clear title: “[Component] Brief description”
  • Description: What happened vs. what you expected
  • Steps to reproduce: Numbered list
  • Logs: Relevant console output
  • Environment: OS, versions, AI provider used
4. Provide Minimal Example

If possible, create a minimal test case:
  • Small vault with just the files needed to reproduce
  • Specific action that triggers the issue
  • Sample text or prompts
Never include API keys or sensitive data when sharing logs or examples!
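One way to enforce that rule is to run console output through a redaction filter before pasting it anywhere. A sketch (`redact` is an illustrative helper; the patterns cover OpenAI-style sk- keys and long alphanumeric tokens, so extend them for your providers):

```shell
# Mask likely API keys in text on stdin before sharing logs.
# Usage: redact < console-log.txt
redact() {
    sed -E -e 's/sk-[A-Za-z0-9_-]+/sk-[REDACTED]/g' \
           -e 's/[A-Za-z0-9]{32,}/[REDACTED]/g'
}
```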

Source Code References

For developers debugging issues, relevant source files:

Logger

src/logger.ts - Logging system (lines 1-186)

Error Handling

src/main.ts:649-660 - Provider request error handling

RAG Errors

src/main.ts:803-817 - Context processing error handling

PDF Processing

src/processors/pdf.ts:38-41 - PDF extraction errors

Getting Help

If you’re still stuck:

GitHub Discussions

Ask questions and get community support

GitHub Issues

Report bugs and technical problems

Documentation

Browse the full documentation

Contributing

Help improve Local GPT