
Get your first AI action running

This guide walks you through using Local GPT for the first time, from installation to running your first AI-powered action.
Already installed? Skip to Using the Context Menu.

Prerequisites

Before you begin, you’ll need:
  • Obsidian 0.15.0 or higher
  • Local GPT and AI Providers plugins installed (Installation guide)
  • At least one AI provider configured (Ollama, OpenAI, etc.)

Step 1: Install and Configure

1. Install Required Plugins

Install both the AI Providers and Local GPT plugins from Obsidian’s Community Plugins. See the Installation guide for detailed instructions.
2. Set Up an AI Provider

For local/offline use (Recommended):
  1. Install Ollama
  2. Pull a model:
    ollama pull gemma2
    
  3. In Obsidian, go to Settings → AI Providers → Add Provider → Ollama
  4. Set URL: http://localhost:11434
  5. Select model: gemma2:latest
For cloud use: Configure OpenAI or another provider in Settings → AI Providers.
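Before continuing, you can verify that Ollama is reachable from your machine. A minimal Python sketch, assuming Ollama's default REST endpoint (`GET /api/tags`, which lists pulled models) — illustrative only, not part of the plugin:

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint

def list_ollama_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return the names of locally pulled models, or [] if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []
```

If this returns an empty list, confirm Ollama is running (`ollama serve`) and that you have pulled at least one model.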
3. Configure Local GPT

  1. Open Settings → Local GPT
  2. Under Main Provider, select the AI provider you just configured
  3. (Optional) Configure Vision Provider and Embedding Provider
Step 2: Set Up Keyboard Shortcuts

For the best experience, assign keyboard shortcuts:
1. Open Hotkeys

Go to Settings → Hotkeys
2. Assign Context Menu Hotkey

Search for Local GPT: Show context menu and assign a shortcut like:
  • Mac: Cmd + M
  • Windows/Linux: Ctrl + M
3. Assign Action Palette Hotkey

Search for Local GPT: Action Palette and assign a shortcut like:
  • Mac: Cmd + J
  • Windows/Linux: Ctrl + J
The Action Palette is perfect for one-off prompts, while the context menu is ideal for repeated actions.

Step 3: Run Your First Action

Using the Context Menu

The context menu provides quick access to your saved actions:
1. Select Text

In any note, highlight some text you want to work with. For example:
Local GPT is a plugin for Obsidian that lets you use AI
2. Open Context Menu

Right-click the selected text (or use your hotkey Cmd/Ctrl + M)
3. Choose an Action

Select an action from the menu:
  • Continue writing - Expand your thoughts
  • Summarize - Create a concise summary
  • Fix spelling and grammar - Proofread the text
  • Find action items - Extract tasks
  • General help - Use text as a prompt
4. See Results

Wait for the AI to process (you’ll see a spinner). The result appears below your text:
Local GPT is a plugin for Obsidian that lets you use AI

Local GPT is a plugin for Obsidian that integrates AI-powered 
language models directly into your note-taking workflow. It 
supports both local models through Ollama and cloud providers 
like OpenAI, giving you flexibility in how you process your notes.

Using the Action Palette

The Action Palette is perfect for one-time prompts and quick questions:
1. Open Action Palette

Press your hotkey (Cmd/Ctrl + J) or use the command palette to run Local GPT: Action Palette
2. Type Your Prompt

Type a question or instruction directly:
List 5 benefits of using local AI models
3. (Optional) Select Context Files

Click the folder icon to attach specific notes or PDFs as context
4. (Optional) Choose a System Prompt

Click the gear icon to apply a saved system prompt from your actions
5. Submit

Press Enter to run the prompt. The AI response appears at your cursor position.
The Action Palette remembers your selected provider and model across sessions.
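Under the hood, a one-off prompt like this boils down to a single HTTP request to your provider. A hedged sketch of a non-streaming call to Ollama's `/api/generate` endpoint — the plugin actually routes requests through AI Providers, so treat this as illustrative of the API shape, not the plugin's code:

```python
import json
import urllib.request

def build_generate_payload(prompt: str, model: str = "gemma2") -> dict:
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "gemma2",
                    base_url: str = "http://localhost:11434") -> str:
    """Send the prompt to a local Ollama instance and return the full response text."""
    body = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]
```

Setting `"stream": False` asks Ollama to return the whole completion in one JSON object; the plugin streams instead, which is why you see text appear incrementally.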

Step 4: Try Enhanced Actions (RAG)

Enhanced Actions use RAG to pull context from your vault:
1. Configure Embedding Provider

  1. Install an embedding model:
    ollama pull nomic-embed-text
    
  2. Add it as a provider in AI Providers
  3. Set it as Embedding Provider in Local GPT settings
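The embedding provider turns text into vectors so related passages can be ranked by relevance. Retrieval implementations commonly score candidates with cosine similarity; a minimal sketch of that scoring step (not the plugin's actual code):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Passages whose embeddings score closest to the query's embedding are the ones included in the prompt.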
2. Create Linked Notes

Create a note with links to other notes:
# Project Overview

See [[Requirements]] and [[Timeline]] for details.

Can you summarize the requirements and timeline?
3. Run Action with Context

  1. Select the question text
  2. Open the context menu
  3. Choose an action like General help
Local GPT will automatically:
  • Read the content of Requirements.md and Timeline.md
  • Include that context in the AI prompt
  • Generate a response based on your entire knowledge base
RAG works with wiki links ([[note]]), markdown links, backlinks, and even PDF files. Learn more in the Enhanced Actions guide.
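The link-following step can be pictured as a simple parsing pass over the note. A simplified sketch of extracting [[wikilink]] targets — the plugin's real resolver is more thorough (markdown links, backlinks, PDFs), so this is only an illustration:

```python
import re

# Matches [[Target]], [[Target#Heading]], and [[Target|alias]] forms
WIKILINK_RE = re.compile(r"\[\[([^\]|#]+)(?:#[^\]|]*)?(?:\|[^\]]*)?\]\]")

def extract_wikilinks(text: str) -> list[str]:
    """Return the target note names referenced by [[wikilinks]] in the text."""
    return [m.strip() for m in WIKILINK_RE.findall(text)]
```

For the Project Overview example above, this would yield the targets Requirements and Timeline, whose contents are then read and prepended to the prompt as context.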

Step 5: Work with Images (Vision)

If you have a vision-capable model configured:
1. Configure Vision Provider

  1. Pull a vision model:
    ollama pull llava
    
  2. Add it as a provider in AI Providers
  3. Set it as Vision Provider in Local GPT settings
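For reference, multimodal requests to Ollama attach images as base64 strings in an `images` array alongside the prompt. A hedged sketch of that payload shape (the plugin builds this for you; shown only to clarify what "vision provider" means at the API level):

```python
import base64

def build_vision_payload(prompt: str, image_bytes: bytes,
                         model: str = "llava") -> dict:
    """Request body for a multimodal call to Ollama's /api/generate."""
    return {
        "model": model,
        "prompt": prompt,
        # Ollama expects raw base64 (no data: URI prefix) in the images list
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }
```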
2. Embed an Image

Add an image to your note:
![[screenshot.png]]
What UI improvements would you suggest?
3. Select Image and Text

Highlight both the image embed and your question
4. Run Action

Choose General help from the context menu. The vision model will analyze the image and respond.

Common Workflows

Expand an outline:
  1. Write a rough outline or bullet points
  2. Select the text
  3. Use Continue writing to expand it into full paragraphs

Extract meeting action items:
  1. Write or paste meeting notes
  2. Select the entire note
  3. Use Find action items to extract actionable tasks

Polish your writing:
  1. Write your content
  2. Select a paragraph or section
  3. Use Fix spelling and grammar to polish it
  4. The text is replaced with the corrected version

Ask a quick question:
  1. Open the Action Palette (Cmd/Ctrl + J)
  2. Type a question like “Explain quantum entanglement in simple terms”
  3. Press Enter - the answer appears at your cursor

Query linked notes:
  1. Create a note that links to multiple other notes
  2. Select a question or prompt
  3. Run General help - Local GPT reads all linked notes for context

Tips for Better Results

Be Specific

Instead of “improve this”, try “make this more concise” or “add technical details”

Use Context

Link to relevant notes to give AI more context about your work

Iterate

Run actions multiple times with different prompts to refine output

Create Custom Actions

Save frequently used prompts as custom actions for quick access

Keyboard Shortcuts Reference

Action | Default | Description
Show Context Menu | Cmd/Ctrl + M | Open action menu on selected text
Action Palette | Cmd/Ctrl + J | Open palette for one-time prompts
Escape | Esc | Cancel running AI request
You can customize these shortcuts in Settings → Hotkeys.

Troubleshooting

Actions not appearing:
  • Make sure Local GPT is enabled in Community Plugins
  • Verify you have a Main Provider configured
  • Check if Ollama is running: ollama list

Responses are slow:
  • Local models depend on your hardware (CPU/GPU)
  • Try a smaller model like gemma2:2b or phi3
  • Consider using a cloud provider for faster responses

Enhanced Actions (RAG) not working:
  • Ensure you have an Embedding Provider configured
  • Pull an embedding model: ollama pull nomic-embed-text
  • Check that your notes actually contain links to other files

Vision not working:
  • Configure a separate Vision Provider in settings
  • Use a vision model like llava or bakllava
  • Ensure images are embedded with ![[image.png]] syntax
For more help, see the Troubleshooting guide.

Next Steps

Create Custom Actions

Build actions tailored to your specific workflows

Master the Action Palette

Learn advanced Action Palette features

Understand RAG

Deep dive into how Enhanced Actions work

Prompt Templating

Use advanced prompt templates with variables
