
Overview

This guide walks you through setting up your first AI provider from scratch. We’ll cover both a local option (Ollama—free and private) and a cloud option (OpenAI—powerful but requires an API key).
Choose Ollama if you want completely offline AI with no costs. Choose OpenAI if you want access to GPT-4 and don’t mind paying for API usage.

Option 1: Set Up Ollama (Local & Free)

Ollama is the easiest way to run AI models locally on your computer. It’s completely free, private, and works offline.
Step 1: Install Ollama

  • Visit ollama.com
  • Download Ollama for your operating system:
    • macOS: Download the .dmg and install
    • Linux: Run the install script: curl -fsSL https://ollama.com/install.sh | sh
    • Windows: Download the installer and run it
  • Verify Ollama is running:
    • macOS/Windows: Look for the Ollama icon in your system tray
    • Linux: Ollama runs as a background service

Step 2: Download a Model

Ollama needs at least one model to work. Download a recommended model:
    # Recommended: Gemma 2 (2B - fast and capable)
    ollama pull gemma2
    
    # Alternative: Llama 3.2 (3B - more powerful)
    ollama pull llama3.2
    
    # Alternative: Mistral (7B - even more capable, slower)
    ollama pull mistral
    
You can browse all available models at ollama.com/library. Smaller models (2-3B parameters) are faster but less capable; larger models (7B+) are more powerful but slower.

Step 3: Verify Ollama is Working

Test that Ollama is responding:
    ollama list
    
You should see your downloaded model(s) listed. If this works, Ollama is ready!

Step 4: Configure Ollama in AI Providers
  • Open Obsidian and go to Settings → AI Providers
  • Click Add Provider
  • In the provider form:
    • Provider Type: Select Ollama
    • Name: Enter a friendly name like “Ollama Local”
    • Provider URL: Should auto-fill to http://localhost:11434 (the default Ollama endpoint)
    • API Key: Leave empty (not needed for Ollama)
  • Click the refresh icon next to Model to fetch available models
  • Select your model from the dropdown (e.g., gemma2:latest)
  • Click Save
If the model refresh fails, make sure Ollama is running. On macOS, check the menu bar. On Linux, run sudo systemctl status ollama. On Windows, check the system tray.

Step 5: Test Your Ollama Provider

Let’s verify it works:
  • Install a plugin that uses AI Providers, like Local GPT
  • Or test directly in the developer console:
    • Press Cmd/Ctrl + Shift + I to open Developer Tools
    • Go to the Console tab
    • Paste this code:
    const { waitForAI } = require('@obsidian-ai-providers/sdk');
    
    (async () => {
        const aiResolver = await waitForAI();
        const aiProviders = await aiResolver.promise;
        
        console.log('Available providers:', aiProviders.providers.length);
        
        const response = await aiProviders.execute({
            provider: aiProviders.providers[0],
            prompt: "Say hello in one sentence.",
            onProgress: (chunk, full) => console.log(full)
        });
        
        console.log('Final response:', response);
    })();
    
You should see the AI’s response streaming in the console.
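Before wiring this into a plugin, it can help to see the streaming contract the console test relies on: onProgress receives the newly arrived chunk plus the full accumulated text so far. The following self-contained sketch simulates that behavior; simulateStream is a hypothetical stand-in for illustration, not part of the SDK:

```javascript
// Hypothetical stand-in for a streaming provider: emits the reply in
// small chunks and reports (chunk, full) pairs, where chunk is the new
// text and full is everything accumulated so far.
function simulateStream(reply, onProgress) {
    let full = '';
    for (const chunk of reply.match(/.{1,8}/g) ?? []) {
        full += chunk;            // accumulate the streamed text
        onProgress(chunk, full);  // chunk = delta, full = text so far
    }
    return full;                  // final response text
}

const updates = [];
const response = simulateStream(
    'Hello from a local model!',
    (chunk, full) => updates.push(full)
);

console.log(updates.length);              // number of progress events
console.log(updates[updates.length - 1]); // last update equals the final text
console.log(response);
```

If you only need the latest full text (as the console test above does), you can ignore the chunk argument entirely.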

Troubleshooting Ollama

Cause: Ollama isn’t running or isn’t accessible. Solutions:
    • Make sure Ollama is running (check system tray/menu bar)
    • Verify the URL is http://localhost:11434
    • Test manually: open http://localhost:11434 in your browser (should show “Ollama is running”)
    • Check firewall settings aren’t blocking localhost connections
Cause: CORS issues with Ollama’s default configuration. Solution: Set the OLLAMA_ORIGINS environment variable to allow all origins:
    • macOS: Run launchctl setenv OLLAMA_ORIGINS "*" then restart Ollama
    • Linux: Add Environment="OLLAMA_ORIGINS=*" to /etc/systemd/system/ollama.service and run sudo systemctl daemon-reload && sudo systemctl restart ollama
    • Windows: Set system environment variable OLLAMA_ORIGINS=* and restart Ollama
    See Ollama FAQ for more details.
Cause: Model is too large for your hardware. Solution: Try a smaller model:
    • gemma2:2b (2 billion parameters - very fast)
    • llama3.2:3b (3 billion parameters - balanced)
    • qwen2.5:3b (3 billion parameters - good for coding)

Option 2: Set Up OpenAI (Cloud)

OpenAI provides access to GPT-4 and other powerful models through their API. You’ll need an API key and will be charged based on usage.
Step 1: Create an OpenAI Account
  • Go to platform.openai.com
  • Sign up or log in to your account
  • Add a payment method (required for API access)
  • Navigate to API Keys in the left sidebar
Step 2: Generate an API Key
  • Click Create new secret key
  • Give it a name like “Obsidian AI Providers”
  • Set permissions to All or customize as needed
  • Click Create secret key
  • Copy the key immediately—you won’t be able to see it again!
Keep your API key secure! Don’t share it or commit it to version control. If you lose it, you’ll need to create a new one.

Step 3: Configure OpenAI in AI Providers
  • Open Obsidian and go to Settings → AI Providers
  • Click Add Provider
  • In the provider form:
    • Provider Type: Select OpenAI
    • Name: Enter “OpenAI” or “GPT-4”
    • Provider URL: Enter https://api.openai.com/v1
    • API Key: Paste your API key from step 2
  • Click the refresh icon next to Model to fetch available models
  • Select your preferred model:
    • gpt-4o (recommended - fast and capable)
    • gpt-4o-mini (cheaper, good for most tasks)
    • gpt-3.5-turbo (cheapest option)
  • Click Save
Step 4: Test Your OpenAI Provider

Verify your OpenAI connection works:
  • Open Developer Tools (Cmd/Ctrl + Shift + I)
  • Go to the Console tab
  • Run this test:
    const { waitForAI } = require('@obsidian-ai-providers/sdk');
    
    (async () => {
        const aiResolver = await waitForAI();
        const aiProviders = await aiResolver.promise;
        
        // Find your OpenAI provider
        const openai = aiProviders.providers.find(p => p.type === 'openai');
        
        if (!openai) {
            console.error('OpenAI provider not found');
            return;
        }
        
        console.log('Testing OpenAI with model:', openai.model);
        
        const response = await aiProviders.execute({
            provider: openai,
            prompt: "What is the capital of France? Answer in one word.",
            onProgress: (chunk, full) => console.log('Streaming:', full)
        });
        
        console.log('Final response:', response);
    })();
    
You should see “Paris” (or similar) stream in the console.

Troubleshooting OpenAI

Cause: Incorrect or expired API key. Solutions:
    • Double-check you copied the entire key (starts with sk-)
    • Make sure there are no extra spaces before/after the key
    • Verify the key hasn’t been revoked at platform.openai.com/api-keys
    • Create a new API key if the old one doesn’t work
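Most key problems in this list are mechanical: stray whitespace from a sloppy copy, or a key with the wrong prefix. Here is a hypothetical sanity check you could run in the developer console before saving a key (checkApiKey is a sketch for illustration, not part of the plugin):

```javascript
// Mirror the bullet points above: trim stray whitespace and confirm the
// key has the expected OpenAI "sk-" prefix before saving it.
function checkApiKey(raw) {
    const key = raw.trim(); // strip accidental spaces/newlines around the key
    if (!key.startsWith('sk-')) {
        return { ok: false, reason: 'OpenAI keys start with "sk-"' };
    }
    return { ok: true, key };
}

console.log(checkApiKey('  sk-abc123  ')); // whitespace stripped, key accepted
console.log(checkApiKey('abc123'));        // rejected: wrong prefix
```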
Cause: You’ve hit OpenAI’s usage limits or your account doesn’t have credits. Solutions:
    • Check your usage at platform.openai.com/usage
    • Add credits to your account or upgrade your plan
    • Wait for your rate limit to reset (usually per-minute limits)
    • Use a smaller/cheaper model like gpt-4o-mini or gpt-3.5-turbo
Cause: Your account doesn’t have access to the selected model. Solution: Not all accounts have immediate access to all models (especially GPT-4). Try:
    • gpt-4o-mini (available to all paid accounts)
    • gpt-3.5-turbo (available to all accounts)
    • Check platform.openai.com/docs/models for your account tier’s model access

Using Multiple Providers

You can configure as many providers as you want! For example:
  • Ollama for quick, free local inference
  • OpenAI GPT-4o for complex reasoning tasks
  • Anthropic Claude for long-context work
  • Groq for ultra-fast inference

Manage multiple providers

Each provider configuration is independent. Plugins that use AI Providers typically let you choose which provider to use for each task.
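Since plugins see the full providers list, picking one per task is a simple lookup. A minimal sketch against a mocked providers array (the type and name fields match what the console tests above use; pickProvider is a hypothetical helper, not an SDK function):

```javascript
// Mocked providers array, shaped like aiProviders.providers entries.
const providers = [
    { type: 'ollama', name: 'Ollama Local' },
    { type: 'openai', name: 'OpenAI' },
];

// Prefer a provider of the requested type; fall back to the first
// configured provider if the preferred one is absent.
function pickProvider(providers, preferredType) {
    return providers.find(p => p.type === preferredType) ?? providers[0];
}

console.log(pickProvider(providers, 'openai').name); // picks the OpenAI entry
console.log(pickProvider(providers, 'groq').name);   // falls back to the first entry
```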

Next Steps

Now that you have a working AI provider, explore what you can do:

User Guide

Learn how to manage providers, switch models, and troubleshoot issues

Supported Providers

Explore all 18+ supported providers and their setup instructions

Plugin Integration

Learn how to manage and configure AI providers

Developer SDK

Build your own AI-powered plugins using the SDK

Advanced: Testing Embeddings & RAG

If you’re interested in embeddings or semantic search (RAG), here’s a quick test:
    const { waitForAI } = require('@obsidian-ai-providers/sdk');
    
    (async () => {
        const aiResolver = await waitForAI();
        const aiProviders = await aiResolver.promise;
        
        // Test embedding generation
        const embedding = await aiProviders.embed({
            provider: aiProviders.providers[0],
            input: "This is a test sentence."
        });
        
        console.log('Embedding length:', embedding.length);
        console.log('First 10 dimensions:', embedding.slice(0, 10));
        
        // Test semantic search
        const documents = [
            { content: "Paris is the capital of France.", meta: { id: 1 } },
            { content: "London is the capital of England.", meta: { id: 2 } },
            { content: "Berlin is the capital of Germany.", meta: { id: 3 } }
        ];
        
        const results = await aiProviders.retrieve({
            query: "What is the capital of France?",
            documents: documents,
            embeddingProvider: aiProviders.providers[0]
        });
        
        console.log('Top result:', results[0].content);
        console.log('Relevance score:', results[0].score);
    })();
    
Embeddings are automatically cached in IndexedDB, so repeated embedding of the same text is instant and free!
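Under the hood, retrieval of this kind ranks documents by how similar their embedding vectors are to the query’s vector. A toy, self-contained illustration of that idea (the 3-dimensional vectors here are invented for the example; real embeddings come from aiProviders.embed and have hundreds of dimensions):

```javascript
// Cosine similarity: 1.0 means the vectors point the same way,
// values near 0 mean they are unrelated.
function cosine(a, b) {
    const dot = a.reduce((s, x, i) => s + x * b[i], 0);
    const norm = v => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
    return dot / (norm(a) * norm(b));
}

const query = [0.9, 0.1, 0.0]; // made-up query embedding
const docs = [
    { content: 'Paris is the capital of France.',   vector: [0.8, 0.2, 0.1] },
    { content: 'London is the capital of England.', vector: [0.1, 0.9, 0.2] },
];

// Score each document against the query, then sort by descending score.
const ranked = docs
    .map(d => ({ ...d, score: cosine(query, d.vector) }))
    .sort((a, b) => b.score - a.score);

console.log(ranked[0].content); // the Paris document ranks first
```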

Getting Help

If you run into issues:
  • Check the troubleshooting sections in provider-specific guides
  • Review provider-specific setup docs in the Providers section
  • Open an issue on GitHub
  • Contact the developer on Telegram: @pavel_frankov
