What is AI Providers?
AI Providers is a configuration hub for managing AI settings in Obsidian. Think of it as a central control panel where you store your API keys and AI provider settings once, then share them across all of your Obsidian plugins that need AI capabilities.

Important: AI Providers is a configuration tool only; it does not perform any AI processing itself. It helps other plugins connect to AI services by managing settings in one place.
Why AI Providers?
Without AI Providers, each AI-powered plugin would need its own settings page for API keys, endpoints, and model selection. This creates several problems:

- Duplicate configuration: Enter the same API keys multiple times
- Harder maintenance: Update credentials separately in each plugin
- Inconsistent experience: Different UIs and setup flows
- Development overhead: Plugin developers must build provider integrations from scratch
For Users
Configure AI providers once, use them everywhere. Switch between OpenAI, Ollama, Claude, and 15+ other providers without reconfiguring each plugin.
For Developers
Skip building provider integrations. Use the AI Providers SDK to instantly support all providers with just a few lines of code.
Key Benefits
Centralized Configuration
Manage all your AI providers in one settings panel:

- Store API keys securely
- Configure provider URLs and endpoints
- Select models from refreshable lists
- Enable/disable providers as needed
Wide Provider Support
AI Providers supports 18+ AI providers out of the box:

Cloud Providers
- OpenAI
- Anthropic
- Google Gemini
- OpenRouter
- Mistral AI
- Groq
- Perplexity AI
- DeepSeek
- xAI (Grok)
Local & Self-Hosted
- Ollama
- LM Studio
- Open WebUI
- llama.cpp
- LocalAI
Additional Services
- Together AI
- Fireworks AI
- Cerebras
- DeepInfra
- SambaNova
- And more…
Developer-Friendly SDK
The AI Providers SDK makes it simple to add AI capabilities to your plugin. The SDK handles all provider-specific API differences, streaming, error handling, and even caching for embeddings.
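To make the consuming pattern concrete, here is a minimal, hypothetical sketch. The execute() name comes from this document's feature list, but the ChunkHandler shape, the mock service, and all surrounding types are assumptions for illustration, not the published SDK API.

```typescript
// Hypothetical sketch of consuming a streaming completion service.
// execute() is named in this document's feature list; the ChunkHandler
// shape and the mock below are assumptions, not the published SDK API.

type ChunkCallback = (chunk: string, accumulated: string) => void;

interface ChunkHandler {
  onData(cb: ChunkCallback): ChunkHandler;
  onEnd(cb: (fullText: string) => void): ChunkHandler;
}

interface AIProvidersService {
  execute(params: { prompt: string }): ChunkHandler;
}

// Mock service that replays a canned response chunk by chunk.
// (Synchronous for simplicity; a real provider streams asynchronously.)
function mockService(response: string[]): AIProvidersService {
  return {
    execute(_params) {
      const handler: ChunkHandler = {
        onData(cb) {
          let acc = "";
          for (const c of response) {
            acc += c;
            cb(c, acc); // e.g. update the note preview incrementally
          }
          return handler;
        },
        onEnd(cb) {
          cb(response.join(""));
          return handler;
        },
      };
      return handler;
    },
  };
}

// Usage: a plugin streams chunks into the UI, then stores the final text.
let finalText = "";
mockService(["Hello, ", "world!"])
  .execute({ prompt: "Say hello" })
  .onData((chunk, _acc) => void chunk)
  .onEnd((full) => { finalText = full; });
```

The chained onData/onEnd style mirrors the streaming-first design the document describes: a plugin reacts to partial output as it arrives rather than waiting for the whole completion.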
Architecture Overview
AI Providers follows a simple, extensible architecture.

How It Works
- Settings Layer: Users configure providers in the plugin settings UI
- Service Layer: AIProvidersService manages provider instances and exposes them via the SDK
- Handler Layer: Provider-specific handlers (OpenAI, Anthropic, Ollama) translate requests to the right API format
- Transport Layer: FetchSelector chooses the appropriate fetch method (Obsidian, Electron, or native)
- Cache Layer: Embeddings are cached in IndexedDB to avoid redundant API calls
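The handler layer can be pictured with a small sketch: each handler turns one common request into its provider's wire format. The interfaces and handler names here are simplified assumptions, not the plugin's actual types; the two body shapes follow the public OpenAI chat and Ollama generate request formats.

```typescript
// Sketch of the handler-layer idea: each provider handler translates a
// common request into that provider's wire format. Interfaces are
// illustrative and simplified, not the plugin's actual types.

interface CommonRequest {
  model: string;
  prompt: string;
}

interface ProviderHandler {
  buildBody(req: CommonRequest): Record<string, unknown>;
}

// OpenAI-compatible chat completions take a messages array...
const openAIHandler: ProviderHandler = {
  buildBody: (req) => ({
    model: req.model,
    messages: [{ role: "user", content: req.prompt }],
    stream: true,
  }),
};

// ...while Ollama's generate endpoint takes a flat prompt string.
const ollamaHandler: ProviderHandler = {
  buildBody: (req) => ({
    model: req.model,
    prompt: req.prompt,
    stream: true,
  }),
};

// A service layer can then pick the right handler by provider type,
// so calling plugins never see the per-provider differences.
const handlers: Record<string, ProviderHandler> = {
  openai: openAIHandler,
  ollama: ollamaHandler,
};

const body = handlers["openai"].buildBody({ model: "gpt-4o", prompt: "Hi" });
```

This is the key benefit of the layering: a plugin builds one CommonRequest, and only the handler layer knows which JSON shape each provider expects.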
Supported Features
- Text Generation: Stream completions from any provider with execute()
- Embeddings: Generate vector embeddings with embed() (cached automatically)
- RAG Search: Semantic search with retrieve() for retrieval-augmented generation
- Messages API: Support for multi-turn conversations and system prompts
- Image Analysis: Send images to vision models like GPT-4V
- Abort Control: Cancel in-progress requests with AbortController
- Progress Tracking: Monitor embedding and retrieval progress
- Model Discovery: Fetch available models dynamically from providers
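The automatic embedding cache mentioned above can be sketched in a few lines. A Map stands in for IndexedDB, and the helper names and key scheme are hypothetical; the point is only that a repeated input never triggers a second provider call.

```typescript
// Sketch of the automatic embedding cache: repeated inputs are served
// from the cache instead of re-calling the provider. A Map stands in
// for IndexedDB; helper names and the key scheme are hypothetical.

type EmbedFn = (text: string) => number[];

function withEmbeddingCache(embed: EmbedFn) {
  const cache = new Map<string, number[]>();
  let providerCalls = 0;

  const cachedEmbed: EmbedFn = (text) => {
    const hit = cache.get(text);
    if (hit !== undefined) return hit; // cache hit: no API call
    providerCalls++;
    const vector = embed(text);
    cache.set(text, vector);
    return vector;
  };

  return { cachedEmbed, stats: () => ({ providerCalls }) };
}

// Toy "provider": character codes stand in for real model vectors.
const { cachedEmbed, stats } = withEmbeddingCache((t) =>
  t.split("").map((c) => c.charCodeAt(0))
);

cachedEmbed("same note text");
cachedEmbed("same note text"); // second call is a cache hit
```

Caching matters for embeddings in particular because notes are re-indexed often and embedding calls are billed per token; persisting vectors (in IndexedDB, per the architecture above) avoids paying twice for unchanged text.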
Multilingual Support
AI Providers is translated into 11 languages: English, Spanish, French, Italian, Portuguese, German, Russian, Chinese, Japanese, Korean, and Dutch.

Next Steps
Installation
Install AI Providers from the Obsidian community plugin store
Quick Start
Set up your first AI provider in under 5 minutes
Plugins Using AI Providers
AI Providers is used by plugins like:

- Local GPT: Privacy-focused AI assistant with local models
- More plugins integrating soon…