Configuration File
Model settings are stored in a configuration file in your Rowboat workspace (~/.rowboat/). You can configure a separate model for knowledge graph operations. If knowledgeGraphModel is omitted, Rowboat uses the main model for all operations.

Configuring via UI
The easiest way to configure models is through the Settings dialog:

Enter Credentials
- Cloud providers: Enter your API key
- Local providers: Set the base URL (e.g., http://localhost:11434 for Ollama)
Choose Models
- Assistant model: The main model for chat and tasks
- Knowledge graph model: (Optional) A different model for graph operations
Supported Providers
OpenAI
Use GPT models from OpenAI.

Provider flavor: openai
Required:
- apiKey - Your OpenAI API key from platform.openai.com
Optional:
- baseURL - Custom API endpoint (defaults to OpenAI’s official endpoint)
- headers - Additional HTTP headers
Recommended models:
- gpt-5.2 - Most capable (if you have access)
- gpt-4.1 - Excellent performance
- gpt-4o-mini - Fast and cost-effective
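An example config entry for OpenAI might look like the following sketch; the JSON shape and the flavor key name are assumptions based on the fields listed above:

```json
{
  "flavor": "openai",
  "apiKey": "your-openai-api-key",
  "model": "gpt-4.1"
}
```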
Anthropic
Use Claude models from Anthropic.

Provider flavor: anthropic
Required:
- apiKey - Your Anthropic API key from console.anthropic.com
Optional:
- baseURL - Custom API endpoint
- headers - Additional HTTP headers
Recommended models:
- claude-opus-4-6-20260202 - Most capable Claude model
- claude-sonnet-4-6-20260202 - Balanced performance
- claude-3-5-sonnet-20241022 - Fast and cost-effective
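A hedged example config for Anthropic, using the fields above (the surrounding JSON structure is an assumption):

```json
{
  "flavor": "anthropic",
  "apiKey": "your-anthropic-api-key",
  "model": "claude-sonnet-4-6-20260202"
}
```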
Google AI Studio
Use Gemini models from Google.

Provider flavor: google
Required:
- apiKey - Your Google AI API key from aistudio.google.com
Optional:
- baseURL - Custom API endpoint
- headers - Additional HTTP headers
Recommended models:
- gemini-2.0-flash-exp - Latest experimental model
- gemini-1.5-pro - Production-ready
- gemini-1.5-flash - Fast and efficient
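A sketch of a Google AI Studio config entry (the JSON shape is an assumption; field names come from the list above):

```json
{
  "flavor": "google",
  "apiKey": "your-google-ai-api-key",
  "model": "gemini-1.5-flash"
}
```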
Ollama (Local)
Run models locally on your machine with Ollama.
Install Ollama
Download from ollama.ai and install on your system
Ollama connection tests have a 60-second timeout (vs. 8 seconds for cloud providers) to accommodate model loading time.
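A hedged example config for Ollama, assuming the flavor name ollama (not stated on this page) and the default base URL mentioned above:

```json
{
  "flavor": "ollama",
  "baseURL": "http://localhost:11434",
  "model": "llama3.3:70b"
}
```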
OpenRouter
Access multiple models with one API key via OpenRouter.

Provider flavor: openrouter
Required:
- apiKey - Your OpenRouter API key from openrouter.ai
Optional:
- baseURL - Custom endpoint (defaults to OpenRouter’s API)
- headers - Additional headers (e.g., for site identification)
Example models:
- openai/gpt-4-turbo
- anthropic/claude-3-opus
- google/gemini-pro-1.5
- meta-llama/llama-3.3-70b-instruct
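An example OpenRouter config sketch, including optional site-identification headers (the JSON shape and the specific header names are assumptions):

```json
{
  "flavor": "openrouter",
  "apiKey": "your-openrouter-api-key",
  "model": "meta-llama/llama-3.3-70b-instruct",
  "headers": {
    "HTTP-Referer": "https://example.com",
    "X-Title": "My App"
  }
}
```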
Vercel AI Gateway
Route requests through Vercel’s AI Gateway for observability and caching.

Provider flavor: aigateway
Required:
- apiKey - Your provider’s API key (OpenAI, Anthropic, etc.)
- baseURL - Your AI Gateway endpoint from vercel.com/dashboard
Optional:
- headers - Additional HTTP headers
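A sketch of an AI Gateway config entry; the endpoint URL is a placeholder and the JSON shape is an assumption:

```json
{
  "flavor": "aigateway",
  "apiKey": "your-provider-api-key",
  "baseURL": "https://your-gateway-endpoint.example.com",
  "model": "gpt-4.1"
}
```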
OpenAI-Compatible APIs
Use any OpenAI-compatible API (LM Studio, LocalAI, etc.).

Provider flavor: openai-compatible
Required:
- baseURL - Your API endpoint (e.g., http://localhost:1234/v1 for LM Studio)
- model - The model name to use
Optional:
- apiKey - API key if required by your server
- headers - Additional HTTP headers
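An example config sketch for an OpenAI-compatible server such as LM Studio (the JSON shape and the model name are assumptions):

```json
{
  "flavor": "openai-compatible",
  "baseURL": "http://localhost:1234/v1",
  "model": "your-local-model-name"
}
```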
Advanced Configuration
Custom Headers
Add custom HTTP headers to requests via the headers field in your provider configuration.

Separate Knowledge Graph Model
Use a different model for knowledge graph operations:
- Cost optimization - Use a cheaper model for graph operations
- Speed - Use a faster model for background processing
- Quality - Use a more capable model for chat, simpler one for extraction
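As a sketch, a config with a separate knowledge graph model might look like this (the knowledgeGraphModel field name is from this page; the key nesting and other field names are assumptions):

```json
{
  "model": {
    "flavor": "anthropic",
    "apiKey": "your-api-key",
    "model": "claude-opus-4-6-20260202"
  },
  "knowledgeGraphModel": {
    "flavor": "openai",
    "apiKey": "your-api-key",
    "model": "gpt-4o-mini"
  }
}
```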
If knowledgeGraphModel is omitted or empty, Rowboat uses the main model for all operations.

Connection Timeout
Rowboat tests model connections before saving:
- Cloud providers: 8-second timeout
- Local providers (Ollama, OpenAI-compatible): 60-second timeout
Models Catalog
Rowboat caches a catalog of available models for OpenAI, Anthropic, and Google. For local providers (Ollama, OpenAI-compatible), you’ll type the model name manually since available models vary by installation.
Troubleshooting
Connection Test Fails
- Verify your API key (cloud providers) or base URL (local providers)
- For local providers, make sure the server is running and the model is loaded; connection tests allow up to 60 seconds
Model Not Found
For cloud providers:
- Verify the model name matches the provider’s documentation
- Check if you have access to that model (e.g., GPT-5 requires special access)
For local providers:
- Check your server’s available models endpoint
- Ensure the model is loaded and ready
Knowledge Graph Model Not Working
- Test the model separately by setting it as the main model
- Check that it supports the same capabilities (function calling, etc.)
- Verify sufficient context window (knowledge graph operations can be token-heavy)
Best Practices
Start with defaults
Use recommended models first (gpt-5.2, claude-opus-4-6, etc.)
Test before saving
Always use “Test & Save” to verify your configuration works
Consider costs
Use a cheaper model for knowledge graph if you process many emails
Try local models
Ollama with llama3.3:70b is comparable to GPT-4 and completely private
Next Steps
Explore Features
Learn what you can do with your configured models
Understand Workspace
Explore the ~/.rowboat/ directory structure