Configuration File
The models configuration is stored in `~/.rowboat/config/models.json`.

Configuration Schema
Schema Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `provider.flavor` | string | Yes | Provider type (see supported providers below) |
| `provider.apiKey` | string | Conditional | API key (not required for Ollama) |
| `provider.baseURL` | string | No | Custom API endpoint |
| `provider.headers` | object | No | Additional HTTP headers |
| `model` | string | Yes | Model identifier |
| `knowledgeGraphModel` | string | No | Model for knowledge graph processing (defaults to main model) |
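A complete `models.json` using every field might look like the following sketch. The `"openai"` flavor string, the header name, and the placeholder key are illustrative assumptions — substitute the values your provider and Rowboat version expect:

```json
{
  "provider": {
    "flavor": "openai",
    "apiKey": "your-api-key",
    "baseURL": "https://api.openai.com/v1",
    "headers": {
      "X-Custom-Header": "value"
    }
  },
  "model": "gpt-4o",
  "knowledgeGraphModel": "gpt-4o-mini"
}
```

In most setups only `provider.flavor`, `provider.apiKey`, and `model` are needed; `baseURL` and `headers` are for custom endpoints.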
Supported Providers
OpenAI
Use OpenAI’s GPT models.

Default baseURL: `https://api.openai.com/v1`

Popular models:
- `gpt-4o` - Latest GPT-4 optimized
- `gpt-4o-mini` - Faster, more affordable
- `gpt-4-turbo` - Previous generation
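A minimal OpenAI config might look like this (the `"openai"` flavor string and placeholder key are assumptions):

```json
{
  "provider": {
    "flavor": "openai",
    "apiKey": "your-openai-api-key"
  },
  "model": "gpt-4o"
}
```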
Anthropic
Use Claude models from Anthropic.

Default baseURL: `https://api.anthropic.com/v1`

Popular models:
- `claude-sonnet-4-5` - Latest Sonnet
- `claude-opus-4` - Most capable
- `claude-haiku-4` - Fast and efficient
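A minimal Anthropic config might look like this (the `"anthropic"` flavor string and placeholder key are assumptions):

```json
{
  "provider": {
    "flavor": "anthropic",
    "apiKey": "your-anthropic-api-key"
  },
  "model": "claude-sonnet-4-5"
}
```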
Google
Use Google’s Gemini models.

Default baseURL: `https://generativelanguage.googleapis.com/v1beta`

Popular models:
- `gemini-2.5-pro` - Latest Pro model
- `gemini-2.5-flash` - Fast responses
- `gemini-pro-1.5` - Previous generation
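A minimal Gemini config might look like this (the `"google"` flavor string and placeholder key are assumptions):

```json
{
  "provider": {
    "flavor": "google",
    "apiKey": "your-google-api-key"
  },
  "model": "gemini-2.5-flash"
}
```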
Ollama (Local)
Run models locally with Ollama.

Default baseURL: `http://localhost:11434`

No API key required for Ollama. Make sure Ollama is running locally.

Popular models:
- `llama3.1` - Meta’s Llama 3.1
- `mistral` - Mistral 7B
- `mixtral` - Mixtral 8x7B
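Since Ollama needs no API key, a config can omit `apiKey` entirely. A sketch, assuming the flavor string is `"ollama"`:

```json
{
  "provider": {
    "flavor": "ollama",
    "baseURL": "http://localhost:11434"
  },
  "model": "llama3.1"
}
```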
OpenRouter
Access multiple models through OpenRouter.

Default baseURL: `https://openrouter.ai/api/v1`

Special models:
- `openrouter/auto` - Automatically selects the best model
- Or use any model from their catalog
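A minimal OpenRouter config might look like this (the `"openrouter"` flavor string and placeholder key are assumptions):

```json
{
  "provider": {
    "flavor": "openrouter",
    "apiKey": "your-openrouter-api-key"
  },
  "model": "openrouter/auto"
}
```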
OpenAI-Compatible
Use any OpenAI-compatible API.

Perfect for self-hosted solutions like LM Studio, vLLM, or text-generation-webui.
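A sketch pointing at a local LM Studio server, which by default serves an OpenAI-compatible API on port 1234. The `"openai-compatible"` flavor string, the placeholder model name, and the dummy key are assumptions — local servers typically accept any key:

```json
{
  "provider": {
    "flavor": "openai-compatible",
    "apiKey": "not-needed",
    "baseURL": "http://localhost:1234/v1"
  },
  "model": "local-model"
}
```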
Environment Variables
You can use environment variables instead of hardcoding API keys. Reference the variable in the `apiKey` field of your config:
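A sketch, assuming Rowboat expands `${VAR}`-style references in `apiKey` (the exact substitution syntax may differ in your version):

```json
{
  "provider": {
    "flavor": "openai",
    "apiKey": "${OPENAI_API_KEY}"
  },
  "model": "gpt-4o"
}
```

Export the variable (e.g. `export OPENAI_API_KEY=...`) in the shell that starts Rowboat.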
Knowledge Graph Model
Optionally, specify a separate (typically faster/cheaper) model for knowledge graph processing. This model is used for extracting entities and relationships from emails and meetings, so choosing a faster model can significantly reduce processing time.
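For example, a config could pair a capable main model with a lighter one for knowledge graph work (the `"anthropic"` flavor string and placeholder key are assumptions):

```json
{
  "provider": {
    "flavor": "anthropic",
    "apiKey": "your-anthropic-api-key"
  },
  "model": "claude-sonnet-4-5",
  "knowledgeGraphModel": "claude-haiku-4"
}
```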
Custom Headers
Add custom HTTP headers via the `provider.headers` object for advanced use cases.

Testing Configuration
After creating your config file, Rowboat will automatically test the connection on startup. Check the logs for any connection errors.

Switching Models
You can switch models at any time by editing `models.json`. Changes take effect after restarting Rowboat.
Cost Optimization Tips
- Use different models for different tasks: Set a cheaper `knowledgeGraphModel` for background processing
- Try local models: Use Ollama for privacy and zero API costs
- OpenRouter auto: Let OpenRouter pick the best price/performance model
- Monitor usage: Keep track of API costs in your provider dashboard