Rowboat Desktop works with any AI provider, from hosted APIs like OpenAI and Anthropic to fully local models via Ollama. This guide covers how to configure your model setup.

Configuration File

Model settings are stored in:
~/.rowboat/config/models.json
Structure:
{
  "provider": {
    "flavor": "openai",
    "apiKey": "sk-...",
    "baseURL": "https://api.openai.com/v1",
    "headers": {}
  },
  "model": "gpt-5.2",
  "knowledgeGraphModel": "gpt-4.1"
}
You can configure a separate model for knowledge graph operations. If knowledgeGraphModel is omitted, Rowboat uses the main model for all operations.
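
For example, a minimal hand-written configuration that uses one model for everything (knowledgeGraphModel omitted) looks like this; the API key is a placeholder:

```json
{
  "provider": {
    "flavor": "openai",
    "apiKey": "sk-..."
  },
  "model": "gpt-4o-mini"
}
```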

Configuring via UI

The easiest way to configure models is through the Settings dialog:
1. Open Settings

Click the settings icon in the sidebar or press Cmd+, (Mac) / Ctrl+, (Windows/Linux).

2. Select Provider

Choose from OpenAI, Anthropic, Google, Ollama, OpenRouter, or other providers.

3. Enter Credentials

  • Cloud providers: Enter your API key
  • Local providers: Set the base URL (e.g., http://localhost:11434 for Ollama)

4. Choose Models

  • Assistant model: The main model for chat and tasks
  • Knowledge graph model: (Optional) A different model for graph operations

5. Test & Save

Click “Test & Save” to verify the connection and save your configuration.

Supported Providers

OpenAI

Use GPT models from OpenAI.
Provider flavor: openai
Required:
  • apiKey - Your OpenAI API key
Optional:
  • baseURL - Custom API endpoint (defaults to OpenAI’s official endpoint)
  • headers - Additional HTTP headers
Recommended models:
  • gpt-5.2 - Most capable (if you have access)
  • gpt-4.1 - Excellent performance
  • gpt-4o-mini - Fast and cost-effective

Anthropic

Use Claude models from Anthropic.
Provider flavor: anthropic
Required:
  • apiKey - Your Anthropic API key
Optional:
  • baseURL - Custom API endpoint
  • headers - Additional HTTP headers
Recommended models:
  • claude-opus-4-6-20260202 - Most capable Claude model
  • claude-sonnet-4-6-20260202 - Balanced performance
  • claude-3-5-sonnet-20241022 - Fast and cost-effective
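
Putting the pieces together, an Anthropic configuration might look like the following; the API key is a placeholder:

```json
{
  "provider": {
    "flavor": "anthropic",
    "apiKey": "sk-..."
  },
  "model": "claude-3-5-sonnet-20241022"
}
```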

Google AI Studio

Use Gemini models from Google.
Provider flavor: google
Required:
  • apiKey - Your Google AI Studio API key
Optional:
  • baseURL - Custom API endpoint
  • headers - Additional HTTP headers
Available models:
  • gemini-2.0-flash-exp - Latest experimental model
  • gemini-1.5-pro - Production-ready
  • gemini-1.5-flash - Fast and efficient
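
A Google AI Studio configuration follows the same shape; the API key is a placeholder:

```json
{
  "provider": {
    "flavor": "google",
    "apiKey": "AIza-..."
  },
  "model": "gemini-1.5-flash"
}
```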

Ollama (Local)

Run models locally on your machine with Ollama.
1. Install Ollama

Download from ollama.ai and install on your system.

2. Pull a model

ollama pull llama3.3:70b
# or
ollama pull qwen2.5-coder:32b

3. Verify it's running

ollama list

4. Configure Rowboat

Set provider to “Ollama (Local)” and enter the model name.
Ollama connection tests have a 60-second timeout (vs. 8 seconds for cloud providers) to accommodate model loading time.
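
A hand-written Ollama entry in models.json would follow the same shape as the cloud examples, with the base URL pointing at the local server instead of an API key. Note that the flavor string below ("ollama") is an assumption; check the models.json that the Settings dialog writes for the exact value your version uses:

```json
{
  "provider": {
    "flavor": "ollama",
    "baseURL": "http://localhost:11434"
  },
  "model": "llama3.3:70b"
}
```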

OpenRouter

Access multiple models with one API key via OpenRouter.
Provider flavor: openrouter
Required:
  • apiKey - Your OpenRouter API key
Optional:
  • baseURL - Custom endpoint (defaults to OpenRouter’s API)
  • headers - Additional headers (e.g., for site identification)
Example models:
  • openai/gpt-4-turbo
  • anthropic/claude-3-opus
  • google/gemini-pro-1.5
  • meta-llama/llama-3.3-70b-instruct
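
For instance, an OpenRouter configuration with site-identification headers might look like this. The HTTP-Referer and X-Title header names follow OpenRouter's convention for identifying your app; the key and referer values are placeholders:

```json
{
  "provider": {
    "flavor": "openrouter",
    "apiKey": "sk-...",
    "headers": {
      "HTTP-Referer": "https://your-site.example",
      "X-Title": "Rowboat Desktop"
    }
  },
  "model": "anthropic/claude-3-opus"
}
```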

Vercel AI Gateway

Route requests through Vercel’s AI Gateway for observability and caching.
Provider flavor: aigateway
Required:
  • apiKey - Your provider’s API key (OpenAI, Anthropic, etc.)
  • baseURL - Your AI Gateway endpoint from vercel.com/dashboard
Optional:
  • headers - Additional HTTP headers
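
As a sketch, an AI Gateway configuration might look like the following; the baseURL here is a placeholder, so copy your real endpoint from vercel.com/dashboard:

```json
{
  "provider": {
    "flavor": "aigateway",
    "apiKey": "sk-...",
    "baseURL": "https://your-gateway-endpoint.example"
  },
  "model": "gpt-4o-mini"
}
```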

OpenAI-Compatible APIs

Use any OpenAI-compatible API (LM Studio, LocalAI, etc.).
Provider flavor: openai-compatible
Required:
  • baseURL - Your API endpoint (e.g., http://localhost:1234/v1 for LM Studio)
  • model - The model name to use
Optional:
  • apiKey - API key if required by your server
  • headers - Additional HTTP headers
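
As a sketch, a configuration for LM Studio on its default port might look like this. The model name is a placeholder for whatever your server has loaded, and apiKey is omitted since LM Studio typically does not require one; add it as shown in earlier examples if your server does:

```json
{
  "provider": {
    "flavor": "openai-compatible",
    "baseURL": "http://localhost:1234/v1"
  },
  "model": "your-local-model-name"
}
```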

Advanced Configuration

Custom Headers

Add custom HTTP headers to requests:
{
  "provider": {
    "flavor": "openai",
    "apiKey": "sk-...",
    "headers": {
      "X-Custom-Header": "value",
      "Organization": "org-..."
    }
  },
  "model": "gpt-4"
}
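
If you edit models.json by hand, a quick way to catch syntax errors before restarting Rowboat is to run the file through a JSON parser. The sketch below validates a sample written to /tmp; point the last command at ~/.rowboat/config/models.json to check your real config:

```shell
# Write a sample config to a temp path (stand-in for ~/.rowboat/config/models.json)
cat > /tmp/models.json <<'EOF'
{
  "provider": { "flavor": "openai", "apiKey": "sk-..." },
  "model": "gpt-4o-mini"
}
EOF

# Parse it with Python's stdlib JSON formatter; prints "valid JSON" on success
python3 -m json.tool /tmp/models.json > /dev/null && echo "valid JSON"
```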

Separate Knowledge Graph Model

Use a different model for knowledge graph operations:
{
  "provider": {
    "flavor": "openai",
    "apiKey": "sk-..."
  },
  "model": "gpt-5.2",
  "knowledgeGraphModel": "gpt-4o-mini"
}
Why use a separate model?
  • Cost optimization - Use a cheaper model for graph operations
  • Speed - Use a faster model for background processing
  • Quality - Use a more capable model for chat, simpler one for extraction
If knowledgeGraphModel is omitted or empty, Rowboat uses the main model for all operations.

Connection Timeout

Rowboat tests model connections before saving:
  • Cloud providers: 8-second timeout
  • Local providers (Ollama, OpenAI-compatible): 60-second timeout
This allows time for local models to load into memory on first request.

Models Catalog

Rowboat caches a catalog of available models for OpenAI, Anthropic, and Google:
~/.rowboat/config/models.dev.json
This catalog powers the model dropdown in Settings. It’s automatically fetched and cached when you open the Settings dialog.
For local providers (Ollama, OpenAI-compatible), you’ll type the model name manually since available models vary by installation.

Troubleshooting

Connection Test Fails

1. Verify API key

Ensure your API key is valid and has not expired.

2. Check base URL

For local providers, ensure the service is running:

# Ollama
ollama list

# LM Studio - check the server tab

3. Test manually

# OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# Ollama
curl http://localhost:11434/api/tags

4. Check firewall

Ensure your firewall allows outbound connections (cloud) or localhost connections (local).

Model Not Found

For cloud providers:
  • Verify the model name matches the provider’s documentation
  • Check if you have access to that model (e.g., GPT-5 requires special access)
For Ollama:
ollama list  # See available models
ollama pull llama3.3:70b  # Download if needed
For OpenAI-compatible:
  • Check your server’s available models endpoint
  • Ensure the model is loaded and ready

Knowledge Graph Model Not Working

  1. Test the model separately by setting it as the main model
  2. Check that it supports the same capabilities (function calling, etc.)
  3. Verify sufficient context window (knowledge graph operations can be token-heavy)

Best Practices

Start with defaults

Use recommended models first (gpt-5.2, claude-opus-4-6, etc.)

Test before saving

Always use “Test & Save” to verify your configuration works

Consider costs

Use a cheaper model for knowledge graph if you process many emails

Try local models

Ollama with llama3.3:70b approaches GPT-4-level quality on many tasks, and everything stays on your machine

Next Steps

Explore Features

Learn what you can do with your configured models

Understand Workspace

Explore the ~/.rowboat/ directory structure
