Grip supports 15+ LLM providers through a unified configuration. Each provider entry specifies an API key, an optional base URL, and an optional default model.

Configuration Structure

{
  "providers": {
    "provider_name": {
      "api_key": "your-api-key",
      "api_base": "https://api.example.com/v1",
      "default_model": "model-name"
    }
  }
}
providers.{name}.api_key
SecretStr
required
API key for this provider. Stored securely and never logged. Can be set via environment variable: GRIP_PROVIDERS__{NAME}__API_KEY
providers.{name}.api_base
string
default:""
Custom API base URL. Leave empty to use the provider’s default endpoint.
providers.{name}.default_model
string
default:""
Default model when using this provider. Optional.
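The GRIP_PROVIDERS__{NAME}__API_KEY convention uppercases the provider name and joins the config path with double underscores. A shell sketch of the mapping (the `deepseek` example is illustrative; the naming rule comes from the convention above):

```shell
# Derive the GRIP-prefixed environment variable for a provider's API key
provider="deepseek"
var_name="GRIP_PROVIDERS__$(echo "$provider" | tr '[:lower:]' '[:upper:]')__API_KEY"
echo "$var_name"   # GRIP_PROVIDERS__DEEPSEEK__API_KEY

# Exporting it has the same effect as setting providers.deepseek.api_key
export "$var_name=sk-..."
```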

Supported Providers

Cloud Providers

Best for: Access to 100+ models from multiple providers
providers.openrouter.api_key
string
required
OpenRouter API key from https://openrouter.ai/keys
# Set via CLI
grip config set providers.openrouter.api_key "sk-or-..."

# Or environment variable
export OPENROUTER_API_KEY="sk-or-..."
Popular models:
  • openrouter/anthropic/claude-opus-4.6
  • openrouter/anthropic/claude-sonnet-4.6
  • openrouter/openai/gpt-5.2
  • openrouter/google/gemini-2.5-pro
  • openrouter/x-ai/grok-4.1-fast
Default API base: https://openrouter.ai/api/v1
Best for: Claude models directly from Anthropic
providers.anthropic.api_key
string
required
Anthropic API key from https://console.anthropic.com/
grip config set providers.anthropic.api_key "sk-ant-..."

# Or environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
Available models:
  • anthropic/claude-sonnet-4-20250514
  • anthropic/claude-haiku-4-5-20251001
  • anthropic/claude-opus-4-6 (via OpenRouter)
Default API base: https://api.anthropic.com/v1
Best for: GPT models and o1 reasoning models
providers.openai.api_key
string
required
OpenAI API key from https://platform.openai.com/api-keys
grip config set providers.openai.api_key "sk-proj-..."

export OPENAI_API_KEY="sk-proj-..."
Available models:
  • openai/gpt-4o
  • openai/gpt-4o-mini
  • openai/o1
Default API base: https://api.openai.com/v1
Best for: Cost-effective reasoning models
providers.deepseek.api_key
string
required
DeepSeek API key from https://platform.deepseek.com/
grip config set providers.deepseek.api_key "sk-..."

export DEEPSEEK_API_KEY="sk-..."
Available models:
  • deepseek/deepseek-chat
  • deepseek/deepseek-reasoner
Default API base: https://api.deepseek.com/v1
Best for: Ultra-fast inference with LPU acceleration
providers.groq.api_key
string
required
Groq API key from https://console.groq.com/
grip config set providers.groq.api_key "gsk_..."

export GROQ_API_KEY="gsk_..."
Available models:
  • groq/llama-3.3-70b-versatile
  • groq/mixtral-8x7b-32768
Default API base: https://api.groq.com/openai/v1
Best for: Multimodal models with large context windows
providers.gemini.api_key
string
required
Google AI Studio API key from https://aistudio.google.com/
grip config set providers.gemini.api_key "AIza..."

export GEMINI_API_KEY="AIza..."
Available models:
  • gemini/gemini-2.5-pro
  • gemini/gemini-2.5-flash
Default API base: https://generativelanguage.googleapis.com/v1beta/openai
Best for: Chinese language tasks and Alibaba Cloud integration
providers.qwen.api_key
string
required
Alibaba Cloud DashScope API key (used for Qwen models)
grip config set providers.qwen.api_key "sk-..."

export DASHSCOPE_API_KEY="sk-..."
Available models:
  • qwen/qwen-max
  • qwen/qwen-turbo
Default API base: https://dashscope.aliyuncs.com/compatible-mode/v1
Best for: Chinese language models
providers.minimax.api_key
string
required
MiniMax API key from https://api.minimax.chat/
grip config set providers.minimax.api_key "..."

export MINIMAX_API_KEY="..."
Available models:
  • minimax/abab6.5s-chat
Default API base: https://api.minimax.chat/v1
Best for: Long-context Chinese language models
providers.moonshot.api_key
string
required
Moonshot API key from https://platform.moonshot.cn/
grip config set providers.moonshot.api_key "sk-..."

export MOONSHOT_API_KEY="sk-..."
Available models:
  • moonshot/moonshot-v1-128k
Default API base: https://api.moonshot.cn/v1
Best for: Cloud-hosted Ollama models
providers.ollama_cloud.api_key
string
required
Ollama API key from https://ollama.com/
grip config set providers.ollama_cloud.api_key "..."

export OLLAMA_API_KEY="..."
Available models:
  • ollama_cloud/llama3.3
  • ollama_cloud/qwen2.5
  • ollama_cloud/deepseek-r1
  • ollama_cloud/mistral
Default API base: https://ollama.com/v1

Local Providers

Best for: Privacy-focused local inference. No API key required; Grip connects to a local Ollama instance.
# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

# Configure Grip to use local Ollama
grip config set agents.defaults.model "ollama/llama3.2"
Available models: Any model installed via ollama pull
Default API base: http://localhost:11434/v1
Best for: CPU/GPU inference without dependencies
# Start llama.cpp server
./llama-server -m model.gguf --port 8080

# Configure custom API base
grip config set providers.llamacpp.api_base "http://localhost:8080/v1"
Default API base: http://localhost:8080/v1
Best for: GUI-based local model management. No API key required; connects to LM Studio’s local server.
# Default LM Studio endpoint
grip config set providers.lmstudio.api_base "http://localhost:1234/v1"
Default API base: http://localhost:1234/v1

Setting API Keys

Via CLI

# Interactive config editor
grip config

# Set specific key
grip config set providers.anthropic.api_key "sk-ant-..."

Via Environment Variables

# Provider-specific env vars (preferred)
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-proj-..."
export OPENROUTER_API_KEY="sk-or-..."

# Or GRIP-prefixed vars
export GRIP_PROVIDERS__ANTHROPIC__API_KEY="sk-ant-..."

Via Config File

Edit ~/.grip/config.json:
{
  "providers": {
    "anthropic": {
      "api_key": "sk-ant-..."
    },
    "openai": {
      "api_key": "sk-proj-..."
    }
  }
}
API keys in config.json are stored as plain text. Use environment variables or system keychain for production deployments.
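If you do keep keys in config.json, tightening file permissions limits exposure on shared machines. This is a minimal mitigation, not a substitute for environment variables or a secrets manager:

```shell
# Restrict the config file so only your user can read or write it
chmod 600 ~/.grip/config.json

# Verify: the mode should show as -rw------- (600)
ls -l ~/.grip/config.json
```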

Custom Providers (OpenAI-Compatible)

Grip supports any OpenAI-compatible API:
{
  "providers": {
    "custom": {
      "api_key": "your-key",
      "api_base": "https://your-api.example.com/v1",
      "default_model": "your-model-name"
    }
  }
}
# Use custom provider
grip chat --model "custom/your-model-name"

Provider Detection

Grip uses two methods to detect providers:
  1. Prefix-based: anthropic/claude-sonnet-4 → Anthropic provider
  2. Explicit: --provider openrouter or agents.defaults.provider
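Prefix-based detection splits the model string on its first /: everything before the slash is treated as the provider name, and the remainder is the model. A shell sketch of that rule (illustrative only; Grip performs this internally):

```shell
model="anthropic/claude-sonnet-4"
provider="${model%%/*}"    # text before the first slash
model_name="${model#*/}"   # text after the first slash
echo "$provider"     # anthropic
echo "$model_name"   # claude-sonnet-4
```

Note that for OpenRouter-style IDs such as openrouter/anthropic/claude-opus-4.6, only the first segment (openrouter) is the provider; the rest is passed through as the model name.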

Overriding Detection

Use explicit provider when model names are ambiguous:
{
  "agents": {
    "defaults": {
      "model": "openai/gpt-oss-120b",
      "provider": "openrouter"
    }
  }
}
This ensures openai/gpt-oss-120b routes through OpenRouter instead of OpenAI.

SecretStr Fields

API keys use Pydantic’s SecretStr type for security:
  • Never logged or printed
  • Masked in error messages
  • Scrubbed from tool outputs
  • Serialized as plain text in config.json
The SecretStr type provides runtime protection, but keys are stored unencrypted in config.json. For production, use environment variables or a secrets manager.
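In the same spirit of keeping secrets out of logs, you can check whether a key is present without ever echoing its value:

```shell
# Report presence only; never print the key itself
if [ -n "$ANTHROPIC_API_KEY" ]; then
  echo "ANTHROPIC_API_KEY is set"
else
  echo "ANTHROPIC_API_KEY is not set"
fi
```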

Testing Provider Configuration

# Test a specific provider
grip chat --model "anthropic/claude-sonnet-4" --message "Hello"

# Check provider detection
grip config get agents.defaults.model
grip config get agents.defaults.provider

Best Practices

  • Use environment variables for API keys in CI/CD
  • Never commit config.json with API keys to version control
  • Rotate keys regularly
  • Use separate keys for development and production
  • Use OpenRouter for access to multiple providers with one key
  • Set cheaper models in model_tiers.low for simple tasks
  • Use local providers (Ollama) for development
  • Monitor usage via provider dashboards
  • Configure fallback providers in model_tiers
  • Use OpenRouter for automatic failover
  • Set api_base for self-hosted deployments
  • Test provider connectivity before deployment

Troubleshooting

# Verify API key is set
grip config get providers.anthropic.api_key

# Check environment variables
echo $ANTHROPIC_API_KEY

# Test with explicit key
ANTHROPIC_API_KEY="sk-ant-..." grip chat --model "anthropic/claude-sonnet-4"
If a provider is not detected correctly:
  • Verify that the provider name matches a supported provider
  • Check the model prefix (e.g., anthropic/ not claude/)
  • Use the explicit --provider flag to override detection
If you hit rate limits:
  • Switch to a different provider temporarily
  • Use OpenRouter for automatic rate limit handling
  • Configure max_daily_tokens to control usage
