# AI provider configuration
Configure AI providers in `~/.clanker.yaml`. See `.clanker.example.yaml:4` for the full example.
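The layout below is a sketch assembled from the keys documented later on this page (`default_provider`, `providers.<name>.api_key`, `model`, `api_key_env`); treat `.clanker.example.yaml` as authoritative:

```yaml
ai:
  default_provider: openai
  providers:
    openai:
      api_key_env: OPENAI_API_KEY   # read the key from the environment
      model: gpt-4o
    gemini-api:
      api_key: "your-gemini-api-key"
      model: gemini-2.5-flash
```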
## Supported providers
### Gemini (via API)

Provider name: `gemini-api`

Available models:

- `gemini-2.5-flash` (recommended, default)
- `gemini-2.0-flash-exp`
- `gemini-exp-1206`
- `gemini-pro`

See `cmd/ask.go:1108` for API key resolution.
### Gemini (via Google Cloud)

Provider name: `gemini`

Authentication: uses Application Default Credentials (no API key needed)

Configuration:
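A minimal sketch, assuming the same `providers` layout as the other profiles (no `api_key` entry, since Application Default Credentials are used):

```yaml
ai:
  providers:
    gemini:
      model: gemini-2.5-flash
```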
### OpenAI

Provider name: `openai`

Available models:

- `gpt-5` (latest)
- `gpt-4.5-turbo`
- `gpt-4o`
- `gpt-4-turbo`
- `gpt-3.5-turbo`

See `cmd/ask.go:1126`.
### Anthropic Claude

Provider name: `anthropic`

Available models:

- `claude-3-5-sonnet-20241022` (recommended)
- `claude-3-opus-20240229`
- `claude-3-sonnet-20240229`
- `claude-3-haiku-20240307`

See `cmd/ask.go:1143`.
### AWS Bedrock

Provider name: `bedrock`

Available models:

- `us.anthropic.claude-sonnet-4-20250514-v1:0` (Claude Sonnet 4)
- `us.anthropic.claude-3-5-sonnet-20241022-v2:0`
- `anthropic.claude-3-opus-20240229-v1:0`
- `anthropic.claude-3-sonnet-20240229-v1:0`

Authentication: requires AWS credentials with `bedrock:InvokeModel` permissions:
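A minimal illustrative IAM policy granting that action; in practice, scope `Resource` to the model ARNs you actually invoke:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "*"
    }
  ]
}
```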
### DeepSeek

Provider name: `deepseek`

Available models:

- `deepseek-chat` (general purpose)
- `deepseek-reasoner` (advanced reasoning)

See `cmd/ask.go:1162`.
### MiniMax

Provider name: `minimax`

Available models:

- `MiniMax-M2.5` (latest)
- `MiniMax-M2.5-highspeed`
- `MiniMax-M2.1`
- `MiniMax-M2.1-highspeed`
- `MiniMax-M2`

See `cmd/ask.go:1180`.
## Using profiles
### Default provider
The `default_provider` setting is used for all queries unless overridden:
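For example, a sketch using the `ai.default_provider` key documented under profile resolution:

```yaml
ai:
  default_provider: anthropic   # used unless --ai-profile says otherwise
```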
### Override with flag
Use `--ai-profile` to override the default provider:
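As a sketch (assuming an `ask` subcommand, as `cmd/ask.go` suggests; the exact syntax may differ in your build):

```shell
clanker ask --ai-profile openai "why did this deploy fail?"
```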
For example, `--ai-profile openai` selects the `openai` provider configuration from your config file.
### Override model

Override the model for a specific provider. See `cmd/ask.go:1235` for the model override logic.
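A sketch combining the profile and model flags (assuming an `ask` subcommand; `--openai-model` appears in the flag list under profile resolution):

```shell
clanker ask --ai-profile openai --openai-model gpt-4o "summarize these logs"
```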
### Override API key
Provide API keys at runtime without storing them in config, e.g. with `--openai-key`.

### Profile resolution order

Clanker resolves AI configuration in this order (highest priority first):

1. Command-line flags: `--ai-profile`, `--openai-key`, `--openai-model`, etc.
2. Config file provider settings: `ai.providers.<provider>.api_key`, `ai.providers.<provider>.model`
3. Environment variables: `OPENAI_API_KEY`, `GEMINI_API_KEY`, etc.
4. Config file defaults: `ai.default_provider`
5. Hardcoded fallback: `openai`
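A sketch tying these layers together, using the keys documented on this page:

```yaml
# ~/.clanker.yaml
ai:
  default_provider: openai        # step 4: used when no flag overrides it
  providers:
    openai:
      model: gpt-4o               # step 2: beaten only by --openai-model
      api_key_env: OPENAI_API_KEY # step 3: which variable holds the key
```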
## API key resolution
### Gemini API key

Resolution order:

1. `--gemini-key` flag
2. `ai.providers.gemini-api.api_key` in config
3. Environment variable named by `ai.providers.gemini-api.api_key_env`
4. `GEMINI_API_KEY` environment variable

See `cmd/ask.go:1108`.
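For instance, `api_key_env` lets a profile read a non-default variable (a sketch based on the keys listed above):

```yaml
ai:
  providers:
    gemini-api:
      api_key_env: MY_GEMINI_KEY   # checked before the GEMINI_API_KEY fallback
```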
### OpenAI API key

Resolution order:

1. `--openai-key` flag
2. `ai.providers.openai.api_key` in config
3. Environment variable named by `ai.providers.openai.api_key_env`
4. `OPENAI_API_KEY` environment variable

See `cmd/ask.go:1126`.
### Other providers

A similar resolution order applies to:

- Anthropic (`--anthropic-key`, `ANTHROPIC_API_KEY`)
- DeepSeek (`--deepseek-key`, `DEEPSEEK_API_KEY`)
- MiniMax (`--minimax-key`, `MINIMAX_API_KEY`)
## Model override logic

When you provide a model flag, Clanker updates the provider's model setting dynamically. See `cmd/ask.go:1235`.
## Use cases for custom profiles
### Development vs. production
Use different models for dev and prod:
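One way to sketch this with the keys documented above (whether Clanker supports multiple named config files is not covered here, so this shows a per-environment copy of the config):

```yaml
# Development copy of .clanker.yaml: fast, cheap default
ai:
  default_provider: gemini-api
  providers:
    gemini-api:
      model: gemini-2.5-flash
# A production copy would pin a stronger model, e.g.
# claude-3-5-sonnet-20241022 under the anthropic provider.
```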
### Cost optimization
Use cheaper models for simple queries:
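For example, a sketch using the documented `model` key; reserve stronger models for flag overrides on hard queries:

```yaml
ai:
  default_provider: deepseek
  providers:
    deepseek:
      model: deepseek-chat   # cheaper general-purpose model
```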
### Privacy and compliance
Use AWS Bedrock for data residency requirements:
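A sketch routing all queries through Bedrock in your own AWS account, using the documented keys:

```yaml
ai:
  default_provider: bedrock
  providers:
    bedrock:
      model: us.anthropic.claude-sonnet-4-20250514-v1:0
```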
### Advanced reasoning
Use specialized models for complex tasks:
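For example, a sketch selecting DeepSeek's reasoning variant:

```yaml
ai:
  providers:
    deepseek:
      model: deepseek-reasoner   # advanced-reasoning variant
```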
### Multi-tenant environments
Different API keys per team:
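A sketch using the documented `api_key_env` key, with a hypothetical per-team variable name:

```yaml
# Each team exports its own variable; the config only names it
ai:
  providers:
    openai:
      api_key_env: TEAM_A_OPENAI_KEY
```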
## Maker mode profiles

Maker mode (infrastructure plan generation) also respects AI profiles. See `cmd/ask.go:260` for maker mode AI resolution.
## Debugging profiles
Check which provider and model are being used; see the Debugging page in related resources for details.

## Best practices
### Use environment variables
Store API keys in environment variables, not config files:
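For example, with OpenAI (`OPENAI_API_KEY` is the documented default lookup):

```shell
# Keep the secret out of ~/.clanker.yaml; Clanker falls back to this
# environment variable (or the one named by api_key_env) at runtime.
export OPENAI_API_KEY="sk-example-not-a-real-key"
```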
### Set a default provider

Configure a default provider to avoid specifying `--ai-profile` on every invocation.

### Use cost-effective models
Default to cheaper/faster models, override for complex tasks:
### Test profiles
Verify provider configuration before relying on it:
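A quick smoke test, sketched with an assumed `ask` subcommand:

```shell
# Run a trivial query against each profile you intend to rely on
clanker ask --ai-profile anthropic "reply with ok"
clanker ask --ai-profile deepseek "reply with ok"
```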
## Troubleshooting

### Provider not found

Error:

### Missing API key

Error:

### Wrong model

Error:

### Bedrock permissions

Error:

## Related resources
- Configuration: config file structure and provider setup
- Debugging: debug provider and model resolution
- Ask command: CLI flags for AI profiles
- Maker mode: infrastructure plan generation