Overview
Loaf supports multiple AI providers and models with dynamic discovery, caching, and flexible thinking level controls. You can switch between OpenAI, OpenRouter, and Antigravity providers with model-specific configurations.
Supported Providers
- OpenAI - Direct OpenAI API with OAuth or API key
- OpenRouter - Multi-provider routing service
- Antigravity - Google Cloud-based AI platform
Model Discovery
Dynamic Model Lists
Loaf fetches model catalogs from providers and caches them locally.
OpenAI Models
- Remote catalog - Fetch fresh model list from OpenAI
- Fresh cache - Use cached list if < 5 minutes old
- Stale cache - Use older cache if remote fails
- Fallback - Use hardcoded defaults if all else fails
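The fallback chain above can be sketched as follows. The names (`resolveModels`, `CacheEntry`, `FALLBACK_MODELS`) are illustrative, not Loaf's actual API, and the real lookup is asynchronous:

```typescript
interface CacheEntry {
  models: string[];
  fetchedAt: number; // epoch milliseconds
}

const FRESH_TTL_MS = 5 * 60 * 1000; // "fresh" means fetched < 5 minutes ago
const FALLBACK_MODELS = ["gpt-4.1", "gpt-4o-mini"]; // illustrative hardcoded defaults

function resolveModels(
  fetchRemote: () => string[],
  cache: CacheEntry | null,
  now: number,
): string[] {
  // Fresh cache: skip the network entirely.
  if (cache && now - cache.fetchedAt < FRESH_TTL_MS) return cache.models;
  try {
    // Remote catalog: fetch a fresh list from the provider.
    return fetchRemote();
  } catch {
    // Stale cache: better than nothing when the remote call fails.
    if (cache) return cache.models;
    // Fallback: hardcoded defaults when all else fails.
    return FALLBACK_MODELS;
  }
}
```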
OpenRouter Models
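OpenRouter publishes its full catalog at GET `https://openrouter.ai/api/v1/models`. A sketch of parsing that response — the `{ data: [...] }` envelope and per-model fields follow OpenRouter's public API, while the helper itself is illustrative, not Loaf's code:

```typescript
interface OpenRouterModel {
  id: string; // e.g. "openai/gpt-4o"
  name: string;
  context_length?: number;
}

function parseOpenRouterCatalog(body: { data: OpenRouterModel[] }): OpenRouterModel[] {
  return body.data
    .filter((m) => m.id.length > 0) // drop malformed entries
    .sort((a, b) => a.id.localeCompare(b.id)); // stable, predictable order
}
```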
Model Options
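All providers surface their models through one unified descriptor. A hedged sketch of what such a descriptor might look like — the field names here are assumptions for illustration, not Loaf's actual `ModelOption` definition:

```typescript
type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";

interface ModelOption {
  id: string;                      // provider-specific model ID
  name: string;                    // human-readable display name
  provider: "openai" | "openrouter" | "antigravity";
  contextWindow?: number;          // in tokens, when the provider reports it
  thinkingLevels: ThinkingLevel[]; // levels this model supports
}

const example: ModelOption = {
  id: "example-model",
  name: "Example Model",
  provider: "openai",
  contextWindow: 128000,
  thinkingLevels: ["low", "medium", "high"],
};
```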
Model Metadata
Model Identification
Extract model names from model IDs.
Thinking Levels
Available Levels
- OFF - No extended reasoning
- MINIMAL - Minimal reasoning
- LOW - Light reasoning
- MEDIUM - Moderate reasoning (typical default)
- HIGH - Deep reasoning
- XHIGH - Extra deep reasoning (Codex models only)
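Treating the levels as an ordered list makes comparisons and "middle of the range" selection trivial. This is a sketch, not necessarily Loaf's actual representation:

```typescript
const THINKING_LEVELS = ["off", "minimal", "low", "medium", "high", "xhigh"] as const;
type ThinkingLevel = (typeof THINKING_LEVELS)[number];

// Negative when a is weaker than b, positive when stronger, zero when equal.
function compareLevels(a: ThinkingLevel, b: ThinkingLevel): number {
  return THINKING_LEVELS.indexOf(a) - THINKING_LEVELS.indexOf(b);
}
```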
Model-Specific Levels
Different models support different thinking levels.
Default Thinking Level
Loaf selects a default thinking level based on:
- Server-provided default (if available)
- Middle of the supported range
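A sketch of that selection rule: honor the server's default when it is valid for the model, otherwise pick the middle of the model's supported range (function name is illustrative):

```typescript
function defaultThinkingLevel(
  supported: string[],   // ordered weakest → strongest
  serverDefault?: string,
): string {
  // Server-provided default wins, but only if the model actually supports it.
  if (serverDefault && supported.includes(serverDefault)) return serverDefault;
  // Otherwise fall back to the middle of the supported range.
  return supported[Math.floor((supported.length - 1) / 2)];
}
```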
Model Caching
Cache Storage
Models are cached in ~/.loaf/models-cache.json:
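A plausible shape for that file — the exact schema lives in src/models.ts, and the field names below are illustrative only:

```json
{
  "openai": {
    "fetchedAt": "2024-06-01T12:00:00Z",
    "models": ["gpt-4.1", "gpt-4o"]
  }
}
```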
Cache TTL
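The freshness rule can be sketched as a simple predicate: cache entries younger than the TTL (5 minutes for OpenAI, per the discovery flow above) count as fresh:

```typescript
const CACHE_TTL_MS = 5 * 60 * 1000;

function isFresh(fetchedAtMs: number, nowMs: number): boolean {
  return nowMs - fetchedAtMs < CACHE_TTL_MS;
}
```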
Reading from Cache
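A hedged sketch of the read path, treating a missing or corrupt file as a plain cache miss. The default path mirrors ~/.loaf/models-cache.json; the function itself is illustrative:

```typescript
import { readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

function readModelsCache(
  path: string = join(homedir(), ".loaf", "models-cache.json"),
): unknown | null {
  try {
    return JSON.parse(readFileSync(path, "utf8"));
  } catch {
    return null; // no cache yet, or unparsable contents — just a miss
  }
}
```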
Model Ranking
OpenAI models are ranked by priority (src/models.ts:361):
- Server-provided priority (if available)
- Loaf’s internal ranking
- Alphabetical order
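The three-tier ordering above can be sketched as a comparator. `serverPriority` and `internalRank` are illustrative field names, not Loaf's actual ones:

```typescript
interface RankedModel {
  id: string;
  serverPriority?: number; // lower number = higher priority
  internalRank?: number;   // Loaf's own ordering
}

function compareModels(a: RankedModel, b: RankedModel): number {
  // 1. Server-provided priority; models without one sort after those with one.
  if (a.serverPriority !== b.serverPriority) {
    return (a.serverPriority ?? Infinity) - (b.serverPriority ?? Infinity);
  }
  // 2. Internal ranking.
  if (a.internalRank !== b.internalRank) {
    return (a.internalRank ?? Infinity) - (b.internalRank ?? Infinity);
  }
  // 3. Alphabetical tie-break.
  return a.id.localeCompare(b.id);
}
```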
Model Filtering
OpenAI Text Models
Loaf filters for text-capable models (src/models.ts:323).
Size Limits
Provider Configuration
Get default models for a provider.
Context Window Sizes
Context window information is included when available.
Usage Examples
Check Model Capabilities
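A usage sketch: verify that a model supports the thinking level you intend to request before selecting it. The `ModelOption` shape here is an assumption for illustration:

```typescript
interface ModelOption {
  id: string;
  thinkingLevels: string[];
}

function supportsLevel(model: ModelOption, level: string): boolean {
  return model.thinkingLevels.includes(level);
}

const model: ModelOption = { id: "example-model", thinkingLevels: ["low", "medium", "high"] };
// supportsLevel(model, "xhigh") → false: fall back to the model's default instead.
```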
Filter by Capability
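A usage sketch: narrow a catalog to only those models that support deep reasoning. The catalog shape is illustrative:

```typescript
interface ModelOption {
  id: string;
  thinkingLevels: string[];
}

function withLevel(models: ModelOption[], level: string): ModelOption[] {
  return models.filter((m) => m.thinkingLevels.includes(level));
}

const catalog: ModelOption[] = [
  { id: "deep-model", thinkingLevels: ["medium", "high", "xhigh"] },
  { id: "fast-model", thinkingLevels: ["off", "low"] },
];
const deep = withLevel(catalog, "high"); // keeps only "deep-model"
```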
Best Practices
Model Selection
- Check capabilities - Verify model supports required thinking levels
- Consider context - Choose models with appropriate context windows
- Test performance - Different models excel at different tasks
- Monitor costs - Higher capability models may have higher costs
Caching Strategy
- Accept stale cache - Use for offline/degraded scenarios
- Force refresh - Require fresh data for critical operations
- Handle failures - Always provide fallback defaults
Provider Switching
- Consistent interface - All providers use the same ModelOption format
- Capability parity - Check feature support per provider
- Authentication - Ensure credentials are configured for target provider
Source Code Reference
- src/models.ts - Model discovery and management (src/models.ts:1)
- src/config.ts - Provider and thinking level types (src/config.ts:1)
- src/openai.ts - OpenAI catalog integration
- src/openrouter.ts - OpenRouter model listing
- src/antigravity.ts - Antigravity model support
Related
- Authentication - Configure provider credentials
- Slash Commands - Using the /model command