Supported Providers
OpenAI
GPT-4 models for high-quality commit messages. Default: gpt-4o
Anthropic Claude
Advanced reasoning with Claude Sonnet. Default: claude-sonnet-4-5
Google Gemini
Fast and efficient with Gemini Flash. Default: gemini-2.0-flash
xAI Grok
Grok’s latest models. Default: grok-2-latest
Meta Llama
Open-weight models via the Llama API. Default: llama-3-70b-instruct
DeepSeek
Cost-effective Chinese AI provider. Default: deepseek-chat
GitHub Models
AI models via GitHub’s platform. Default: gpt-4o
Ollama
Run models locally on your machine. Default: llama3.2:latest
Free (LLM7.io)
No API key required (free tier). Default: N/A
Quick Start
Using a Specific Model
Specify a model with the --model or -m flag:
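For example (the `gw` binary name below is an assumption, not confirmed by this page; substitute whatever command your installation provides):

```shell
# Generate a commit message with a specific provider
# (`gw` is an assumed binary name)
gw commit --model claude

# Short form of the same flag
gw commit -m openai
```
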
Model Variants
Each provider supports multiple model variants. Specify one with the --model-variant or -v flag:
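A hedged sketch of combining the two flags (again assuming a `gw` binary; the variant names come from the provider tables below):

```shell
# Pick a provider and a specific variant of its model
# (`gw` is an assumed binary name)
gw commit --model openai --model-variant gpt-4o-mini

# Short forms
gw commit -m gemini -v gemini-2.0-flash
```
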
Provider Details
OpenAI
OpenAI’s GPT models are known for high-quality, coherent commit messages.
Default Variant: gpt-4o
API Key Required: Yes
API Endpoint: https://api.openai.com/v1/chat/completions
Anthropic Claude
Claude excels at understanding context and writing clear, technical commit messages.
Default Variant: claude-sonnet-4-5-20250929
API Key Required: Yes
API Endpoint: https://api.anthropic.com/v1/messages
Google Gemini
Gemini offers fast response times with good quality output.
Default Variant: gemini-2.0-flash
API Key Required: Yes
API Endpoint: https://generativelanguage.googleapis.com/v1beta/models/
xAI Grok
Grok models from xAI, known for being up to date and witty.
Default Variant: grok-2-latest
API Key Required: Yes
Meta Llama
Access to Meta’s Llama models via API.
Default Variant: llama-3-70b-instruct
API Key Required: Yes
DeepSeek
Cost-effective AI models with good performance.
Default Variant: deepseek-chat
API Key Required: Yes
GitHub Models
Access AI models through GitHub’s platform.
Default Variant: gpt-4o
API Key Required: Yes (GitHub token)
Ollama (Local)
Run AI models locally without sending data to external APIs.
Default Variant: llama3.2:latest
API Key Required: No
Requirements: Ollama must be installed and running
Free Tier (LLM7.io)
Free, anonymous access to AI models; no API key required.
Powered by: LLM7.io
API Key Required: No
Limitations:
- 8,000 characters per request
- 60 requests/hour
- 10 requests/minute
- 1 request/second
Setting Default Model
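The original example was lost in extraction; the sketch below is hypothetical, assuming both a `gw` binary and a `config` subcommand with these flag names (check your tool’s help output for the real syntax):

```shell
# Persist a preferred provider and variant so --model isn't needed each run
# (`gw config` and its flags are assumptions, not confirmed by this page)
gw config --default-model claude --default-variant claude-sonnet-4-5
```
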
Set your preferred model as the default to avoid passing --model on every run.
Model Comparison
Based on commit message quality:
- Claude (Opus/Sonnet) - Most detailed and contextual
- OpenAI (GPT-4o) - Excellent balance of quality and speed
- Gemini (2.0 Flash) - Good quality, very fast
- Grok - Good quality with personality
- DeepSeek - Solid quality, budget-friendly
- Llama - Good for technical commits
- GitHub Models - Similar to OpenAI
- Ollama - Varies by model, privacy-focused
- Free - Basic quality, rate-limited
Code Implementation
The model factory is implemented in commit_generator_factory.dart:22; the model variants are defined in model_variants.dart:10.
Best Practices
Try Multiple Models
Different models have different strengths. Try a few to find your favorite.
Use Variants Wisely
Use faster/cheaper variants for simple commits, powerful ones for complex changes.
Go Local for Privacy
Use Ollama if you’re working with sensitive code.
Set Sensible Defaults
Configure your preferred model as default to save time.
Related Features
Interactive Confirmation
Try different models during the confirmation workflow
API Key Management
Learn how to manage API keys for each provider
Configuration
Set default model and other preferences