GitWhisper supports 9 different AI providers, giving you flexibility to choose the model that best fits your needs, budget, and preferences.

Supported Providers

OpenAI

GPT-4 models for high-quality commit messages. Default: gpt-4o

Anthropic Claude

Advanced reasoning with Claude Sonnet. Default: claude-sonnet-4-5

Google Gemini

Fast and efficient with Gemini Flash. Default: gemini-2.0-flash

xAI Grok

Grok’s latest models. Default: grok-2-latest

Meta Llama

Open-weight models via the Llama API. Default: llama-3-70b-instruct

DeepSeek

Cost-effective Chinese AI provider. Default: deepseek-chat

GitHub Models

AI models via GitHub’s platform. Default: gpt-4o

Ollama

Run models locally on your machine. Default: llama3.2:latest

Free (LLM7.io)

No API key required - free tier. Default: N/A

Quick Start

Using a Specific Model

Specify a model with the --model or -m flag:
# Use Claude
gw commit --model claude

# Use Gemini
gw commit -m gemini

# Use free tier (no API key needed)
gw commit --model free

Model Variants

Each provider supports multiple model variants. Specify with --model-variant or -v:
# Use GPT-4o (default OpenAI)
gw commit --model openai --model-variant gpt-4o

# Use GPT-4o mini (faster, cheaper)
gw commit --model openai --model-variant gpt-4o-mini

# Use Claude Opus (most capable)
gw commit --model claude --model-variant claude-opus-4-20250514

Provider Details

OpenAI

OpenAI’s GPT models are known for high-quality, coherent commit messages.

Default Variant: gpt-4o
API Key Required: Yes
API Endpoint: https://api.openai.com/v1/chat/completions
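As a rough sketch of what a request to the endpoint above looks like, the snippet below builds a chat-completions payload asking for a commit message. The prompt wording and the sample diff are illustrative assumptions, not GitWhisper’s actual prompt:

```python
import json

def build_openai_request(diff: str, variant: str = "gpt-4o") -> dict:
    """Build an OpenAI chat-completions payload for a commit message.

    The system prompt here is a placeholder; GitWhisper's real prompt differs.
    """
    return {
        "model": variant,
        "messages": [
            {
                "role": "system",
                "content": "Write a concise conventional commit message for this diff.",
            },
            {"role": "user", "content": diff},
        ],
    }

payload = build_openai_request("diff --git a/main.dart b/main.dart ...")
print(json.dumps(payload, indent=2))
```

The same `model`/`messages` shape applies to any variant you pass with `--model-variant`.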

Anthropic Claude

Claude excels at understanding context and writing clear, technical commit messages.

Default Variant: claude-sonnet-4-5-20250929
API Key Required: Yes
API Endpoint: https://api.anthropic.com/v1/messages

Google Gemini

Gemini offers fast response times with good quality output.

Default Variant: gemini-2.0-flash
API Key Required: Yes
API Endpoint: https://generativelanguage.googleapis.com/v1beta/models/

xAI Grok

Grok models from xAI, known for being up-to-date and witty.

Default Variant: grok-2-latest
API Key Required: Yes

Meta Llama

Access to Meta’s Llama models via API.

Default Variant: llama-3-70b-instruct
API Key Required: Yes

DeepSeek

Cost-effective AI models with good performance.

Default Variant: deepseek-chat
API Key Required: Yes

GitHub Models

Access AI models through GitHub’s platform.

Default Variant: gpt-4o
API Key Required: Yes (GitHub token)

Ollama (Local)

Run AI models locally without sending data to external APIs.

Default Variant: llama3.2:latest
API Key Required: No
Requirements: Ollama must be installed and running
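The Ollama server listens on port 11434 by default and exposes a simple HTTP API. This sketch only constructs the request body you would POST to its `/api/generate` endpoint; the prompt is a placeholder, and sending it requires a running Ollama instance:

```python
import json

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(prompt: str, model: str = "llama3.2:latest") -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    stream=False asks Ollama to return a single complete JSON response
    instead of a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

req = build_ollama_request("Summarize this diff as a commit message: ...")
print(json.dumps(req))
```

Because everything stays on localhost, no code or diff content leaves your machine.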

Free Tier (LLM7.io)

Free, anonymous access to AI models - no API key required.

Powered by: LLM7.io
API Key Required: No
Limitations:
  • 8,000 characters per request
  • 60 requests/hour
  • 10 requests/minute
  • 1 request/second
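If you script around the free tier, it helps to enforce these limits client-side. The sketch below (not GitWhisper code) truncates each request to the 8,000-character cap and computes how long to wait to stay under 1 request/second; the clock is injected so the logic is easy to test:

```python
MAX_CHARS = 8_000      # per-request character cap on the free tier
MIN_INTERVAL = 1.0     # seconds between requests (1 request/second)

class FreeTierThrottle:
    """Client-side guard for the free-tier limits (illustrative sketch)."""

    def __init__(self):
        self.last_sent = None  # scheduled time of the previous request

    def prepare(self, text: str, now: float) -> tuple[str, float]:
        """Return (truncated_text, seconds_to_wait) for a request at `now`."""
        wait = 0.0
        if self.last_sent is not None:
            wait = max(0.0, MIN_INTERVAL - (now - self.last_sent))
        self.last_sent = now + wait
        return text[:MAX_CHARS], wait

throttle = FreeTierThrottle()
body, wait = throttle.prepare("x" * 10_000, now=0.0)
print(len(body), wait)  # 8000 0.0
```

A fuller version would also track a sliding window for the per-minute and per-hour caps.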

Setting Default Model

Set your preferred model as the default:
# Set default model
gw set-defaults --model claude

# Set default model with variant
gw set-defaults --model openai --model-variant gpt-4o-mini

# View current defaults
gw show-defaults
Once set, you can simply run:
gw commit  # Uses your default model

Model Comparison

A rough, subjective ranking based on commit message quality:
  1. Claude (Opus/Sonnet) - Most detailed and contextual
  2. OpenAI (GPT-4o) - Excellent balance of quality and speed
  3. Gemini (2.0 Flash) - Good quality, very fast
  4. Grok - Good quality with personality
  5. DeepSeek - Solid quality, budget-friendly
  6. Llama - Good for technical commits
  7. GitHub Models - Similar to OpenAI
  8. Ollama - Varies by model, privacy-focused
  9. Free - Basic quality, rate-limited

Code Implementation

The model factory is implemented in commit_generator_factory.dart:22:
class CommitGeneratorFactory {
  static CommitGenerator create(
    String model,
    String? apiKey, {
    String? variant,
    String? baseUrl,
  }) {
    return switch (model.toLowerCase()) {
      'claude' => ClaudeGenerator(apiKey, variant: variant),
      'openai' => OpenAIGenerator(apiKey, variant: variant),
      'gemini' => GeminiGenerator(apiKey, variant: variant),
      'grok' => GrokGenerator(apiKey, variant: variant),
      'llama' => LlamaGenerator(apiKey, variant: variant),
      'deepseek' => DeepseekGenerator(apiKey, variant: variant),
      'github' => GithubGenerator(apiKey, variant: variant),
      'ollama' => OllamaGenerator(baseUrl!, apiKey, variant: variant),
      'free' => FreeGenerator(),
      _ => throw ArgumentError('Unsupported model: $model'),
    };
  }
}
Default variants are defined in model_variants.dart:10:
class ModelVariants {
  static const String openaiDefault = 'gpt-4o';
  static const String claudeDefault = 'claude-sonnet-4-5-20250929';
  static const String geminiDefault = 'gemini-2.0-flash';
  static const String grokDefault = 'grok-2-latest';
  static const String llamaDefault = 'llama-3-70b-instruct';
  static const String deepseekDefault = 'deepseek-chat';
  static const String githubDefault = 'gpt-4o';
  static const String ollamaDefault = 'llama3.2:latest';
}

Best Practices

Try Multiple Models

Different models have different strengths. Try a few to find your favorite.

Use Variants Wisely

Use faster/cheaper variants for simple commits, powerful ones for complex changes.

Go Local for Privacy

Use Ollama if you’re working with sensitive code.

Set Sensible Defaults

Configure your preferred model as default to save time.

Interactive Confirmation

Try different models during the confirmation workflow

API Key Management

Learn how to manage API keys for each provider

Configuration

Set default model and other preferences
