Available Models
GitWhisper supports the latest Gemini models:

Gemini 2.5 Series (Latest)
The most advanced Gemini models with thinking capabilities:

- gemini-2.5-pro - Advanced reasoning with thinking mode
- gemini-2.5-flash - Faster variant updated Sep 2025
- gemini-2.5-flash-lite - Most cost-efficient option
- gemini-2.5-flash-image - Specialized for image generation
- gemini-2.5-computer-use - Agent interaction capabilities
Gemini 2.0 Series
- gemini-2.0-flash ⭐ (default) - Optimized for speed and performance
- gemini-2.0-flash-lite - Lowest latency option
Gemini 1.5 Series
Previous generation with proven reliability:

- gemini-1.5-pro-002 - Supports up to 2M tokens
- gemini-1.5-flash-002 - Supports up to 1M tokens
- gemini-1.5-flash-8b - Most cost-effective option
The default model gemini-2.0-flash provides excellent performance for most use cases with fast response times.
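The default can presumably be recorded in the GitWhisper config file. A hedged sketch of ~/.git_whisper.yaml, assuming it accepts `provider` and `model` keys (the key names are assumptions; check the Configuration guide):

```yaml
# ~/.git_whisper.yaml -- key names are assumptions, not confirmed by this page.
provider: gemini
model: gemini-2.0-flash
```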
Key Features
Massive Context Windows
Gemini models stand out with industry-leading context sizes:

- Gemini 1.5 Pro: Up to 2 million tokens
- Gemini 1.5 Flash: Up to 1 million tokens
- Gemini 2.0/2.5: Up to 1 million tokens

These large windows let you:

- Analyze extremely large diffs
- Process entire repositories
- Understand complex multi-file changes
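To gauge whether a change fits comfortably in a model's window, you can roughly estimate the token count of the staged diff. A minimal sketch using the common ~4 characters per token heuristic (run inside a Git repository; this is an approximation, not an exact tokenizer):

```shell
# Rough token estimate for the staged diff (~4 characters per token is a
# common heuristic for English text and code, not an exact count).
chars=$(git diff --cached | wc -c)
echo "staged diff: approx. $((chars / 4)) tokens"
```

Even a diff in the hundreds of thousands of tokens fits well inside a 1M-token window.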
Thinking Mode (2.5 Pro)
gemini-2.5-pro includes advanced reasoning with thinking mode:

- Extended analysis of code changes
- Deeper understanding of implications
- More thorough commit message generation
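For a change whose implications are subtle, you might select the thinking-capable model explicitly. A hedged sketch (both the `git-whisper` executable name and the `--model` flag are assumptions; consult your version's help output):

```shell
# Hypothetical: opt into the thinking-capable model for a tricky change.
git-whisper --model gemini-2.5-pro
```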
Cost Efficiency
Gemini models are among the most cost-effective options:

- Competitive pricing per token
- Large free tier available
- Excellent value for performance
Usage
Basic Usage
Generate commit messages with Gemini by running GitWhisper in a repository with staged changes.

Specific Variant
Choose a specific Gemini model when you need particular capabilities.

Set as Default
Make Gemini your default model so every run uses it.

API Key Setup
You need a Google AI API key to use Gemini models. Get one at makersuite.google.com/app/apikey. You can save the key permanently in the config file, export it as an environment variable, or pass it on the command line.
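The command examples for this section appear to have been lost in extraction. A hedged reconstruction, assuming the executable is `git-whisper` with `--model` and `--api-key` flags and a `GOOGLE_API_KEY` environment variable (all of these names are assumptions; consult `git-whisper --help` for the real interface):

```shell
# Basic usage: generate a commit message for the currently staged changes.
git-whisper

# Specific variant: pick a Gemini model for this run only.
git-whisper --model gemini-2.5-flash

# API key: export once per session, or pass it inline.
export GOOGLE_API_KEY="your-key-here"
git-whisper --api-key "$GOOGLE_API_KEY"
```

To set a default model permanently, record it in ~/.git_whisper.yaml rather than passing it on every invocation.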
GitWhisper stores its settings in ~/.git_whisper.yaml.

Model Comparison
Gemini 2.5 Series
| Model | Capabilities | Speed | Context | Best For |
|---|---|---|---|---|
| 2.5 Pro | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | 1M tokens | Complex reasoning |
| 2.5 Flash | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | 1M tokens | Balanced use |
| 2.5 Flash Lite | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | 1M tokens | Cost efficiency |
Use Cases
Large Codebase Analysis
Gemini 1.5 Pro with its 2M token context is perfect for:

- Repository-wide refactoring
- Large-scale architectural changes
- Multi-package updates
Fast Daily Commits
Gemini 2.0 Flash provides excellent speed:

- Rapid iteration during development
- Small to medium changes
- Cost-effective for high volume
Complex Reasoning
Gemini 2.5 Pro with thinking mode excels at:

- Subtle bug fixes
- Complex business logic changes
- Algorithm improvements
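Mapping the scenarios above to model choices might look like this (the `git-whisper` executable name and `--model` flag are assumptions):

```shell
git-whisper --model gemini-1.5-pro-002   # repository-wide refactor: 2M-token context
git-whisper --model gemini-2.0-flash     # everyday commits: fast and cheap (default)
git-whisper --model gemini-2.5-pro       # subtle logic changes: thinking mode
```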
Code Analysis
Gemini models provide comprehensive code analysis when generating commit messages.
Best Practices
Choosing the Right Model
Use Gemini 1.5 Pro when:
- Changes span dozens of files
- You need to understand entire repository context
- Working with monorepos

Use Gemini 2.5 Pro when:
- You need deep reasoning about complex logic
- Changes involve subtle implications
- You want the highest quality analysis

Use Gemini 2.0 Flash (default) when:
- Making standard daily commits
- Speed is important
- You need cost-effective processing
Optimizing Context Usage
Even with large context windows, organize your commits into focused, logically related changes rather than one sprawling diff.
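For example, staging and committing one logical unit at a time keeps each generated message focused. The Git commands below are standard; the `git-whisper` invocation and the paths are illustrative assumptions:

```shell
# Commit one logical unit at a time instead of a single sprawling diff.
git add src/payments/       # stage only the payments change (path is illustrative)
git-whisper                 # generate a message for just that unit
git add tests/payments/     # then stage the matching tests
git-whisper
```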
Using Thinking Mode
For maximum benefit from Gemini 2.5 Pro’s thinking mode:
- Use it for non-obvious changes
- Allow extra time for processing
- Review the detailed analysis
- Use for architecture decisions
Pricing
Google AI offers competitive pricing:

- Free tier: Generous quota for experimentation
- Input tokens: Charged per 1K tokens
- Output tokens: Typically lower rate
- Flash models: Most cost-effective
Troubleshooting
API Key Issues
If requests fail with authentication errors, verify that your Google AI API key is set correctly and has not been revoked.
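A quick sanity check that the key is actually visible to your shell. The variable name `GOOGLE_API_KEY` is an assumption; GitWhisper may read a different variable or only the config file:

```shell
# Check whether the (assumed) API key variable is set in this shell.
if [ -n "${GOOGLE_API_KEY:-}" ]; then
  echo "API key is set"
else
  echo "API key is missing"
fi
```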
Quota Exceeded
If you hit rate or quota limits:
- Wait for quota reset (usually daily)
- Upgrade to paid tier
- Use a different model temporarily
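"Use a different model temporarily" could look like this (executable name and flag are assumptions):

```shell
# Hypothetical fallback to the cheapest variant until quota resets.
git-whisper --model gemini-1.5-flash-8b
```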
Model Not Available
If a model is rejected, check that its name exactly matches one of the variants listed above and that your account has access to it.
Advantages Over Other Models
vs OpenAI
- Gemini: Much larger context (1-2M vs 128K)
- Gemini: More cost-effective
- OpenAI: Slightly more mature ecosystem
vs Claude
- Gemini: 10x larger context (2M vs 200K)
- Gemini: Better pricing
- Claude: Superior reasoning in some cases
vs Ollama
- Gemini: State-of-the-art capabilities
- Ollama: Complete privacy (local)
- Gemini: No hardware requirements
vs Free Model
- Gemini: Much higher quality
- Free Model: No API key needed
- Gemini: More reliable and consistent
Next Steps
- All Variants: Complete list of Gemini model variants
- OpenAI Models: Compare with OpenAI models
- Code Analysis: Deep dive into analysis features
- Configuration: API key setup guide