## Overview
- Type: Cloud provider
- Cost: Free tier available, pay-per-use for higher usage (see pricing)
- API Key Required: Yes
- Installation Required: No
- Official Website: https://ai.google.dev/
## Prerequisites

1. **Access Google AI Studio**: Go to Google AI Studio and sign in with your Google account.
2. **Generate an API key**: Navigate to Get API key and create a new API key. Copy it for use in AI Providers.
Google Gemini offers a generous free tier with rate limits suitable for personal use and development.
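Before configuring AI Providers, you can sanity-check the new key. A minimal stdlib sketch, assuming the key is exported as `GEMINI_API_KEY` and that the public v1beta ListModels route is available (the helper names here are illustrative, not part of AI Providers):

```python
import json
import os
import urllib.error
import urllib.request

def list_models_url(api_key: str) -> str:
    """Build the v1beta ListModels URL (assumed route) used to check a key."""
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={api_key}"

def key_is_valid(api_key: str) -> bool:
    """Return True if the key can list models; False on an HTTP error."""
    try:
        with urllib.request.urlopen(list_models_url(api_key), timeout=10) as resp:
            return "models" in json.load(resp)
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    key = os.environ.get("GEMINI_API_KEY", "")
    print("key works" if key and key_is_valid(key) else "set GEMINI_API_KEY first")
```

A 400/403 response here usually means the key was mistyped, restricted, or deleted.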
## Setup in AI Providers

1. **Select Google Gemini provider**: In the AI Providers settings, click Create AI provider and select Google Gemini as the provider type.
2. **Enter API key**: Paste your API key from the API keys page into the API key field.
3. **Select model**: Click the refresh button to fetch available models, then select your preferred model (e.g., `gemini-1.5-flash`).

## Recommended Models
| Model | Context Window | Description | Best For |
|---|---|---|---|
| `gemini-2.0-flash-exp` | 1M tokens | Latest experimental flash model | Fast, experimental features |
| `gemini-1.5-flash` | 1M tokens | Fast and efficient | Most tasks, great value |
| `gemini-1.5-flash-8b` | 1M tokens | Smaller, faster variant | Simple tasks, high volume |
| `gemini-1.5-pro` | 2M tokens | Highest quality | Complex reasoning, analysis |
| `gemini-exp-1206` | 2M tokens | Experimental with thinking | Advanced reasoning |
Gemini models have some of the largest context windows available (up to 2 million tokens), making them excellent for processing large documents.
## Key Features

### Massive Context Windows

Gemini 1.5 models support 1-2 million token context windows, letting you:

- Process entire codebases
- Analyze multiple long documents simultaneously
### Multimodal Capabilities

All Gemini models support:

- Text generation
- Vision and image understanding
- Code generation and analysis
### Free Tier

Google offers a generous free tier:

- 15 requests per minute (RPM)
- 1 million tokens per minute (TPM)
- 1,500 requests per day (RPD)

These limits are well suited for:

- Personal projects
- Development and testing
- Small-scale applications
## Troubleshooting

### API Key Issues

If your API key isn’t working:

- Verify you created the key in Google AI Studio
- Check that the key hasn’t been restricted or deleted
- Ensure you’re using the correct endpoint URL
### Rate Limit Exceeded

If you hit rate limits on the free tier:

- Wait for the per-minute or daily quota to reset
- Reduce the frequency of your requests
- Consider upgrading to paid usage for higher limits

### Model Not Available

If a model doesn’t appear:

- Click the refresh button in AI Providers
- Some models may be experimental and require special access
- Check Google AI documentation for model availability
### Context Length Errors

If you exceed context limits:

- Gemini 1.5 Flash supports up to 1M tokens
- Gemini 1.5 Pro supports up to 2M tokens
- Break very large inputs into chunks if needed
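A rough sketch of such chunking, using the common ~4-characters-per-token heuristic for English text (the real tokenizer differs, so treat the estimate as approximate and leave headroom):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Naively split text into pieces whose estimated size fits max_tokens.

    Splits at fixed character offsets; production code would prefer to
    break on paragraph or sentence boundaries instead.
    """
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)] or [""]
```

With a 1M-token window, chunking is rarely needed; it mainly matters when feeding multi-document corpora to Flash models.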
## Pricing Considerations

For paid usage, Google Gemini offers competitive pricing.

Gemini 1.5 Flash:

- Extremely cost-effective
- Great for high-volume applications
- Input and output tokens charged separately

Gemini 1.5 Pro:

- Higher quality, slightly higher cost
- Still competitive with other premium models
- Best value for complex tasks

To keep costs down:

- Use Flash for most tasks
- Reserve Pro for complex reasoning
- Use the free tier for development
- Monitor usage in Google Cloud Console
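Because input and output tokens are billed separately, a small helper makes it easy to compare models before committing. The per-million-token rates in the example call are placeholders, not Google's actual prices; check the official pricing page for current numbers:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Estimate request cost in dollars, given per-million-token rates."""
    return (input_tokens * input_rate_per_m +
            output_tokens * output_rate_per_m) / 1_000_000

# Hypothetical rates for illustration only -- not actual Gemini pricing.
flash_cost = estimate_cost(100_000, 2_000,
                           input_rate_per_m=0.10, output_rate_per_m=0.40)
```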
## Advanced Configuration

### Response Configuration

Customize model behavior with these parameters:

- `temperature` - Control randomness (0.0-2.0)
- `top_p` - Nucleus sampling parameter
- `top_k` - Top-k sampling parameter
- `max_tokens` - Maximum response length
### Safety Settings

Gemini has built-in safety filters. If responses are blocked:

- Review your prompts for potentially sensitive content
- Adjust safety settings in your API calls if needed
- Check the safety documentation
### System Instructions

Gemini supports system instructions (similar to system prompts) to:

- Define the model’s behavior
- Set context and constraints
- Specify output formats
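Through the OpenAI-compatible endpoint, a system instruction maps onto the familiar `system` role in the messages array. A sketch (the instruction text is just an example):

```python
def with_system_instruction(instruction: str, user_prompt: str) -> list[dict]:
    """Build a messages list where the system entry carries the instruction."""
    return [
        {"role": "system", "content": instruction},
        {"role": "user", "content": user_prompt},
    ]

messages = with_system_instruction(
    "Answer in valid JSON only.",   # defines behavior and output format
    "List three primary colors.",
)
```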
## Best Practices
- Leverage the context window: Gemini can handle extremely long inputs
- Start with Flash: Use Flash model for most tasks, upgrade to Pro when needed
- Use the free tier: Great for development and personal use
- Monitor quotas: Check your usage in Google AI Studio
- Optimize prompts: Clear, specific prompts get better results
## OpenAI Compatibility

Google Gemini provides an OpenAI-compatible endpoint, which is what AI Providers uses. This means:

- Similar API structure to OpenAI
- Easy migration from/to OpenAI
- Compatible with OpenAI-based tools

The endpoint `https://generativelanguage.googleapis.com/v1beta/openai` provides OpenAI-compatible access to Gemini models.
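Any OpenAI-style client can target Gemini by pointing at this base URL. A minimal stdlib sketch, assuming the key is in `GEMINI_API_KEY` and travels in the OpenAI-style `Authorization` header; the request is only sent when a key is actually set:

```python
import json
import os
import urllib.request

BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"

def chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # OpenAI-style bearer auth
        },
    )

if __name__ == "__main__":
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        req = chat_request(key, "gemini-1.5-flash", "Say hello.")
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

The same base URL and header shape should work with the official `openai` client libraries by overriding their base URL.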