Supported Providers
- Anthropic - Claude models (3.5 Sonnet, 3.7 Sonnet, Opus, Haiku)
- OpenAI - GPT models (GPT-4o, GPT-4o-mini, o1, o3-mini)
- Google AI - Gemini models (Gemini 2.0, Gemini 1.5 Pro)
- xAI - Grok models
- Zed Cloud - Proxied access to various models
- Ollama - Local open-source models
- LM Studio - Local model hosting
- DeepSeek - DeepSeek models
- Mistral - Mistral models
- OpenRouter - Access to multiple providers
- Vercel AI - Vercel AI SDK compatible models
- AWS Bedrock - Amazon Bedrock models
- Custom OpenAI-compatible APIs
Quick Start: Zed Cloud (Easiest)
Zed Cloud provides the simplest setup - no API keys required:
- Sign in to Zed with your GitHub account
- Open Settings (`Cmd+,`) → Language Models
- Zed Cloud should be automatically configured
- Select a model and start using AI features
Available models include:
- Claude 3.5 Sonnet
- Claude 3.5 Haiku
- GPT-4o
- GPT-4o-mini
- Gemini 1.5 Pro
Limitations:
- Rate limits based on your Zed subscription
- Requires an internet connection
- Limited model selection
Anthropic (Claude)
Getting an API Key
- Sign up at console.anthropic.com
- Navigate to API Keys
- Create a new API key
- Copy the key (starts with `sk-ant-`)
Configuration in Zed
- Open Settings → Language Models
- Under “Anthropic”, click “Authenticate”
- Paste your API key
- Select a default model
Settings Example
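A minimal settings.json sketch, assuming Zed's `assistant` schema for choosing a default model (key names may differ across Zed versions; check your version's docs):

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      // provider/model names below follow Anthropic's published model IDs
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022"
    }
  }
}
```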
Available models:
- `claude-3-5-sonnet-20241022` - Best balance of intelligence and speed
- `claude-3-7-sonnet-20250219` - Extended thinking for complex tasks
- `claude-3-5-haiku-20241022` - Fast and economical
Prompt Caching
Anthropic supports prompt caching, which reduces costs when the same context is resent across requests.
OpenAI
Getting an API Key
- Sign up at platform.openai.com
- Navigate to API Keys
- Create a new secret key
- Copy the key (starts with `sk-`)
Configuration in Zed
- Open Settings → Language Models
- Under “OpenAI”, click “Authenticate”
- Paste your API key
- Select a default model
Settings Example
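A settings.json sketch along the same lines, assuming the `assistant` default-model schema (verify the exact keys against your Zed version):

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "openai",
      "model": "gpt-4o"
    }
  }
}
```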
Available models:
- `gpt-4o` - Latest and most capable
- `gpt-4o-mini` - Fast and cost-effective
- `o1` - Reasoning-focused for complex problems
Google AI (Gemini)
Getting an API Key
- Sign up at aistudio.google.com
- Click “Get API key”
- Copy the generated key
Configuration in Zed
- Open Settings → Language Models
- Under “Google AI”, click “Authenticate”
- Paste your API key
Settings Example
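A hedged settings.json sketch; the provider identifier (`google` here) is an assumption, so confirm it against your Zed version's documentation:

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      // "google" is assumed to be Zed's identifier for Google AI
      "provider": "google",
      "model": "gemini-1.5-pro-002"
    }
  }
}
```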
Available models:
- `gemini-2.0-flash-exp` - Fast multimodal model
- `gemini-1.5-pro-002` - Extremely large context window (2M tokens)
Ollama (Local Models)
Installation
- Download Ollama from ollama.ai
- Install the application
- Pull a model: `ollama pull qwen2.5-coder:32b`
- Start Ollama (it usually starts automatically)
Configuration in Zed
Ollama is auto-discovered on `http://localhost:11434`:
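If you need to point Zed at a non-default address, a settings.json sketch (assuming the `language_models.ollama` schema; `api_url` shown is Ollama's default, so this block is only needed when overriding it):

```json
{
  "language_models": {
    "ollama": {
      // default Ollama endpoint; change host/port if Ollama runs elsewhere
      "api_url": "http://localhost:11434"
    }
  },
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "ollama",
      "model": "qwen2.5-coder:32b"
    }
  }
}
```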
Recommended models:
- `qwen2.5-coder:32b` - Excellent for code generation
- `deepseek-r1:32b` - Strong reasoning capabilities
- `llama3.2:3b` - Fast and lightweight
Benefits:
- Complete privacy (runs locally)
- No API costs
- No rate limits
- Works offline
Hardware requirements:
- Powerful GPU recommended (32B models need ~20GB of VRAM)
- Or use a CPU with sufficient RAM (slower)
Custom OpenAI-Compatible APIs
Connect to any OpenAI-compatible API:
- LM Studio
- Text Generation Web UI
- vLLM
- LocalAI
- Any OpenAI-compatible inference server
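One way to wire this up is to override the OpenAI provider's endpoint in settings.json. This is a sketch: the port `1234` is LM Studio's default, and the model name is a hypothetical placeholder for whatever your server exposes:

```json
{
  "language_models": {
    "openai": {
      // point the OpenAI provider at a local OpenAI-compatible server
      // (1234 is LM Studio's default port; adjust for your server)
      "api_url": "http://localhost:1234/v1",
      "available_models": [
        {
          // hypothetical model name - use the ID your server reports
          "name": "my-local-model",
          "max_tokens": 32768
        }
      ]
    }
  }
}
```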
AWS Bedrock
Prerequisites
- AWS account with Bedrock access
- AWS CLI installed and configured
- Model access enabled in Bedrock console
Configuration
Authentication methods:
- `named_profile` - Use AWS CLI profiles
- `sso` - AWS SSO
- `api_key` - Access key and secret
- `default` - Environment variables or instance role
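An illustrative settings.json fragment for the `named_profile` method. The key names (`authentication_method`, `profile`, `region`) and the profile name are assumptions here, not confirmed Zed schema; check Zed's Bedrock documentation before using:

```json
{
  "language_models": {
    "bedrock": {
      // assumed key names - verify against Zed's Bedrock docs
      "authentication_method": "named_profile",
      "profile": "bedrock-profile",
      "region": "us-east-1"
    }
  }
}
```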
OpenRouter
Access multiple providers through a single API.
Getting an API Key
- Sign up at openrouter.ai
- Navigate to Keys
- Create a new key
Configuration
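A sketch of selecting an OpenRouter model as the default, assuming `openrouter` is the provider identifier in Zed's schema (an assumption; the model slug follows OpenRouter's `vendor/model` naming):

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      // "openrouter" as provider ID is an assumption - confirm in Zed's docs
      "provider": "openrouter",
      "model": "anthropic/claude-3.5-sonnet"
    }
  }
}
```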
Benefits:
- Single API key for multiple providers
- Automatic fallback between providers
- Pay-as-you-go pricing
Comparing Providers
| Provider | Best For | Privacy | Cost | Offline |
|---|---|---|---|---|
| Zed Cloud | Easiest setup | Moderate | Included | No |
| Anthropic | Quality & reasoning | Low | $$$ | No |
| OpenAI | General purpose | Low | $$$ | No |
| Google AI | Large context | Low | $$ | No |
| Ollama | Privacy & offline | High | Free | Yes |
| Bedrock | Enterprise | High | $$$ | No |
| OpenRouter | Flexibility | Low | $$ | No |
Troubleshooting
“Missing API key” error
- Check Settings → Language Models
- Click “Authenticate” for your provider
- Ensure the key is correct (no extra spaces)
- Try generating a new API key
“Rate limit exceeded”
- Wait before retrying
- Check your provider’s usage limits
- Consider upgrading your account
- Use a different provider
“Model not found”
- Verify the model name exactly matches the provider’s API
- Check that you have access to the model
- For Bedrock, ensure model access is enabled
Ollama not connecting
- Ensure Ollama is running: `ollama list`
- Check the API URL: `http://localhost:11434`
- Try pulling the model: `ollama pull <model-name>`
- Check firewall settings
Slow responses
- For local models: upgrade hardware or use smaller models
- For API providers: check internet connection
- Try a different model (e.g., mini variants)
Best Practices
- Start with Zed Cloud for quick setup
- Keep API keys secure - never commit them to git
- Use environment variables for shared configurations
- Monitor costs on paid providers
- Try local models for privacy-sensitive projects
- Use different providers for different tasks
- Cache aggressively with Anthropic to reduce costs
