Overview
Jan supports connecting to major cloud AI providers through their APIs, giving you access to state-of-the-art models like GPT-4o, Claude Opus, and more - all through Jan’s unified interface. Cloud models require internet connectivity and API keys from providers. Be aware of associated costs and rate limits.
Why Use Cloud Integration?
Latest Models
Access cutting-edge models like GPT-4o, Claude Opus 4, and o3 that require massive compute.
No Hardware Limits
Run large models without worrying about RAM, GPU, or disk space.
Unified Interface
Switch between local and cloud models seamlessly in the same conversation.
Best of Both Worlds
Use local models for privacy, cloud models for capability - all in one app.
Supported Providers
Jan integrates with these cloud providers:
- OpenAI: GPT-4o, o3, o1, GPT-4 Turbo, and more
- Anthropic: Claude Opus 4, Sonnet 4, Haiku 3.5
- Groq: Ultra-fast inference for Llama, Mixtral, and other open models
- Mistral AI: Mistral Large, Medium, and specialized models
- Google: Gemini models
- Cohere: Command models
- HuggingFace: Serverless Inference API
- OpenRouter: Access to 100+ models through one API
- Any OpenAI-compatible API endpoint
Quick Start
Connect to OpenAI
Get Your API Key
- Visit OpenAI Platform
- Sign in or create an account
- Create a new API key and copy it
- Ensure your account has billing set up with credits
Configure Jan
- Open Settings in Jan
- Navigate to Model Providers > OpenAI
- Paste your API key
- Click save
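Once the key is saved, you can sanity-check it outside Jan with a minimal chat completion request. The sketch below uses only the Python standard library and builds the request without sending it, so it costs nothing to run; the model name is an example, and sending is left to you:

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request for api.openai.com."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",   # OpenAI uses Bearer auth
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(req, timeout=30) would send it and return the JSON
# reply; it is omitted here so the sketch runs without spending credits.
req = build_chat_request("sk-your-key-here", "gpt-4o", "Say hello")
```

A `401` response at this point means the key itself is the problem, not Jan's configuration.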
Connect to Anthropic
Get Your API Key
- Visit Anthropic Console
- Sign in or create an account
- Create a new API key and copy it
- Ensure your account has credits
Configure Jan
- Open Settings in Jan
- Navigate to Model Providers > Anthropic
- Paste your API key
- Click save
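Anthropic’s API differs slightly from OpenAI’s: the key goes in an `x-api-key` header rather than a Bearer token, a version header is required, and `max_tokens` is mandatory. A sketch of the request shape for the Messages API (the model name is illustrative):

```python
def build_anthropic_request(api_key: str, model: str, prompt: str) -> tuple[dict, dict]:
    """Return (headers, body) for a call to Anthropic's Messages API."""
    headers = {
        "x-api-key": api_key,                # not a Bearer token
        "anthropic-version": "2023-06-01",   # required version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 256,                   # required field on this API
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body

headers, body = build_anthropic_request("sk-ant-your-key", "claude-sonnet-4-0", "Say hello")
```

Jan handles these differences for you; the shape is only worth knowing when debugging with your own scripts.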
Provider-Specific Setup
- OpenAI
- Anthropic
- Groq
- OpenRouter
OpenAI Models
Available models include:
- GPT-4o: Most capable multimodal model
- o3/o1: Advanced reasoning models
- GPT-4 Turbo: Fast and cost-effective
- GPT-3.5 Turbo: Budget-friendly option
To add a model that isn’t listed by default:
- Go to Settings > Model Providers > OpenAI
- Click Add Model
- Enter the model ID from OpenAI’s model list
- Save and start using it
Managing Cloud Models
Switch Between Models
You can switch models mid-conversation:
- Click the model selector dropdown in the chat input
- Choose any local or cloud model
- Continue the conversation with the new model
Add Custom Cloud Models
If a specific model isn’t listed:
- Go to Settings > Model Providers > [Provider]
- Click Add Model
- Enter the model details:
- ID: Model identifier from provider’s docs
- Name: Display name in Jan
- Description: Optional notes
- Save and start using it
Enable Tool Calling
Many cloud models support tool calling for web search, code execution, and more:
- Navigate to Settings > Model Providers > [Provider]
- Find your model in the list
- Click the edit button or + icon
- Enable Tools capability
- Now the model can use MCP tools and extensions
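Under the hood, tool calling means each request carries JSON schemas describing the available tools, and the model may answer with a structured tool call instead of text. A sketch of one tool definition in the OpenAI function-calling format (the `web_search` tool itself is illustrative):

```python
# One entry of the request's "tools" array in the OpenAI function-calling
# schema; the model fills in arguments matching the "parameters" JSON schema.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
            },
            "required": ["query"],
        },
    },
}
```

Jan generates these definitions automatically from your enabled MCP tools and extensions.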
OpenAI-Compatible Endpoints
Jan supports any API that follows the OpenAI API specification.
Custom Endpoint Setup
- Go to Settings > Model Providers > OpenAI
- Enable Custom Endpoint
- Enter your endpoint URL (e.g., https://api.example.com/v1)
- Add your API key
- Configure available models
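“OpenAI-compatible” means the server exposes the same routes (`/chat/completions`, `/models`, …) under a different base URL, so the only thing that changes between providers is the prefix. A small sketch (the local host and port are examples for a self-hosted server):

```python
def chat_completions_url(base_url: str) -> str:
    """Join an OpenAI-compatible base URL to the chat completions route."""
    return base_url.rstrip("/") + "/chat/completions"

official = chat_completions_url("https://api.openai.com/v1")  # hosted API
local = chat_completions_url("http://localhost:8080/v1")      # e.g. llama.cpp server
```

The same request body and `Authorization` header work against either URL, which is why one client integration covers them all.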
Compatible Services
These services work with Jan’s OpenAI integration:
- Together AI: Fast inference for open models
- DeepSeek: Chinese AI provider
- Fireworks AI: Optimized inference platform
- Perplexity: Search-augmented models
- Self-hosted APIs: llama.cpp server, vLLM, text-generation-webui
Cost Management
Monitor Usage
Best Practices:
- Set up billing alerts in your provider’s console
- Start with cheaper models (GPT-3.5, Claude Haiku) for testing
- Use local models for routine tasks
- Reserve expensive models (GPT-4o, Claude Opus) for complex work
Cost Optimization Tips
- Use shorter system prompts: Every token counts
- Clear old conversations: Don’t send unnecessary context
- Choose appropriate models: Don’t use GPT-4o for simple tasks
- Enable streaming: Better user experience, same cost
- Consider Groq: Free tier for open-source models
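Most providers price input and output tokens separately, per million tokens, so a quick estimator helps when comparing models. The prices below are placeholders; check your provider’s current pricing page:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      in_price_per_m: float, out_price_per_m: float) -> float:
    """Estimate one request's cost from per-million-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# 10k input + 2k output tokens at $5/$15 per million (placeholder prices):
cost = estimate_cost_usd(10_000, 2_000, 5.00, 15.00)  # 0.08 USD
```

Because output tokens typically cost several times more than input tokens, trimming verbose responses often saves more than trimming prompts.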
Privacy Considerations
Data Handling
- OpenAI: Does not train on API data by default (verify current policy)
- Anthropic: Does not train on API conversations
- Groq: Check their data retention policies
- Others: Review individual provider policies
When to Use Local vs Cloud
Use Local Models For:
- Sensitive personal or business data
- Offline situations
- Cost-conscious projects with high volume
- Maximum privacy requirements

Use Cloud Models For:
- Latest capabilities and features
- Complex reasoning tasks
- Multimodal work (images, vision)
- When hardware limitations prevent local inference
Troubleshooting
API Key Issues
Symptoms: 401 Unauthorized errors
Solutions:
- Verify API key is copied correctly (no extra spaces)
- Check if key is expired or revoked
- Ensure billing is set up on provider’s account
- Confirm key has access to the specific model
Model Not Found (404)
Symptoms: Model ID errors
Solutions:
- Check the model ID matches provider’s documentation exactly
- Verify your account has access to that model
- Some models require special access (contact provider)
- Ensure API prefix is correct in settings
Rate Limit Errors
Symptoms: 429 Too Many Requests
Solutions:
- Wait a moment and try again
- Check your tier limits in provider console
- Upgrade your account tier if needed
- Spread requests over time
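“Wait and try again” is usually automated as exponential backoff: double the wait after each 429 and add a little random jitter so parallel clients don’t retry in lockstep. A minimal sketch (the `RateLimitError` class here is a stand-in for whatever your HTTP client raises on a 429):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for your HTTP client's 429 Too Many Requests error."""

def call_with_backoff(send, max_retries: int = 5):
    """Retry send() on rate limits, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise                                    # out of retries
            time.sleep(2 ** attempt + random.random())   # 1s, 2s, 4s... plus jitter
```

Many providers also return a `Retry-After` header; when present, honoring it beats guessing.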
Connection Timeouts
Symptoms: Requests hanging or timing out
Solutions:
- Check your internet connection
- Verify provider’s status page for outages
- Try a different network or disable VPN
- Check firewall/antivirus settings
Insufficient Credits
Symptoms: Billing errors
Solutions:
- Add credits to your provider account
- Set up automatic billing
- Check if free tier has expired
- Verify payment method is valid
Advanced Configuration
Custom Headers
Some providers require custom headers:
- Go to provider settings in Jan
- Look for Advanced Settings or Custom Headers
- Add required headers (e.g., organization ID, project ID)
- Save and test connection
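As a concrete example, an OpenAI account scoped to a specific organization or project sends extra headers alongside the API key. The header names below come from OpenAI’s API docs and the values are placeholders; other providers define their own header names:

```python
# Extra headers some OpenAI accounts require in addition to the key.
headers = {
    "Authorization": "Bearer sk-your-key",
    "OpenAI-Organization": "org-yourOrgId",   # scope billing to one organization
    "OpenAI-Project": "proj_yourProjectId",   # scope usage to one project
}
```

If requests work with curl but fail from Jan, a missing header like these is a common cause.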
Proxy Configuration
If you’re behind a corporate proxy:
- Go to Settings > Advanced
- Configure proxy settings:
- HTTP/HTTPS proxy URL
- Authentication if required
- Test connection with a simple request
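For that test request, it can help to reproduce the proxy setup in a short script and confirm the proxy itself is reachable. A sketch with the Python standard library (host and port are placeholders for your corporate proxy):

```python
import urllib.request

# Route both HTTP and HTTPS traffic through the corporate proxy.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.corp.example:3128",
    "https": "http://proxy.corp.example:3128",
})
opener = urllib.request.build_opener(proxy)
# opener.open("https://api.openai.com/v1/models") would now go via the proxy.
```

If the script works but Jan doesn’t, recheck Jan’s proxy fields; if neither works, the problem is the network, not the app.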
Best Practices
Start with Free Tiers
Many providers offer free credits for new accounts. Test before committing to paid usage.
Use Multiple Providers
Don’t rely on a single provider. Configure multiple options for redundancy and cost optimization.
Match Model to Task
Use appropriate models for each task:
- Simple QA: GPT-3.5, Claude Haiku
- Complex reasoning: GPT-4o, Claude Opus
- Code generation: GPT-4o, Claude Sonnet
- Fast iterations: Groq models
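If you script against Jan’s API server, this guidance can be captured as a small routing table. The model IDs below are examples; substitute whatever you have configured in Jan:

```python
# A task-to-model routing table mirroring the guidance above.
ROUTES = {
    "simple_qa": "claude-3-5-haiku-latest",
    "complex_reasoning": "gpt-4o",
    "code_generation": "claude-sonnet-4-0",
    "fast_iteration": "llama-3.3-70b-versatile",  # served by Groq
}

def pick_model(task: str, default: str = "gpt-4o") -> str:
    """Pick the configured model for a task, with a safe fallback."""
    return ROUTES.get(task, default)
```

Routing cheap tasks to cheap models this way is usually the single biggest cost lever.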
Next Steps
Local Models
Run models locally for privacy and zero costs
Model Parameters
Fine-tune cloud model behavior
MCP Integration
Give cloud models access to external tools
API Server
Set up your own local API endpoint