Cluely supports multiple AI providers, each with unique advantages. Choose based on your priorities: privacy, cost, performance, or specialized capabilities.

Available Providers

Google Gemini

Latest AI with vision capabilities and fastest responses

Ollama

100% private local AI - data never leaves your computer

OpenRouter

Access multiple AI models through a unified API

K2 Think V2

High-reasoning AI with advanced problem-solving capabilities

Quick Comparison

Feature     Gemini      Ollama       OpenRouter       K2 Think V2
Privacy     Cloud       100% Local   Cloud            Cloud
Cost        API costs   Free         API costs        API costs
Speed       Very Fast   Fast         Fast             Moderate
Offline     No          Yes          No               No
Vision      Native      Limited      Model-dependent  OCR-based
Reasoning   Excellent   Good         Varies           Advanced
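As a rough rule of thumb, the comparison table can be encoded as a small lookup. This is a sketch only: the `Priority` and `Provider` names below are illustrative and are not part of Cluely's API.

```typescript
// Hypothetical helper: map a user priority to a provider,
// following the comparison table above. Illustrative only.
type Priority = "privacy" | "cost" | "speed" | "vision" | "reasoning";
type Provider = "gemini" | "ollama" | "openrouter" | "k2think";

function recommendProvider(priority: Priority): Provider {
  switch (priority) {
    case "privacy":
    case "cost":
      return "ollama"; // 100% local, zero API costs
    case "speed":
    case "vision":
      return "gemini"; // fastest responses, native vision
    case "reasoning":
      return "k2think"; // advanced problem-solving
  }
}

console.log(recommendProvider("privacy")); // ollama
```

OpenRouter does not appear in this sketch because its strengths (model flexibility) are orthogonal to a single-priority lookup.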

When to Use Each Provider

Choose Gemini If:

  • You need the fastest, most accurate responses
  • Vision capabilities (image analysis) are essential
  • You’re comfortable with cloud-based processing
  • You want the latest AI technology

Choose Ollama If:

  • Privacy is your top priority
  • You want zero API costs
  • You need offline functionality
  • You have sufficient local compute resources (8GB+ RAM recommended)

Choose OpenRouter If:

  • You want flexibility to switch between multiple models
  • You prefer a unified API for different providers
  • You want to experiment with various AI models

Choose K2 Think V2 If:

  • You need advanced reasoning capabilities
  • Complex problem-solving is your primary use case
  • You’re working on high-level analytical tasks

Provider Capabilities

  • Gemini: Native vision capabilities with direct image processing
  • Ollama: Limited - text extraction only
  • OpenRouter: Depends on selected model (e.g., GPT-4 Vision)
  • K2 Think V2: Uses free local OCR (Tesseract.js) to extract text from images

Switching Providers

Cluely allows you to switch between providers at runtime:
// Switch to Gemini
await llmHelper.switchToGemini(apiKey, model)

// Switch to Ollama
await llmHelper.switchToOllama(model, url)

// Switch to OpenRouter
await llmHelper.switchToOpenRouter(apiKey, model)

// Switch to K2 Think V2
await llmHelper.switchToK2Think(apiKey, model)
When switching providers, ensure the target provider is properly configured with API keys or local services running.
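One way to handle a misconfigured provider is to fall back to local Ollama when a cloud switch fails. The sketch below assumes the `switchTo*` methods reject on failure; `LLMHelper` here is a minimal stand-in interface, and the model names and URL are common defaults, not values mandated by Cluely.

```typescript
// Minimal stand-in for the helper shown above; Cluely's real helper may differ.
interface LLMHelper {
  switchToGemini(apiKey: string, model: string): Promise<void>;
  switchToOllama(model: string, url: string): Promise<void>;
}

// Try cloud-based Gemini first; fall back to local Ollama if the switch
// fails (e.g., missing API key or network error). Returns the active provider.
async function switchWithFallback(
  helper: LLMHelper,
  apiKey: string
): Promise<"gemini" | "ollama"> {
  try {
    await helper.switchToGemini(apiKey, "gemini-1.5-flash");
    return "gemini";
  } catch {
    // 11434 is Ollama's default local port
    await helper.switchToOllama("llama3", "http://localhost:11434");
    return "ollama";
  }
}
```

A fallback like this keeps the app usable offline, at the cost of the slower, text-only capabilities noted in the comparison table.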

Cost Considerations

Free Options

  • Ollama: Completely free, runs locally
  • Gemini: Free tier available (limited requests)

Paid Options

  • Gemini: Pay-per-use pricing beyond the free tier (see Google AI Studio)
  • OpenRouter: Varies by model selected
  • K2 Think V2: API pricing applies

Next Steps

Set Up Gemini

Configure Google Gemini for cloud-based AI

Set Up Ollama

Install and configure local AI with Ollama
