Page Assist supports multiple AI providers to give you flexibility in choosing the right model for your needs. You can run models locally on your machine or connect to cloud-based API services.

Supported Providers

Local Providers

Run AI models directly on your machine without sending data to external services:
  • Ollama - Primary local provider with easy model management
  • LM Studio - Desktop application for running LLMs locally
  • Chrome AI (Gemini Nano) - Built-in browser AI (Chrome Canary)
  • LLaMA.cpp - Efficient C++ implementation for running LLaMA models
  • Llamafile - Single-file executables for LLaMA models
  • vLLM - High-performance inference server

Cloud Providers

Connect to OpenAI-compatible API endpoints:
  • OpenAI - GPT models from OpenAI
  • Anthropic (Claude) - Claude models
  • Google AI - Gemini models
  • Groq - Fast inference API
  • DeepSeek - DeepSeek models
  • Fireworks - Fast inference platform
  • Together AI - Open source model hosting
  • OpenRouter - Unified API for multiple providers
  • Mistral - Mistral AI models
  • xAI - Grok models
  • Many more OpenAI-compatible providers
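What makes these providers interchangeable is that they share the OpenAI-style chat completions interface: only the base URL, API key, and model name differ between them. A minimal sketch of that shared request shape (the endpoint and model names below are just examples, not anything Page Assist prescribes):

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str) -> dict:
    """Build the pieces of an OpenAI-style chat completions call.

    The same request shape works against any OpenAI-compatible
    endpoint; only the base URL, API key, and model name change.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Content-Type": "application/json",
            # "Authorization": "Bearer <your-api-key>",  # required by cloud providers
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

# The same builder targets a cloud provider or a local server:
req = build_chat_request("https://api.openai.com/v1", "gpt-4o-mini", "Hello!")
print(req["url"])  # https://api.openai.com/v1/chat/completions
```

Swapping `base_url` for a local server (e.g. `http://localhost:11434/v1` for Ollama's OpenAI-compatible endpoint) leaves everything else unchanged, which is why one "OpenAI Compatible API" settings tab can cover so many services.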

Choosing a Provider

Privacy & Control

If privacy is your priority, use local providers like Ollama or LM Studio. Your data never leaves your machine.

Performance

For the fastest responses, cloud providers like Groq or OpenAI offer excellent performance. Local providers depend on your hardware.

Cost

Local providers are free but require computational resources. Cloud providers charge per token but need no local hardware.

Model Selection

Different providers offer different models:
  • Local providers: Access to open-source models (Llama, Mistral, Phi, etc.)
  • Cloud providers: Access to proprietary models (GPT-4, Claude, Gemini, etc.)

Default Configuration

Page Assist comes preconfigured with Ollama as the default provider. If Ollama is running on http://127.0.0.1:11434, Page Assist will automatically detect it.
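Detection amounts to probing that default address: a running Ollama server answers a plain GET on its root endpoint. A rough sketch of the idea (this is illustrative, not Page Assist's actual detection code):

```python
import urllib.request
from urllib.error import URLError

OLLAMA_DEFAULT = "http://127.0.0.1:11434"

def detect_ollama(base_url: str = OLLAMA_DEFAULT, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    Ollama's root endpoint responds 200 ("Ollama is running"),
    so a plain GET with a short timeout is enough to detect it.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

If Ollama listens on a non-default host or port, the address can be changed in Page Assist's settings; the probe itself stays the same.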

Multiple Provider Support

You can configure multiple providers simultaneously and switch between them:
  1. Navigate to Settings
  2. Go to “OpenAI Compatible API” tab
  3. Click “Add Provider”
  4. Select your provider or choose “Custom” for unlisted services
  5. Enter the required configuration (URL, API key, etc.)
  6. Save your configuration
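The configuration collected in steps 4–5 boils down to the same few fields for every OpenAI-compatible provider. The field names below are illustrative, not Page Assist's exact storage format:

```python
# Illustrative provider entries -- the exact fields Page Assist stores
# may differ, but every OpenAI-compatible provider needs at least these.
providers = [
    {
        "name": "Groq",
        "base_url": "https://api.groq.com/openai/v1",
        "api_key": "<your-groq-api-key>",
    },
    {
        "name": "Custom",                        # unlisted / self-hosted service
        "base_url": "http://localhost:8000/v1",  # e.g. a local vLLM server
        "api_key": "",                           # often empty for local servers
    },
]
```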

Model Management

Page Assist automatically fetches available models from:
  • Ollama instances
  • LM Studio
  • LLaMA.cpp servers
  • Llamafile instances
  • vLLM servers
For other providers, you may need to manually add model names.
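Two response shapes are in play here: Ollama's native model list (`GET /api/tags`, which returns a `models` array with `name` fields) and the OpenAI-style list (`GET /v1/models`, which returns a `data` array with `id` fields). A small sketch of parsing each, using abbreviated sample payloads:

```python
import json

def model_names(payload: str, style: str) -> list[str]:
    """Extract model names from a provider's model-list response.

    style="ollama": GET /api/tags returns {"models": [{"name": ...}, ...]}
    style="openai": GET /v1/models returns {"data": [{"id": ...}, ...]}
    """
    data = json.loads(payload)
    if style == "ollama":
        return [m["name"] for m in data["models"]]
    return [m["id"] for m in data["data"]]

# Abbreviated sample responses (real payloads carry extra metadata):
ollama_resp = '{"models": [{"name": "llama3.2:3b"}, {"name": "mistral:7b"}]}'
openai_resp = '{"data": [{"id": "gpt-4o"}, {"id": "gpt-4o-mini"}]}'
print(model_names(ollama_resp, "ollama"))  # ['llama3.2:3b', 'mistral:7b']
print(model_names(openai_resp, "openai"))  # ['gpt-4o', 'gpt-4o-mini']
```

Providers that expose neither endpoint are the ones where model names must be entered by hand.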

Next Steps

Explore the setup guide for each provider you plan to use.
