Supported Providers
The agent supports the following LLM providers:

- Anthropic (Claude models)
- OpenAI (GPT models)
- Google (Gemini models)
- Ollama (local models)
- Custom (OpenAI-compatible or Ollama-compatible APIs)
Provider Configuration
Anthropic
Configure Anthropic's Claude models:

- Install the `langchain-anthropic` package
- Provide a valid API key
- Optional: specify a custom `base_url` for proxy endpoints

Supported models:

- `claude-3-opus-20240229`
- `claude-3-sonnet-20240229`
- `claude-3-haiku-20240307`
OpenAI
Configure OpenAI's GPT models:

- Install the `langchain-openai` package
- Provide a valid API key
- Optional: organization ID for enterprise accounts
- Optional: custom `base_url` for Azure OpenAI or proxy endpoints

Supported models:

- `gpt-4`
- `gpt-4-turbo`
- `gpt-3.5-turbo`
Google (Gemini)
Configure Google's Gemini models:

- Install the `langchain-google-genai` package
- Provide a valid Google API key

Supported models:

- `gemini-2.5-pro`
- `gemini-1.5-pro`
- `gemini-1.0-pro`
Ollama (Default Provider)
Ollama is the default provider for local model deployment.

Custom Providers
The agent supports custom providers that implement OpenAI-compatible or Ollama-compatible APIs.

OpenAI-Compatible APIs:
Set `openai_compatible: True` in `provider_config` to use the OpenAI client for endpoints that support the `/v1/chat/completions` format.
Ollama-Compatible APIs:
Leave `openai_compatible: False` (the default) for Ollama-style endpoints.
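Based on the fields documented on this page (`provider_type`, `model_id`, `base_url`, `temperature`, `openai_compatible`), a custom provider configuration might look like the following sketch; the exact schema may differ, and the model name is a placeholder:

```python
# Hypothetical provider_config for a custom OpenAI-compatible endpoint.
# Keys are taken from this page; "my-local-model" is an illustrative name.
provider_config = {
    "provider_type": "custom",
    "model_id": "my-local-model",
    "base_url": "http://localhost:8000/v1",  # serves /v1/chat/completions
    "temperature": 0.33,                     # documented default
    "openai_compatible": True,               # use the OpenAI client
}
```

For an Ollama-style endpoint, omit `openai_compatible` (or set it to `False`).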
Default LLM Settings
The agent uses these default settings, defined in `src/copilot/llm_factory.py:42-43`:
Temperature
Controls randomness in model responses:

- 0.0: deterministic, focused responses
- 0.33: balanced (default)
- 1.0: more creative, varied responses
Max Retries
Number of retry attempts for failed API calls. Default is 2.
Environment Variables
Configure default settings using environment variables.

Factory Function
The `create_llm_from_provider` function (defined in `src/copilot/llm_factory.py:46`) creates LLM instances.
Error Handling
The factory function raises `ValueError` in the following cases:

- Missing dependencies
- Missing API key
- Missing base URL (custom provider)
- Unsupported provider
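A simplified sketch of this validation logic. The real `create_llm_from_provider` constructs a LangChain client after these checks (and also raises `ValueError` when the provider package is not installed); the helper names and return shape below are illustrative only:

```python
DEFAULT_TEMPERATURE = 0.33  # documented default
DEFAULT_MAX_RETRIES = 2     # documented default

def create_llm_from_provider(config: dict):
    """Illustrative sketch: validate the config, then build an LLM client."""
    provider = config.get("provider_type")
    if provider not in {"anthropic", "openai", "google", "ollama", "custom"}:
        raise ValueError(f"Unsupported provider: {provider}")
    if provider in {"anthropic", "openai", "google"} and not config.get("api_key"):
        raise ValueError(f"Missing API key for provider: {provider}")
    if provider == "custom" and not config.get("base_url"):
        raise ValueError("Missing base_url for custom provider")
    # The real factory would import the provider package here (raising
    # ValueError if it is missing) and return the LangChain client instance.
    return {
        "provider": provider,
        "temperature": config.get("temperature", DEFAULT_TEMPERATURE),
        "max_retries": config.get("max_retries", DEFAULT_MAX_RETRIES),
    }
```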
Installation Requirements
Install the required package for your chosen provider (`langchain-anthropic`, `langchain-openai`, or `langchain-google-genai`).

Caching and Performance
The agent caches LLM instances for performance (see `src/copilot/graph.py:85-86`). The cache key includes:

- `provider_type`
- `model_id`
- `base_url`
- `temperature`
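A minimal sketch of how caching keyed on these four fields could work; the actual implementation in `src/copilot/graph.py` may differ, and the placeholder `object()` stands in for the factory call:

```python
# Cache of LLM instances keyed on the configuration fields listed above.
_llm_cache: dict = {}

def get_cached_llm(provider_type: str, model_id: str, base_url, temperature: float):
    """Return the cached LLM for this configuration, creating it only once."""
    key = (provider_type, model_id, base_url, temperature)
    if key not in _llm_cache:
        # In the real code this would call the factory, e.g.
        # create_llm_from_provider(...); object() is a stand-in.
        _llm_cache[key] = object()
    return _llm_cache[key]
```

Because the key includes `temperature`, changing any of the four fields produces a distinct cached instance.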
Setting LLM Configuration
Use `set_llm_from_config` (defined in `src/copilot/graph.py:89`) before invoking the agent.
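A hypothetical usage sketch, assuming `set_llm_from_config` accepts a provider config dict like the one documented above (the model name and import path are assumptions; the call itself is shown commented out):

```python
# Illustrative config using the fields documented on this page; "llama3"
# is a placeholder model name for the default Ollama provider.
llm_config = {
    "provider_type": "ollama",
    "model_id": "llama3",
    "base_url": "http://localhost:11434",
    "temperature": 0.33,
}

# Assumed usage, per src/copilot/graph.py:89 on this page:
# from copilot.graph import set_llm_from_config
# set_llm_from_config(llm_config)  # call this before invoking the agent
```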