## Setup
- Get an API key from console.mistral.ai.
- Set the environment variable:
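For example, in a POSIX shell (the key value is a placeholder):

```shell
# Replace the placeholder with your key from console.mistral.ai
export MISTRAL_API_KEY="your-api-key-here"
```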
## Configuration
- Inline
- Named model
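The exact config-file schema isn't shown here; as a sketch, assuming a YAML config where a model can either be referenced inline as `provider/model` or defined as a named entry (all field names below are hypothetical):

```yaml
# Inline: reference the provider and model directly (assumed syntax)
model: mistral/mistral-large-latest

# Named model: define a reusable entry (field names are hypothetical)
models:
  large:
    provider: mistral
    model: mistral-large-latest
```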
## Available models
| Model | Description | Context |
|---|---|---|
| mistral-large-latest | Most capable Mistral model | 128K |
| mistral-medium-latest | Balanced performance and cost | 128K |
| mistral-small-latest | Fast and cost-effective | 128K |
| codestral-latest | Optimized for code generation | 32K |
| open-mistral-nemo | Open-weight model | 128K |
| ministral-8b-latest | Compact 8B parameter model | 128K |
| ministral-3b-latest | Smallest Mistral model | 128K |
## Thinking budget
Mistral models support thinking mode with effort level strings. The default effort is `medium`.
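A sketch of setting the effort level, assuming the same config-file shape and that the accepted strings are `low`, `medium`, and `high` (only `medium` is confirmed above; the field name is hypothetical):

```yaml
model: mistral/mistral-large-latest
thinking_budget: medium   # hypothetical field; "medium" is the documented default
```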
## Auto-detection
When running `docker agent run` without a config file, docker-agent automatically detects available providers. If `MISTRAL_API_KEY` is set and higher-priority providers (OpenAI, Anthropic, Google) are not configured, Mistral is selected with `mistral-small-latest` as the default model.
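That detection order can be exercised from a shell. A sketch (the variable names for the higher-priority providers are assumptions; the key value is a placeholder):

```shell
# Only the Mistral key is set, so Mistral wins auto-detection
unset OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY   # assumed variable names for OpenAI/Anthropic/Google
export MISTRAL_API_KEY="your-api-key-here"
docker agent run   # falls back to mistral-small-latest
```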
## How it works
Mistral is a built-in alias provider:
- API type: `openai_chatcompletions`
- Base URL: `https://api.mistral.ai/v1`
- Token variable: `MISTRAL_API_KEY`
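Because the provider is an alias over the OpenAI chat-completions API type, a request can be sketched by pointing any OpenAI-compatible client at the base URL above. A minimal sketch that only assembles the request, without sending it (the helper name is hypothetical):

```python
import os

# Base URL from the provider definition above
BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions request for the Mistral endpoint."""
    return {
        "url": f"{BASE_URL}/chat/completions",  # OpenAI-compatible route
        "headers": {
            # Token is read from the provider's token variable
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request("mistral-small-latest", "Say hello")
```

Passing the pieces to an HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], json=req["json"])`) would send the call, given a valid key.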