Installation
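The integration ships as a separate package; assuming the standard `langchain-mistralai` distribution, installation is typically:

```shell
pip install -U langchain-mistralai
```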
Setup
Set your Mistral API key as the MISTRAL_API_KEY environment variable.

Usage
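A minimal usage sketch, assuming the `langchain_mistralai` package is installed and `MISTRAL_API_KEY` is set in the environment (the model name is one of those listed below):

```python
from langchain_mistralai import ChatMistralAI

# Reads MISTRAL_API_KEY from the environment automatically.
llm = ChatMistralAI(model="mistral-small-latest", temperature=0)

response = llm.invoke("Say hello in one word.")
print(response.content)
```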
Streaming
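A streaming sketch under the same assumptions as above; `stream()` yields chunks whose `content` fragments can be printed as they arrive:

```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")

# Print each token fragment as soon as the API returns it.
for chunk in llm.stream("Write a haiku about the sea."):
    print(chunk.content, end="", flush=True)
```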
API Reference
ChatMistralAI
- Model name to use (e.g., mistral-large-latest, mistral-small-latest, codestral-latest).
- Sampling temperature. Controls randomness in generation.
- Maximum number of tokens to generate.
- Nucleus sampling parameter. Considers the smallest set of tokens whose cumulative probability is at least top_p. Must be in [0.0, 1.0].
- Random seed for reproducible generation.
- Whether to inject a safety prompt before all conversations.
- Timeout for requests, in seconds.
- Maximum number of retries for failed requests.
- Maximum number of concurrent requests.
- Mistral API key. Read automatically from the MISTRAL_API_KEY environment variable if not provided.
- Base URL for the API endpoint. Only specify this if using a proxy or a custom endpoint.
- Whether to stream the results or not.
- Additional model parameters not explicitly specified above.
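A configuration sketch exercising several of the options above. The keyword names follow the `langchain_mistralai` package; treat them as assumptions if your version differs:

```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(
    model="mistral-large-latest",  # model name
    temperature=0.2,               # sampling temperature
    max_tokens=512,                # cap on generated tokens
    top_p=0.9,                     # nucleus sampling threshold
    max_retries=3,                 # retries for failed requests
)
```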
Supported Models
- Mistral Large: Most capable model for complex tasks
- Mistral Small: Fast, cost-effective model for simpler tasks
- Codestral: Specialized for code generation
- Mixtral 8x7B: Open-source mixture-of-experts model
- Mixtral 8x22B: Larger MoE model
Features
- Text generation
- Function/tool calling
- JSON mode
- Streaming
- Async support
- Safe mode for content filtering
- Fine-tuning support
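The tool-calling feature above can be sketched with LangChain's `@tool` decorator and `bind_tools` (a hedged example; the `add` tool is purely illustrative, and the call requires a valid API key):

```python
from langchain_core.tools import tool
from langchain_mistralai import ChatMistralAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


# Expose the tool schema to the model; it may respond with a tool call.
llm = ChatMistralAI(model="mistral-large-latest").bind_tools([add])
result = llm.invoke("What is 2 + 3? Use the add tool.")
print(result.tool_calls)
```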
Mistral AI offers both proprietary models (Mistral Large/Small) and open-source models (Mixtral series). The open-source models can also be self-hosted.