This integration connects Mistral AI’s models to LangChain.

Installation

pip install -U langchain-mistralai

Setup

Set your Mistral API key as an environment variable:
export MISTRAL_API_KEY="your-api-key"
Get your API key from console.mistral.ai.

Usage

from langchain_mistralai import ChatMistralAI

model = ChatMistralAI(
    model="mistral-large-latest",
    temperature=0,
    max_tokens=None,
)

messages = [
    ("system", "You are a helpful assistant."),
    ("human", "What is the capital of France?"),
]

response = model.invoke(messages)
print(response.content)

Streaming

for chunk in model.stream(messages):
    print(chunk.content, end="")

API Reference

ChatMistralAI

  • model (str, default: "mistral-small"): Model name to use (e.g., mistral-large-latest, mistral-small-latest, codestral-latest).
  • temperature (float, default: 0.7): Sampling temperature. Controls randomness in generation.
  • max_tokens (int | None, default: None): Maximum number of tokens to generate.
  • top_p (float, default: 1): Nucleus sampling parameter. Considers the smallest set of tokens whose cumulative probability is at least top_p. Must be in [0.0, 1.0].
  • random_seed (int | None, default: None): Random seed for reproducible generation.
  • safe_mode (bool | None, default: None): Whether to inject a safety prompt before all conversations.
  • timeout (int, default: 120): Timeout for requests in seconds.
  • max_retries (int, default: 5): Maximum number of retries for failed requests.
  • max_concurrent_requests (int, default: 64): Maximum number of concurrent requests.
  • api_key (SecretStr | None, default: None): Mistral API key. Automatically read from the MISTRAL_API_KEY environment variable if not provided.
  • base_url (str | None, default: None): Base URL for the API endpoint. Only specify if using a proxy or custom endpoint.
  • streaming (bool, default: False): Whether to stream the results or not.
  • model_kwargs (dict, default: {}): Additional model parameters not explicitly specified.

Supported Models

  • Mistral Large: Most capable model for complex tasks
  • Mistral Small: Fast, cost-effective model for simpler tasks
  • Codestral: Specialized for code generation
  • Mixtral 8x7B: Open-source mixture-of-experts model
  • Mixtral 8x22B: Larger MoE model
See docs.mistral.ai/getting-started/models for the latest models.

Features

  • Text generation
  • Function/tool calling
  • JSON mode
  • Streaming
  • Async support
  • Safe mode for content filtering
  • Fine-tuning support
Mistral AI offers both proprietary models (Mistral Large/Small) and open-source models (Mixtral series). The open-source models can also be self-hosted.
