The langchain package provides a universal interface for initializing chat models from any supported provider, so you can switch between model providers without changing your code structure.
Core Functions
initChatModel
Initializes a chat model from a model name and optional provider. The function automatically infers the provider when possible and handles dynamic model loading.

model - The model name, optionally prefixed with the provider (e.g., "openai:gpt-4o", "anthropic:claude-3-opus-20240229", or just "gpt-4o").

modelProvider - Explicit model provider. Supported values:
- openai - OpenAI models via @langchain/openai
- anthropic - Anthropic models via @langchain/anthropic
- azure_openai - Azure OpenAI via @langchain/openai
- google-vertexai - Google Vertex AI via @langchain/google-vertexai
- google-vertexai-web - Google Vertex AI Web via @langchain/google-vertexai-web
- google-genai - Google Generative AI via @langchain/google-genai
- bedrock - AWS Bedrock via @langchain/aws
- cohere - Cohere via @langchain/cohere
- mistralai - Mistral AI via @langchain/mistralai
- groq - Groq via @langchain/groq
- ollama - Ollama via @langchain/ollama
- cerebras - Cerebras via @langchain/cerebras
- deepseek - DeepSeek via @langchain/deepseek
- xai - xAI via @langchain/xai
- fireworks - Fireworks via @langchain/community/chat_models/fireworks
- together - Together AI via @langchain/community/chat_models/togetherai
- perplexity - Perplexity via @langchain/community/chat_models/perplexity
configurableFields - Which model parameters are configurable at runtime:
- undefined - No configurable fields (default)
- "any" - All fields are configurable (⚠️ Security Note: allows changing apiKey, baseUrl, etc.)
- string[] - Specific fields that can be configured

configPrefix - Prefix for configurable fields. Useful for namespacing configuration when using multiple models.

profile - Override profiling information for the model (e.g., token limits, capabilities). If not provided, the profile is inferred from the model instance.
Additional parameters (e.g., temperature, maxTokens, apiKey) are passed through to the underlying chat model constructor.

Returns Promise<ConfigurableModel> - a configurable chat model instance.
Source: libs/langchain/src/chat_models/universal.ts:719
Examples
Basic Usage
Configurable Models
Create models that can be reconfigured at runtime:
Fully Configurable with Prefix
Binding Tools
Custom Model Profile
Provider Inference
The function automatically infers the provider from model name prefixes:
- gpt-3..., gpt-4..., gpt-5..., o1..., o3..., o4... → openai
- claude... → anthropic
- command... → cohere
- accounts/fireworks... → fireworks
- gemini... → google-vertexai
- amazon.... → bedrock
- mistral... → mistralai
- sonar..., pplx... → perplexity
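The rules above can be sketched as a standalone function. This mirrors the prefix table for illustration; it is not the library's actual implementation:

```typescript
// Maps a model name to an inferred provider, following the prefix table above.
// Returns undefined when no rule matches, in which case the caller must pass
// an explicit modelProvider.
function inferModelProvider(modelName: string): string | undefined {
  if (/^(gpt-[345]|o[134])/.test(modelName)) return "openai";
  if (modelName.startsWith("claude")) return "anthropic";
  if (modelName.startsWith("command")) return "cohere";
  if (modelName.startsWith("accounts/fireworks")) return "fireworks";
  if (modelName.startsWith("gemini")) return "google-vertexai";
  if (modelName.startsWith("amazon.")) return "bedrock";
  if (modelName.startsWith("mistral")) return "mistralai";
  if (modelName.startsWith("sonar") || modelName.startsWith("pplx")) {
    return "perplexity";
  }
  return undefined;
}

console.log(inferModelProvider("gpt-4o"));        // "openai"
console.log(inferModelProvider("claude-3-opus")); // "anthropic"
```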
ConfigurableModel
The ConfigurableModel class returned by initChatModel extends BaseChatModel and provides:
Methods
- invoke(input, options?) - Invoke the model with a single input
- stream(input, options?) - Stream the model's response
- batch(inputs, options?) - Process multiple inputs in batch
- bindTools(tools, params?) - Bind tools to the model
- withStructuredOutput(schema) - Configure structured output
- withConfig(config) - Bind configuration to the model
Properties
- profile - Model profiling information (token limits, capabilities)
Security Considerations
Setting configurableFields to "any" allows callers to override sensitive fields such as apiKey and baseUrl at runtime. Only enable this when runtime configuration comes from trusted sources; otherwise, pass an explicit list of the fields that may be configured.
Type Exports
ConfigurableModel
The model class returned by initChatModel. Extends BaseChatModel with configuration support.
