LangChain.js supports a wide range of chat model providers through dedicated integration packages. Each provider package contains the chat model implementation along with any provider-specific features.

OpenAI

GPT-4, GPT-4o, GPT-3.5 Turbo, and more

Anthropic

Claude 3.5 Sonnet, Claude 3 Opus, and Haiku

Google

Gemini 2.0, Gemini 1.5 Pro and Flash

Mistral AI

Mistral Large, Medium, and Small models

Cohere

Command R and Command R+ models

Groq

Fast inference with Llama, Mixtral, and Gemma

OpenAI

The @langchain/openai package provides access to OpenAI’s chat models including GPT-4, GPT-4o, and GPT-3.5 Turbo.

Installation

npm install @langchain/openai

Usage

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0.7,
  apiKey: process.env.OPENAI_API_KEY, // optional; read from OPENAI_API_KEY by default
});

const response = await model.invoke("Tell me a joke about programming");
console.log(response.content);

Streaming

const stream = await model.stream("Count from 1 to 10");

for await (const chunk of stream) {
  console.log(chunk.content);
}
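
Each chunk carries only the newly generated tokens, so rendering a complete answer means accumulating chunk content as it arrives. A minimal helper for that (a sketch; it assumes each chunk's content is plain text, not an array of multimodal parts):

```typescript
// Collect a token stream into a single string (sketch; skips any
// chunk whose content is not a plain string).
async function collectText(
  stream: AsyncIterable<{ content: unknown }>
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    if (typeof chunk.content === "string") full += chunk.content;
  }
  return full;
}
```

Usage (sketch): const answer = await collectText(await model.stream("Count from 1 to 10"));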

Tool Calling

import { z } from "zod";
import { tool } from "@langchain/core/tools";

const weatherTool = tool(
  async ({ location }) => {
    return `The weather in ${location} is sunny and 72°F`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a location",
    schema: z.object({
      location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    }),
  }
);

const modelWithTools = model.bindTools([weatherTool]);
const result = await modelWithTools.invoke("What's the weather in San Francisco?");
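
When the model elects to call a tool, the calls arrive on result.tool_calls (each with a name, args, and id) rather than in the text content. A minimal dispatch loop might look like this (a sketch: toolsByName is a hypothetical registry, and a real app would feed the outputs back to the model as tool messages):

```typescript
// Simplified shape of an entry in result.tool_calls.
type ToolCall = { name: string; args: Record<string, unknown>; id?: string };

// Hypothetical registry mapping tool names to their implementations.
const toolsByName: Record<string, (args: any) => Promise<string>> = {
  get_weather: async ({ location }) =>
    `The weather in ${location} is sunny and 72°F`,
};

// Run each requested tool call and collect the outputs in order.
async function runToolCalls(toolCalls: ToolCall[]): Promise<string[]> {
  const outputs: string[] = [];
  for (const call of toolCalls) {
    const fn = toolsByName[call.name];
    if (!fn) throw new Error(`Unknown tool: ${call.name}`);
    outputs.push(await fn(call.args));
  }
  return outputs;
}
```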

Anthropic

The @langchain/anthropic package provides access to Anthropic’s Claude models.

Installation

npm install @langchain/anthropic

Usage

import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20241022",
  temperature: 0.7,
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const response = await model.invoke("Explain quantum computing in simple terms");
console.log(response.content);

Vision Support

import { HumanMessage } from "@langchain/core/messages";

const response = await model.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What's in this image?" },
      {
        type: "image_url",
        image_url: {
          url: "https://example.com/image.jpg",
        },
      },
    ],
  }),
]);

Google

The @langchain/google-genai package provides access to Google’s Gemini models.

Installation

npm install @langchain/google-genai

Usage

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash-exp",
  temperature: 0.7,
  apiKey: process.env.GOOGLE_API_KEY,
});

const response = await model.invoke("Write a haiku about coding");
console.log(response.content);

Mistral AI

The @langchain/mistralai package provides access to Mistral’s models.

Installation

npm install @langchain/mistralai

Usage

import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0.7,
  apiKey: process.env.MISTRAL_API_KEY,
});

const response = await model.invoke("Explain the concept of recursion");
console.log(response.content);

Cohere

The @langchain/cohere package provides access to Cohere’s Command models.

Installation

npm install @langchain/cohere

Usage

import { ChatCohere } from "@langchain/cohere";

const model = new ChatCohere({
  model: "command-r-plus",
  temperature: 0.7,
  apiKey: process.env.COHERE_API_KEY,
});

const response = await model.invoke("Summarize the benefits of TypeScript");
console.log(response.content);

Groq

The @langchain/groq package provides fast inference with various open-source models.

Installation

npm install @langchain/groq

Usage

import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0.7,
  apiKey: process.env.GROQ_API_KEY,
});

const response = await model.invoke("What are the advantages of functional programming?");
console.log(response.content);

Additional Providers

AWS Bedrock

@langchain/aws - Access Claude, Llama, and other models via AWS

Azure OpenAI

@langchain/openai - Use OpenAI models through Azure

Ollama

@langchain/ollama - Run local models with Ollama

DeepSeek

@langchain/deepseek - DeepSeek models for reasoning

Cerebras

@langchain/cerebras - Fast inference with Cerebras hardware

xAI

@langchain/xai - Grok models from xAI

Community Integrations

Additional chat models are available in @langchain/community:

npm install @langchain/community

import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
import { ChatDeepInfra } from "@langchain/community/chat_models/deepinfra";
import { ChatMinimax } from "@langchain/community/chat_models/minimax";

Common Features

All chat models in LangChain.js implement the BaseChatModel interface and support:
  • Invoke: Single message generation
  • Stream: Token-by-token streaming
  • Batch: Process multiple inputs in parallel
  • Tool Calling: Function/tool invocation (where supported by provider)
  • Structured Output: Extract structured data with schemas
  • Vision: Image understanding (where supported by provider)
  • Callbacks: Track tokens, timing, and errors
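
Batch, for example, accepts an array of inputs and returns the responses in matching order, e.g. await model.batch(["question 1", "question 2"]). Conceptually it is a parallel fan-out; a simplified sketch of that pattern (not the library's implementation, which also handles concurrency limits and retries):

```typescript
// Simplified batch pattern: run one async call per input in parallel
// and return results in input order (Promise.all preserves order).
async function batchInvoke<T, R>(
  inputs: T[],
  call: (input: T) => Promise<R>
): Promise<R[]> {
  return Promise.all(inputs.map((input) => call(input)));
}
```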

Best Practices

  1. Use environment variables for API keys
  2. Enable streaming for better user experience
  3. Set appropriate timeouts for production applications
  4. Monitor token usage to control costs
  5. Handle rate limits with retries and backoff
  6. Use batch processing when possible for efficiency
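
Practice 5 can be sketched as a small retry wrapper with exponential backoff (a hypothetical helper for illustration; most LangChain.js chat models also accept a maxRetries constructor option that covers the common case):

```typescript
// Retry an async call, doubling the wait after each failure.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 250
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries: give up
      const delayMs = baseDelayMs * 2 ** attempt; // 250, 500, 1000, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Usage (sketch): await withRetries(() => model.invoke("..."));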

Next Steps

Working with Chat Models

Learn to use chat models effectively

Building Agents

Build autonomous agents with tool calling

Prompt Engineering

Create reusable prompts for your models

Embeddings

Generate embeddings for semantic search
