
Overview

Chat models take a list of messages as input and return an AI message as output. They support streaming, tool calling, and structured output.

BaseChatModel

The abstract base class that all chat models extend. Import:
import { BaseChatModel } from "@langchain/core/language_models/chat_models";

Methods

invoke
Generate a response for a list of messages.
async invoke(
  messages: BaseMessage[],
  options?: RunnableConfig
): Promise<AIMessage>

stream
Stream the response token by token.
async *stream(
  messages: BaseMessage[],
  options?: RunnableConfig
): AsyncGenerator<AIMessageChunk>

batch
Process multiple message lists in parallel.
async batch(
  messageLists: BaseMessage[][],
  options?: RunnableConfig
): Promise<AIMessage[]>

bindTools
Bind tools for tool calling.
bindTools(tools: ToolDefinition[]): BaseChatModel

withStructuredOutput
Get structured output matching a schema.
withStructuredOutput<T>(
  schema: StructuredOutputSchema<T>
): Runnable<BaseMessage[], T>

Common Parameters

temperature
number
Sampling temperature (0-2). Higher values make output more random. Default: 1.0
maxTokens
number
Maximum tokens to generate
timeout
number
Request timeout in milliseconds
maxRetries
number
Maximum retry attempts on failure. Default: 2
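The maxRetries behavior can be understood as a retry loop around the underlying API call. A minimal sketch of those semantics, where withRetry is an illustrative helper and not part of the LangChain API:

```typescript
// Illustrative sketch of maxRetries semantics: attempt the call
// once, then retry up to maxRetries more times before giving up.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

The real client additionally applies backoff between attempts and only retries errors it considers transient.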

Examples

Basic Usage

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0.7,
  maxTokens: 500
});

const response = await model.invoke([
  ["system", "You are a helpful assistant"],
  ["human", "Tell me a joke"]
]);

console.log(response.content);

Streaming

const stream = await model.stream([
  ["human", "Write a poem about the ocean"]
]);

for await (const chunk of stream) {
  console.log(chunk.content);
}
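Streamed chunks can also be accumulated into the complete response as they arrive. The consumption pattern is the same as for any async generator; streamWords below is an illustrative stand-in for model.stream, which yields AIMessageChunk objects instead of strings:

```typescript
// Stand-in async generator; model.stream yields AIMessageChunk
// objects whose content arrives incrementally the same way.
async function* streamWords(text: string): AsyncGenerator<string> {
  for (const word of text.split(" ")) {
    yield word + " ";
  }
}

// Accumulate chunks into the full response text.
async function collect(chunks: AsyncGenerator<string>): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    full += chunk;
  }
  return full.trimEnd();
}
```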

Tool Calling

import { tool } from "@langchain/core/tools";
import { z } from "zod";

const calculator = tool(
  async ({ expression }) => {
    // Note: eval is unsafe outside of trusted examples
    return String(eval(expression));
  },
  {
    name: "calculator",
    description: "Evaluate a mathematical expression",
    schema: z.object({
      expression: z.string().describe("The expression to evaluate")
    })
  }
);

const modelWithTools = model.bindTools([calculator]);

const response = await modelWithTools.invoke([
  ["human", "What's 25 * 4?"]
]);

if (response.tool_calls && response.tool_calls.length > 0) {
  console.log("Tool calls:", response.tool_calls);
}
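After the model returns tool calls, the usual next step is to execute each requested tool and send the results back to the model as tool messages. A minimal sketch of that dispatch step, using plain interfaces (ToolCall, ToolMessage, runToolCalls below are illustrative stand-ins for LangChain's types, not its API):

```typescript
// Plain stand-ins for the shapes LangChain uses.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  id: string;
}
interface ToolMessage {
  tool_call_id: string;
  content: string;
}

type ToolFn = (args: Record<string, unknown>) => Promise<string>;

// Execute each requested tool call and collect the tool messages
// to send back to the model on the next invoke.
async function runToolCalls(
  toolCalls: ToolCall[],
  tools: Record<string, ToolFn>
): Promise<ToolMessage[]> {
  const results: ToolMessage[] = [];
  for (const call of toolCalls) {
    const tool = tools[call.name];
    const content = tool
      ? await tool(call.args)
      : `Unknown tool: ${call.name}`;
    results.push({ tool_call_id: call.id, content });
  }
  return results;
}
```

The resulting tool messages are appended to the conversation and passed back to the model so it can produce a final answer.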

Structured Output

import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string().email()
});

const structuredModel = model.withStructuredOutput(schema);

const result = await structuredModel.invoke([
  ["human", "Extract info: John Doe is 30 years old. Email: [email protected]"]
]);

console.log(result);
// { name: "John Doe", age: 30, email: "[email protected]" }

Batch Processing

const responses = await model.batch([
  [["human", "What is 2+2?"]],
  [["human", "What is 3+3?"]],
  [["human", "What is 4+4?"]]
]);

responses.forEach((response, i) => {
  console.log(`Response ${i + 1}:`, response.content);
});
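Conceptually, batch issues the calls concurrently and returns results in input order. The core pattern is a Promise.all over the inputs; runBatch below is an illustrative sketch, not the actual implementation:

```typescript
// Illustrative sketch: process every input concurrently and
// preserve input order in the results, as batch does.
async function runBatch<I, O>(
  inputs: I[],
  call: (input: I) => Promise<O>
): Promise<O[]> {
  return Promise.all(inputs.map((input) => call(input)));
}
```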

Response Metadata

AI messages include response metadata:
const response = await model.invoke([["human", "Hi"]]);

console.log(response.response_metadata);
// {
//   model_name: "gpt-4o",
//   finish_reason: "stop",
//   usage: { prompt_tokens: 10, completion_tokens: 20, total_tokens: 30 }
// }
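The usage object can be summed across calls to track token spend over a session. A minimal sketch over the usage shape shown above (Usage and totalUsage are illustrative names, not LangChain exports):

```typescript
// Shape of the usage object shown in response_metadata above.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Sum token usage across several responses.
function totalUsage(usages: Usage[]): Usage {
  return usages.reduce(
    (acc, u) => ({
      prompt_tokens: acc.prompt_tokens + u.prompt_tokens,
      completion_tokens: acc.completion_tokens + u.completion_tokens,
      total_tokens: acc.total_tokens + u.total_tokens,
    }),
    { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
  );
}
```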

Implementing Custom Chat Models

Extend BaseChatModel to create custom implementations:
import { BaseChatModel } from "@langchain/core/language_models/chat_models";
import { AIMessage, BaseMessage } from "@langchain/core/messages";
import { ChatResult } from "@langchain/core/outputs";

class CustomChatModel extends BaseChatModel {
  _llmType(): string {
    return "custom";
  }

  async _generate(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"]
  ): Promise<ChatResult> {
    // Your implementation: call your backend and wrap the reply
    return {
      generations: [{
        message: new AIMessage("Custom response"),
        text: "Custom response"
      }]
    };
  }
}

Chat Model Integrations

Available provider integrations

Working with Chat Models

Complete guide to chat models
