Let’s start with the simplest example: invoking a chat model directly.
**1. Import the chat model**

First, import the chat model class from your chosen provider:
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
```
**2. Initialize the model**

Create an instance of the chat model:
```typescript
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
```
**3. Invoke the model**

Call the model with a prompt:
```typescript
const response = await model.invoke("What is LangChain?");
console.log(response.content);
```
The model returns an `AIMessage` object. Access the content with `.content`.
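Beyond `.content`, the returned message carries metadata about the call. The sketch below is illustrative only — it shows a rough shape with sample values, not the real class (see `AIMessage` in `@langchain/core/messages` for the full definition):

```typescript
// Rough sketch of the fields on the returned message (illustrative only;
// the actual AIMessage class in @langchain/core/messages has more fields).
interface AIMessageSketch {
  content: string;                             // the model's text reply
  response_metadata?: Record<string, unknown>; // provider-specific details
  usage_metadata?: {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
  };
}

// Sample values for illustration -- a real response is produced by the model.
const sampleResponse: AIMessageSketch = {
  content: "LangChain is a framework for building LLM-powered applications.",
  usage_metadata: { input_tokens: 12, output_tokens: 14, total_tokens: 26 },
};

console.log(sampleResponse.content);
```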
Putting the three steps together:

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const response = await model.invoke("What is LangChain?");
console.log(response.content);
// Output: LangChain is a framework for building LLM-powered applications...
```
The `initChatModel` function provides a unified way to initialize any chat model. This makes it easy to swap between providers:
```typescript
import "dotenv/config";
import { initChatModel } from "langchain/chat_models/universal";

// Initialize with automatic provider inference
const model = await initChatModel("gpt-4o-mini", {
  temperature: 0,
});

// Or explicitly specify the provider
const anthropicModel = await initChatModel("claude-3-5-sonnet-20241022", {
  modelProvider: "anthropic",
  temperature: 0,
});

// Or use the provider:model format
const geminiModel = await initChatModel("google-genai:gemini-1.5-pro", {
  temperature: 0,
});

const response = await model.invoke("Hello!");
console.log(response.content);
```
`initChatModel` automatically infers the provider from common model name prefixes (e.g., `gpt-*` → OpenAI, `claude-*` → Anthropic, `gemini-*` → Google Vertex AI).
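For intuition, that inference step amounts to a prefix lookup. The function below is an illustrative sketch, not the library's actual code — the real implementation recognizes many more model families:

```typescript
// Illustrative sketch of prefix-based provider inference.
// Not the actual langchain implementation, which covers far more prefixes.
function inferProvider(model: string): string | undefined {
  const prefixes: Array<[string, string]> = [
    ["gpt-", "openai"],
    ["claude", "anthropic"],
    ["gemini", "google-vertexai"],
  ];
  const match = prefixes.find(([prefix]) => model.startsWith(prefix));
  return match?.[1];
}

console.log(inferProvider("gpt-4o-mini"));                // openai
console.log(inferProvider("claude-3-5-sonnet-20241022")); // anthropic
console.log(inferProvider("unknown-model"));              // undefined
```

When no prefix matches, the real `initChatModel` throws rather than silently returning `undefined`, which is why explicitly passing `modelProvider` is the safer option for less common model names.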
Prompt templates help you structure and reuse prompts with dynamic variables:
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Create a prompt template with a system message and user message
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world-class technical documentation writer."],
  ["user", "{input}"],
]);

// Use the prompt
const formattedPrompt = await prompt.invoke({
  input: "What is LangChain?",
});
const response = await model.invoke(formattedPrompt);
console.log(response.content);
```
LangChain Expression Language (LCEL) lets you chain components together using the pipe operator:
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world-class technical documentation writer."],
  ["user", "{input}"],
]);

const outputParser = new StringOutputParser();

// Chain components together
const chain = prompt.pipe(model).pipe(outputParser);

// Invoke the chain
const result = await chain.invoke({
  input: "What is LangChain?",
});
console.log(result);
// Output: LangChain is a framework for building...
```
The `StringOutputParser` extracts the string content from the `AIMessage` object, making the output easier to work with.
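Conceptually, the parser's job is small: pull the string out of the message so downstream code receives plain text instead of a message object. A simplified sketch (not the real implementation, which also handles streamed chunks and non-string content blocks):

```typescript
// Simplified sketch of what StringOutputParser does conceptually.
// The real class in @langchain/core/output_parsers also handles
// streamed chunks and structured content blocks.
interface MessageLike {
  content: string;
}

function parseToString(message: MessageLike): string {
  return message.content;
}

const aiMessage: MessageLike = { content: "LangChain is a framework..." };
console.log(parseToString(aiMessage)); // LangChain is a framework...
```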
Let’s put it all together with a more realistic example that answers questions about a specific topic:
app.ts
```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Initialize the model
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are an expert assistant that provides clear, accurate answers about {topic}. " +
      "Keep your responses concise and informative.",
  ],
  ["user", "{question}"],
]);

// Build the chain
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Function to ask questions
async function askQuestion(topic: string, question: string) {
  console.log(`\nTopic: ${topic}`);
  console.log(`Question: ${question}`);
  console.log("Answer:");
  const stream = await chain.stream({ topic, question });
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
  console.log("\n" + "=".repeat(80));
}

// Ask multiple questions
await askQuestion(
  "TypeScript",
  "What are the main benefits of using TypeScript?"
);
await askQuestion(
  "LangChain",
  "How does LangChain help with building LLM applications?"
);
await askQuestion(
  "software architecture",
  "What is the difference between monolithic and microservices architecture?"
);
```
LangChain provides message classes for structured conversations:
```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  HumanMessage,
  AIMessage,
  SystemMessage,
} from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is the capital of France?"),
  new AIMessage("The capital of France is Paris."),
  new HumanMessage("What is its population?"),
];

const response = await model.invoke(messages);
console.log(response.content);
```
Message history allows the model to maintain context across multiple turns of conversation.
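The pattern behind multi-turn context is simply an array you append to between calls: add the user's message, invoke the model with the whole array, then append the model's reply so the next turn can see it. A minimal sketch with a stubbed model (the stub and its reply text are placeholders — in real code you would call `await model.invoke(history)` instead):

```typescript
// Minimal sketch of the message-history pattern with a stubbed model.
// Replace `fakeInvoke` with a real `await model.invoke(history)` call.
type Role = "system" | "human" | "ai";
interface Msg { role: Role; content: string; }

const history: Msg[] = [
  { role: "system", content: "You are a helpful assistant." },
];

function fakeInvoke(messages: Msg[]): Msg {
  // Stub: reports how many messages of context the "model" received.
  return { role: "ai", content: `Reply with ${messages.length} messages of context` };
}

function chat(userInput: string): string {
  history.push({ role: "human", content: userInput });
  const reply = fakeInvoke(history);
  history.push(reply); // keep the answer so the next turn sees it
  return reply.content;
}

console.log(chat("What is the capital of France?")); // Reply with 2 messages of context
console.log(chat("What is its population?"));        // Reply with 4 messages of context
```

Because the full array is re-sent on every call, long conversations eventually need trimming or summarization to stay within the model's context window.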
The `temperature` setting controls how deterministic the model's output is:

```typescript
// More deterministic (good for factual responses)
const deterministicModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// More creative (good for creative writing)
const creativeModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.9,
});
```