
Quickstart

This guide walks you through building your first LangChain.js application. You’ll learn how to:
  • Initialize a chat model
  • Create and use prompt templates
  • Chain components together
  • Parse model outputs
By the end, you’ll have a working LangChain application that can answer questions using an LLM.

Prerequisites

Make sure you’ve completed the Installation guide and have:
  • LangChain.js installed (langchain)
  • At least one model provider installed (e.g., @langchain/openai)
  • API keys configured in your .env file
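For example, a minimal .env for the OpenAI provider (the key value below is a placeholder) looks like:

```shell
# .env — loaded by the "dotenv/config" import at startup
OPENAI_API_KEY=your-api-key-here
```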

Basic Model Invocation

Let’s start with the simplest example: invoking a chat model directly.
Step 1: Import the chat model

First, import the chat model class from your chosen provider:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
Step 2: Initialize the model

Create an instance of the chat model:
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
Step 3: Invoke the model

Call the model with a prompt:
const response = await model.invoke("What is LangChain?");
console.log(response.content);
The model returns an AIMessage object; access the text with .content. Putting the steps together:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const response = await model.invoke("What is LangChain?");
console.log(response.content);
// Output: LangChain is a framework for building LLM-powered applications...

Using initChatModel

The initChatModel function provides a unified way to initialize any chat model. This makes it easy to swap between providers:
import "dotenv/config";
import { initChatModel } from "langchain/chat_models/universal";

// Initialize with automatic provider inference
const model = await initChatModel("gpt-4o-mini", {
  temperature: 0,
});

// Or explicitly specify the provider
const anthropicModel = await initChatModel("claude-3-5-sonnet-20241022", {
  modelProvider: "anthropic",
  temperature: 0,
});

// Or use the provider:model format
const geminiModel = await initChatModel("google-genai:gemini-1.5-pro", {
  temperature: 0,
});

const response = await model.invoke("Hello!");
console.log(response.content);
initChatModel automatically infers the provider from common model name prefixes (e.g., gpt-* → OpenAI, claude-* → Anthropic, gemini-* → Google Vertex AI).

Creating a Prompt Template

Prompt templates help you structure and reuse prompts with dynamic variables:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Create a prompt template with a system message and user message
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world-class technical documentation writer."],
  ["user", "{input}"],
]);

// Use the prompt
const formattedPrompt = await prompt.invoke({
  input: "What is LangChain?",
});

const response = await model.invoke(formattedPrompt);
console.log(response.content);

Chaining Components with LCEL

LangChain Expression Language (LCEL) lets you chain components together using the pipe operator:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world-class technical documentation writer."],
  ["user", "{input}"],
]);

const outputParser = new StringOutputParser();

// Chain components together
const chain = prompt.pipe(model).pipe(outputParser);

// Invoke the chain
const result = await chain.invoke({
  input: "What is LangChain?",
});

console.log(result);
// Output: LangChain is a framework for building...
The StringOutputParser extracts the string content from the AIMessage object, making the output easier to work with.

Streaming Responses

For better user experience, stream responses as they’re generated:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["user", "{input}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Stream the response
const stream = await chain.stream({
  input: "Write a short poem about TypeScript.",
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Building a Complete Example

Let’s put it all together with a more realistic example that answers questions about a specific topic:
app.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Initialize the model
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are an expert assistant that provides clear, accurate answers about {topic}. " +
    "Keep your responses concise and informative.",
  ],
  ["user", "{question}"],
]);

// Build the chain
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Function to ask questions
async function askQuestion(topic: string, question: string) {
  console.log(`\nTopic: ${topic}`);
  console.log(`Question: ${question}`);
  console.log("Answer:");
  
  const stream = await chain.stream({ topic, question });
  
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
  
  console.log("\n" + "=".repeat(80));
}

// Ask multiple questions
await askQuestion(
  "TypeScript",
  "What are the main benefits of using TypeScript?"
);

await askQuestion(
  "LangChain",
  "How does LangChain help with building LLM applications?"
);

await askQuestion(
  "software architecture",
  "What is the difference between monolithic and microservices architecture?"
);
Run this example:
npx tsx app.ts

Understanding the Runnable Interface

All LangChain components implement the Runnable interface, which provides three main methods:
// `chain` is the prompt → model → parser chain built earlier

// Single invocation
const result = await chain.invoke({ input: "Hello" });

// Batch processing
const results = await chain.batch([
  { input: "Hello" },
  { input: "Hi" },
  { input: "Hey" },
]);

// Streaming
const stream = await chain.stream({ input: "Hello" });
for await (const chunk of stream) {
  console.log(chunk);
}

invoke

Process a single input and return the complete result

batch

Process multiple inputs in parallel for better performance

stream

Stream results as they’re generated for responsive UIs

Working with Messages

LangChain provides message classes for structured conversations:
import { ChatOpenAI } from "@langchain/openai";
import {
  HumanMessage,
  AIMessage,
  SystemMessage,
} from "@langchain/core/messages";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is the capital of France?"),
  new AIMessage("The capital of France is Paris."),
  new HumanMessage("What is its population?"),
];

const response = await model.invoke(messages);
console.log(response.content);
Message history allows the model to maintain context across multiple turns of conversation.
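Because each call sends the full message list, long conversations grow the prompt on every turn. A common pattern is to trim older turns before invoking the model. Here is a minimal sketch using plain objects in place of the message classes (trimHistory is a hypothetical helper, not a LangChain API):

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Keep any system messages plus the most recent `maxTurns` user/assistant pairs.
function trimHistory(messages: Msg[], maxTurns: number): Msg[] {
  const system = messages.filter((m) => m.role === "system");
  const turns = messages.filter((m) => m.role !== "system");
  return [...system, ...turns.slice(-maxTurns * 2)];
}

const history: Msg[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "The capital of France is Paris." },
  { role: "user", content: "What is its population?" },
];

// Keep only the latest turn (plus the system message) before the next call.
const trimmed = trimHistory(history, 1);
console.log(trimmed.length); // 3
```

The same idea applies to the LangChain message classes above: slice the array before passing it to model.invoke.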

Error Handling

Always handle potential errors when working with LLMs:
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

try {
  const response = await model.invoke("Hello!");
  console.log(response.content);
} catch (error) {
  if (error instanceof Error) {
    console.error("Error invoking model:", error.message);
    
    // Check for specific error types
    if (error.message.includes("API key")) {
      console.error("Please check your API key configuration.");
    } else if (error.message.includes("rate limit")) {
      console.error("Rate limit exceeded. Please try again later.");
    }
  }
}
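For transient failures such as rate limits, retrying with exponential backoff often helps. LangChain's maxRetries option (covered in the next section) handles this for model calls, but the pattern generalizes. A minimal sketch (withRetry is a hypothetical helper, not a LangChain API):

```typescript
// Retry an async operation, doubling the delay between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        // Wait 500ms, 1s, 2s, ... before the next attempt.
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  throw lastError;
}

// Usage with a model call (model as defined above):
// const response = await withRetry(() => model.invoke("Hello!"));
```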

Configuration and Performance

Temperature

Control randomness in model outputs:
// More deterministic (good for factual responses)
const deterministicModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// More creative (good for creative writing)
const creativeModel = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0.9,
});

Max Tokens

Limit response length:
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  maxTokens: 100, // Limit to 100 tokens
});

Timeouts and Retries

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  timeout: 30000, // 30 second timeout
  maxRetries: 3,  // Retry up to 3 times on failure
});

Next Steps

Now that you’ve built your first LangChain application, explore more advanced topics:

Prompt Engineering

Learn advanced techniques for crafting effective prompts

Output Parsing

Structure and validate model outputs

Retrieval-Augmented Generation

Build applications that use external knowledge bases

Building Agents

Create autonomous agents that can use tools and make decisions

LangSmith Integration

Debug and monitor your LangChain applications

Production Deployment

Deploy your LangChain apps to production

Additional Resources

API Reference

Complete API documentation

Example Gallery

Browse ready-to-use examples

Community Forum

Get help and share ideas
