
Overview

The Runnable interface is the foundation of LangChain.js. All major components (chat models, tools, retrievers, etc.) implement this interface, enabling seamless composition and chaining.

Core Classes

Runnable

The base class that all runnable components extend. Import:
import { Runnable } from "@langchain/core/runnables";
Key Methods:

invoke — Invoke the runnable with a single input.
async invoke(input: RunInput, config?: RunnableConfig): Promise<RunOutput>

stream — Stream the runnable's output as it is produced.
async *stream(input: RunInput, config?: RunnableConfig): AsyncGenerator<RunOutput>

batch — Process multiple inputs in parallel.
async batch(inputs: RunInput[], config?: RunnableConfig): Promise<RunOutput[]>

pipe — Chain this runnable with another.
pipe<NewRunOutput>(coerceable: RunnableLike<RunOutput, NewRunOutput>): Runnable<RunInput, NewRunOutput>
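The contract these methods describe can be sketched in a few lines of plain TypeScript. This is an illustrative toy, not the actual LangChain implementation; SimpleRunnable and its helpers are invented for the sketch:

```typescript
// Minimal sketch of the Runnable pattern (not the actual LangChain code).
class SimpleRunnable<In, Out> {
  constructor(private fn: (input: In) => Promise<Out> | Out) {}

  // invoke: transform a single input into a single output
  async invoke(input: In): Promise<Out> {
    return this.fn(input);
  }

  // batch: run invoke over many inputs in parallel
  async batch(inputs: In[]): Promise<Out[]> {
    return Promise.all(inputs.map((i) => this.invoke(i)));
  }

  // pipe: feed this runnable's output into the next one
  pipe<NewOut>(next: SimpleRunnable<Out, NewOut>): SimpleRunnable<In, NewOut> {
    return new SimpleRunnable<In, NewOut>(async (input: In) =>
      next.invoke(await this.invoke(input))
    );
  }
}

const double = new SimpleRunnable((n: number) => n * 2);
const stringify = new SimpleRunnable((n: number) => `value: ${n}`);

const chain = double.pipe(stringify);
// await chain.invoke(5) → "value: 10"
```

Composition via pipe is what lets a prompt template, model, and output parser behave as one runnable with the same invoke/stream/batch surface.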

RunnableSequence

A sequence of runnables that execute in order, each passing its output as the input to the next.
import { RunnableSequence } from "@langchain/core/runnables";

const sequence = RunnableSequence.from([
  promptTemplate,
  model,
  outputParser
]);

const result = await sequence.invoke({ input: "Hello" });

RunnableParallel

Runs multiple runnables in parallel on the same input and combines their outputs into a single object keyed by name.
import { RunnableParallel } from "@langchain/core/runnables";

const parallel = RunnableParallel.from({
  response: model,
  summary: summarizer,
});

const result = await parallel.invoke({ input: "text" });
// { response: "...", summary: "..." }

RunnableLambda

Wraps a plain function as a Runnable.
import { RunnableLambda } from "@langchain/core/runnables";

const uppercase = RunnableLambda.from((input: string) => {
  return input.toUpperCase();
});

const result = await uppercase.invoke("hello");
// "HELLO"

RunnablePassthrough

Passes input through unchanged, optionally assigning additional fields.
import { RunnablePassthrough } from "@langchain/core/runnables";

const chain = RunnablePassthrough.assign({
  extra: (input) => `Extra: ${input.text}`
}).pipe(model);
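The pass-through-and-assign behavior can be sketched in plain TypeScript (an illustrative sketch, not the library's implementation; assign and step are invented names): the original input fields are forwarded unchanged, and each computed field is merged in alongside them.

```typescript
// Illustrative sketch of the passthrough-with-assign pattern (not the library code).
function assign<T extends Record<string, unknown>>(
  computed: Record<string, (input: T) => unknown>
): (input: T) => Record<string, unknown> {
  return (input: T) => {
    const added: Record<string, unknown> = {};
    for (const [key, fn] of Object.entries(computed)) {
      added[key] = fn(input);
    }
    // Original input fields pass through unchanged; computed fields are merged in.
    return { ...input, ...added };
  };
}

const step = assign<{ text: string }>({
  extra: (input) => `Extra: ${input.text}`,
});

// step({ text: "hi" }) → { text: "hi", extra: "Extra: hi" }
```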

RunnableBranch

Conditionally routes to different runnables based on input.
import { RunnableBranch } from "@langchain/core/runnables";

const branch = RunnableBranch.from([
  [(input) => input.length > 100, longHandler],
  [(input) => input.length > 10, mediumHandler],
  shortHandler // default
]);
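The branch tries each [condition, runnable] pair in order and runs the first runnable whose condition returns true, falling back to the default. That first-match routing logic can be sketched in plain TypeScript (illustrative only; branchFrom and the handlers are invented for the sketch):

```typescript
// Illustrative sketch of first-match branch routing (not the actual RunnableBranch code).
type Handler<In, Out> = (input: In) => Out;

function branchFrom<In, Out>(
  pairs: Array<[(input: In) => boolean, Handler<In, Out>]>,
  fallback: Handler<In, Out>
): Handler<In, Out> {
  return (input: In) => {
    // Conditions are checked top to bottom; the first match wins.
    for (const [condition, handler] of pairs) {
      if (condition(input)) return handler(input);
    }
    return fallback(input);
  };
}

const route = branchFrom<string, string>(
  [
    [(s) => s.length > 100, () => "long"],
    [(s) => s.length > 10, () => "medium"],
  ],
  () => "short"
);

// route("hello") → "short"; route("hello, world!") → "medium"
```

Because conditions are checked in order, the broadest condition should come first; a string of length 150 matches both predicates above but is handled by the "long" branch.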

Configuration

RunnableConfig

Configuration options for runnable execution:
interface RunnableConfig {
  tags?: string[];
  metadata?: Record<string, unknown>;
  callbacks?: Callbacks;
  runName?: string;
  maxConcurrency?: number;
  recursionLimit?: number;
  configurable?: Record<string, unknown>;
  runId?: string;
  signal?: AbortSignal;
}
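The config object is passed as the optional second argument to invoke, stream, and batch. As one example of what a field like signal enables, here is a self-contained sketch of a runnable honoring cancellation (illustrative only; SimpleConfig and invokeWithConfig are invented names, not the library's internals):

```typescript
// Illustrative sketch: honoring RunnableConfig-style options such as `signal`
// (not the actual LangChain implementation).
interface SimpleConfig {
  runName?: string;
  tags?: string[];
  signal?: AbortSignal;
}

async function invokeWithConfig<In, Out>(
  fn: (input: In) => Promise<Out> | Out,
  input: In,
  config?: SimpleConfig
): Promise<Out> {
  // Check for cancellation before doing any work.
  if (config?.signal?.aborted) {
    throw new Error(`Run ${config.runName ?? "(unnamed)"} was aborted`);
  }
  return fn(input);
}

const controller = new AbortController();
controller.abort();

// invokeWithConfig((s: string) => s.length, "hi", { signal: controller.signal })
//   rejects, because the signal is already aborted.
```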

Examples

Basic Chain

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI();
const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const result = await chain.invoke({ topic: "cats" });
console.log(result); // "Why did the cat..."

Streaming

const stream = await chain.stream({ topic: "dogs" });

for await (const chunk of stream) {
  console.log(chunk);
}
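Under the hood, a stream is just an async iterable of chunks, which is why the for await loop works. The shape can be sketched with a plain async generator (illustrative; streamWords is an invented stand-in for a chain's stream method):

```typescript
// Illustrative sketch of streaming as an async generator (not the library's implementation).
async function* streamWords(text: string): AsyncGenerator<string> {
  for (const word of text.split(" ")) {
    yield word; // each chunk is yielded as soon as it is ready
  }
}

(async () => {
  const chunks: string[] = [];
  for await (const chunk of streamWords("streaming one word at a time")) {
    chunks.push(chunk);
  }
  // chunks → ["streaming", "one", "word", "at", "a", "time"]
})();
```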

Batch Processing

const results = await chain.batch([
  { topic: "cats" },
  { topic: "dogs" },
  { topic: "birds" }
]);

Related

Core Concepts: Runnables — Learn about the Runnable pattern.
Chat Models — Chat models are Runnables.
