
Overview

The LLM module provides the foundational interfaces and base classes for integrating language models in LlamaIndex.TS. All LLM providers (OpenAI, Anthropic, etc.) extend these base classes.

BaseLLM

Abstract base class that all LLM implementations extend.
import { BaseLLM } from "@llamaindex/core/llms";

Methods

chat
method
Get a chat response from the LLM.
Streaming:
chat(params: LLMChatParamsStreaming): Promise<AsyncIterable<ChatResponseChunk>>
Non-streaming:
chat(params: LLMChatParamsNonStreaming): Promise<ChatResponse>
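A minimal, self-contained sketch of the two call shapes (the `EchoLLM` class and its simplified types are illustrative stand-ins, not the real `@llamaindex/core/llms` API; concrete providers implement the signatures above):

```typescript
// Toy stand-in for BaseLLM.chat() showing the two overload shapes.
// Types are simplified here, not imported from @llamaindex/core/llms.
type Msg = { role: "user" | "assistant"; content: string };
type Chunk = { delta: string };
type Resp = { message: Msg; raw: object | null };

class EchoLLM {
  // Streaming overload: `stream: true` yields an async iterable of chunks
  chat(params: { messages: Msg[]; stream: true }): Promise<AsyncIterable<Chunk>>;
  // Non-streaming overload: resolves to a single ChatResponse-like object
  chat(params: { messages: Msg[] }): Promise<Resp>;
  async chat(params: {
    messages: Msg[];
    stream?: boolean;
  }): Promise<Resp | AsyncIterable<Chunk>> {
    const text = `echo: ${params.messages[params.messages.length - 1].content}`;
    if (params.stream) {
      // Emit the reply word by word, as a provider would emit tokens
      async function* chunks() {
        for (const word of text.split(" ")) yield { delta: word + " " };
      }
      return chunks();
    }
    return { message: { role: "assistant", content: text }, raw: null };
  }
}

const llm = new EchoLLM();

// Non-streaming: one response object
const resp = await llm.chat({ messages: [{ role: "user", content: "hi" }] });

// Streaming: iterate chunks as they arrive
let out = "";
const stream = await llm.chat({
  messages: [{ role: "user", content: "hi" }],
  stream: true,
});
for await (const chunk of stream) out += chunk.delta;
```

The streaming overload is listed first so that a call passing `stream: true` resolves to the async-iterable return type.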
complete
method
Get a prompt completion from the LLM.
Streaming:
complete(params: LLMCompletionParamsStreaming): Promise<AsyncIterable<CompletionResponse>>
Non-streaming:
complete(params: LLMCompletionParamsNonStreaming): Promise<CompletionResponse>
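The non-streaming shape can be sketched the same way (the `UpperLLM` class and the `{ text, raw }` response shape are assumptions for illustration, not the library's real types):

```typescript
// Toy stand-in for the non-streaming complete() shape.
// CompletionResponse is simplified to { text, raw } here.
type CompletionResponseSketch = { text: string; raw: object | null };

class UpperLLM {
  async complete(params: { prompt: string }): Promise<CompletionResponseSketch> {
    // A real provider would call the model; we just upper-case the prompt
    return { text: params.prompt.toUpperCase(), raw: null };
  }
}

const res = await new UpperLLM().complete({ prompt: "hello" });
```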
exec
method
Execute the LLM with tool calling and structured output support.
exec<Z extends ZodSchema>(
  params: LLMChatParamsNonStreaming<AdditionalChatOptions, AdditionalMessageOptions, Z>
): Promise<ExecResponse<AdditionalMessageOptions, ZodInfer<Z>>>

Types

ChatMessage

type ChatMessage<AdditionalMessageOptions extends object = object> = {
  content: MessageContent;
  role: MessageType;
  options?: AdditionalMessageOptions;
};

ChatResponse

interface ChatResponse<AdditionalMessageOptions extends object = object> {
  message: ChatMessage<AdditionalMessageOptions>;
  raw: object | null;
}

LLMMetadata

type LLMMetadata = {
  model: string;
  temperature: number;
  topP: number;
  maxTokens?: number;
  contextWindow: number;
  tokenizer: Tokenizers | undefined;
  structuredOutput: boolean;
};
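For reference, a metadata object of this shape might look like the following (the values are made up, and the `Tokenizers` enum is simplified to a string here):

```typescript
// Illustrative LLMMetadata-shaped object; values are examples only.
type LLMMetadataSketch = {
  model: string;
  temperature: number;
  topP: number;
  maxTokens?: number;
  contextWindow: number;
  tokenizer: string | undefined; // Tokenizers enum simplified to a string
  structuredOutput: boolean;
};

const metadata: LLMMetadataSketch = {
  model: "gpt-4o-mini",
  temperature: 0.7,
  topP: 1,
  maxTokens: 1024,       // optional; omit to use the provider default
  contextWindow: 128_000,
  tokenizer: undefined,  // provider may not expose a tokenizer
  structuredOutput: true,
};
```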

Multi-modal Support

LLMs support multi-modal content through MessageContentDetail:
type MessageContentDetail =
  | MessageContentTextDetail
  | MessageContentImageDetail
  | MessageContentAudioDetail
  | MessageContentVideoDetail
  | MessageContentFileDetail;

Example: Image Input

const response = await llm.chat({
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image_url",
          image_url: { url: "data:image/jpeg;base64,..." }
        }
      ]
    }
  ]
});

Tool Calling

LLMs support function calling through the tools parameter:
import { tool } from "@llamaindex/core/tools";
import { z } from "zod";

const weatherTool = tool({
  name: "get_weather",
  description: "Get the weather for a location",
  parameters: z.object({
    location: z.string().describe("City name")
  }),
  execute: async ({ location }) => {
    return `Weather in ${location}: 72°F and sunny`;
  }
});

const response = await llm.chat({
  messages: [{ role: "user", content: "What's the weather in SF?" }],
  tools: [weatherTool]
});

Structured Output

Use Zod schemas for type-safe structured output:
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string().email()
});

const result = await llm.exec({
  messages: [{ role: "user", content: "Extract info: John is 30, [email protected]" }],
  responseFormat: schema
});

console.log(result.object); // { name: "John", age: 30, email: "[email protected]" }
