Overview

The BaseLlm abstract class provides the foundation for all LLM implementations in ADK-TS. It defines the core interface for generating content, managing streaming responses, and handling live connections.

Class Definition

abstract class BaseLlm {
  model: string;
  protected logger: Logger;

  constructor(model: string);
  static supportedModels(): string[];
  generateContentAsync(llmRequest: LlmRequest, stream?: boolean): AsyncGenerator<LlmResponse, void, unknown>;
  protected abstract generateContentAsyncImpl(llmRequest: LlmRequest, stream?: boolean): AsyncGenerator<LlmResponse, void, unknown>;
  protected maybeAppendUserContent(llmRequest: LlmRequest): void;
  connect(llmRequest: LlmRequest): BaseLLMConnection;
}

Properties

model
string
required
The name of the LLM model, e.g., gemini-2.5-flash or gpt-4.
logger
Logger
Protected logger instance for debugging and telemetry. Automatically initialized with name "BaseLlm".

Constructor

model
string
required
The model identifier string used to instantiate the LLM.
const llm = new MyLlm("gpt-4");

Static Methods

supportedModels()

Returns a list of regex patterns for model names that this LLM class supports. Used by LlmRegistry for automatic model resolution.
Returns: string[] - Array of regex pattern strings
Default implementation: Returns an empty array ([])
static supportedModels(): string[] {
  return ["^gpt-.*", "^o1-.*"];
}
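The registry's resolution step can be sketched as a first-match scan over these patterns. This is illustrative only: the function name `resolveModel` and the registry shape are assumptions, not the actual LlmRegistry API.

```typescript
// Sketch: resolve a model name against the regex patterns returned by
// each class's supportedModels(). First matching entry wins.
function resolveModel(
  modelName: string,
  registry: Array<{ name: string; patterns: string[] }>
): string | undefined {
  for (const entry of registry) {
    if (entry.patterns.some((p) => new RegExp(p).test(modelName))) {
      return entry.name;
    }
  }
  return undefined;
}

const registry = [
  { name: "OpenAiLlm", patterns: ["^gpt-.*", "^o1-.*"] },
  { name: "GeminiLlm", patterns: ["^gemini-.*"] },
];
```

With this sketch, `resolveModel("gpt-4", registry)` would pick the first entry whose pattern matches, which is why pattern specificity matters when several LLM classes are registered.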

Methods

generateContentAsync()

Generates content from the LLM based on the provided request. Handles both streaming and non-streaming modes with automatic telemetry and error tracking.
llmRequest
LlmRequest
required
The request object containing contents, tools, and configuration.
stream
boolean
default:"false"
Whether to use streaming mode. When true, yields multiple responses as they arrive.
Returns: AsyncGenerator<LlmResponse, void, unknown>
LlmResponse
LlmResponse
For non-streaming calls, yields one complete response. For streaming calls, yields multiple partial responses that should be merged.
// Non-streaming
for await (const response of llm.generateContentAsync(request)) {
  console.log(response.text);
}

// Streaming
for await (const chunk of llm.generateContentAsync(request, true)) {
  process.stdout.write(chunk.text || "");
}
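Merging the streaming partials can be as simple as concatenating each chunk's text. A minimal sketch, assuming each chunk exposes the optional `text` field used in the example above:

```typescript
// Sketch: accumulate partial streaming responses into one final string.
async function collectStream(
  chunks: AsyncIterable<{ text?: string }>
): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    full += chunk.text ?? "";
  }
  return full;
}
```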
Telemetry Features:
  • Tracks token usage (input, output, total)
  • Measures time-to-first-token for streaming
  • Records chunk count and timing
  • Emits OpenTelemetry events for prompts and completions
  • Captures finish reasons and error states
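The time-to-first-token measurement can be illustrated by wrapping the underlying generator. This is a sketch of the idea only, not the actual telemetry code:

```typescript
// Sketch: wrap an async generator to record the delay before its
// first yielded item, passing all items through unchanged.
async function* withFirstTokenTiming<T>(
  source: AsyncGenerator<T>,
  onFirstToken: (elapsedMs: number) => void
): AsyncGenerator<T> {
  const start = Date.now();
  let first = true;
  for await (const item of source) {
    if (first) {
      onFirstToken(Date.now() - start);
      first = false;
    }
    yield item;
  }
}
```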

generateContentAsyncImpl()

Abstract method - Must be implemented by subclasses to provide the actual LLM API integration.
llmRequest
LlmRequest
required
The request object to process.
stream
boolean
Whether streaming is enabled.
Returns: AsyncGenerator<LlmResponse, void, unknown>
protected async *generateContentAsyncImpl(
  llmRequest: LlmRequest,
  stream?: boolean
): AsyncGenerator<LlmResponse, void, unknown> {
  // Implementation-specific logic here
  yield new LlmResponse({ text: "Hello" });
}

maybeAppendUserContent()

Protected method that ensures proper conversation structure by appending user content when necessary.
llmRequest
LlmRequest
required
The request to potentially modify.
Behavior:
  • If no contents exist, adds a user message prompting the model to follow system instructions
  • If the last message isn’t from the user, appends a continuation prompt
  • Prevents empty model responses and maintains conversation flow
protected maybeAppendUserContent(llmRequest: LlmRequest): void {
  // Modifies llmRequest.contents in place
}
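The behavior above can be sketched as follows. The `Content` shape and the exact prompt strings are illustrative assumptions, not the library's actual values:

```typescript
// Hypothetical Content shape (Gemini-style role + parts).
interface Content {
  role: string;
  parts: Array<{ text: string }>;
}

// Sketch of the described behavior: ensure the conversation ends
// with a user turn so the model has something to respond to.
function maybeAppendUserContent(contents: Content[]): void {
  if (contents.length === 0) {
    // No contents: prompt the model to follow its system instructions.
    contents.push({
      role: "user",
      parts: [{ text: "Handle the requests as specified in the system instruction." }],
    });
  } else if (contents[contents.length - 1].role !== "user") {
    // Last message is not from the user: append a continuation prompt.
    contents.push({
      role: "user",
      parts: [{ text: "Continue processing the previous requests as instructed." }],
    });
  }
}
```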

connect()

Creates a live bidirectional connection to the LLM for real-time interactions.
llmRequest
LlmRequest
required
The initial request configuration for the connection.
Returns: BaseLLMConnection
Default implementation: Throws an error indicating live connections are not supported.
try {
  const connection = llm.connect(request);
  await connection.sendContent(content);
} catch (error) {
  // Most LLMs don't support live connections
}

Implementation Example

import { BaseLlm } from "@iqai/adk";
import type { LlmRequest, LlmResponse } from "@iqai/adk";

export class MyCustomLlm extends BaseLlm {
  static supportedModels(): string[] {
    return ["^my-model-.*"];
  }

  protected async *generateContentAsyncImpl(
    llmRequest: LlmRequest,
    stream?: boolean
  ): AsyncGenerator<LlmResponse, void, unknown> {
    // Call your LLM API
    const response = await fetch("https://api.example.com/generate", {
      method: "POST",
      body: JSON.stringify({
        model: this.model,
        messages: llmRequest.contents,
        stream: stream || false,
      }),
    });

    if (!response.ok) {
      throw new Error(`LLM API error: ${response.status}`);
    }

    if (stream && response.body) {
      // Handle streaming response: decode raw bytes as they arrive
      // (response.body is async-iterable in Node 18+)
      const decoder = new TextDecoder();
      for await (const chunk of response.body) {
        yield new LlmResponse({ text: decoder.decode(chunk, { stream: true }) });
      }
    } else {
      // Handle non-streaming response
      const data = await response.json();
      yield new LlmResponse({
        text: data.content,
        usageMetadata: {
          promptTokenCount: data.usage.input_tokens,
          candidatesTokenCount: data.usage.output_tokens,
          totalTokenCount: data.usage.total_tokens,
        },
      });
    }
  }
}

Error Handling

The generateContentAsync method automatically:
  • Catches and logs errors with telemetry
  • Records error metrics for monitoring
  • Rethrows exceptions for caller handling
try {
  for await (const response of llm.generateContentAsync(request)) {
    // Process response
  }
} catch (error) {
  // Error has already been logged with telemetry
  console.error("LLM call failed:", error.message);
}

Source Reference

See implementation: /packages/adk/src/models/base-llm.ts