
Overview

The LlmRequest class encapsulates all parameters needed to make a request to an LLM, including conversation contents, tools, output schemas, system instructions, and generation configuration.

Class Definition

class LlmRequest {
  model?: string;
  contents: Content[];
  config?: GenerateContentConfig;
  cacheConfig?: ContextCacheConfig;
  cacheMetadata?: CacheMetadata;
  cacheableContentsTokenCount?: number;
  liveConnectConfig: LiveConnectConfig;
  toolsDict: Record<string, BaseTool>;

  constructor(data?: Partial<LlmRequest>);
  appendInstructions(instructions: string[]): void;
  appendTools(tools: BaseTool[]): void;
  setOutputSchema(baseModel: any): void;
  getSystemInstructionText(): string | undefined;
  static extractTextFromContent(content: any): string;
}

Properties

model
string
The model name identifier. Optional, as it may be set by the LLM instance. Examples: "gpt-4", "gemini-2.5-flash"
contents
Content[]
required
Array of conversation messages to send to the model. Each Content object contains a role and parts.
contents: [
  { role: "user", parts: [{ text: "Hello!" }] },
  { role: "model", parts: [{ text: "Hi there!" }] },
]
config
GenerateContentConfig
Additional configuration for content generation. Do not add tools to it directly; call appendTools() instead, which populates both config.tools and toolsDict.
cacheConfig
ContextCacheConfig
Configuration for context caching to reduce latency and costs.
cacheMetadata
CacheMetadata
Metadata from previous requests for cache management.
cacheableContentsTokenCount
number
Token count from previous prompt, used for cache size validation.
liveConnectConfig
LiveConnectConfig
Configuration for live bidirectional connections.
toolsDict
Record<string, BaseTool>
required
Dictionary mapping tool names to BaseTool instances for execution.
toolsDict: {
  "search_web": searchTool,
  "get_weather": weatherTool,
}

Constructor

data
Partial<LlmRequest>
Optional initialization data for all properties.
const request = new LlmRequest({
  model: "gpt-4",
  contents: [{ role: "user", parts: [{ text: "Hello" }] }],
  config: {
    temperature: 0.7,
    maxOutputTokens: 1000,
  },
});

Methods

appendInstructions()

Appends additional instructions to the system instruction, creating or extending the existing system instruction text.
instructions
string[]
required
Array of instruction strings to append.
request.appendInstructions([
  "Always respond in JSON format.",
  "Be concise and accurate.",
]);
Behavior:
  • Creates config object if it doesn’t exist
  • Joins multiple instructions with double newlines (\n\n)
  • Appends to existing system instruction or creates new one
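The joining behavior above can be sketched as a standalone function. This is a simplified model of the documented behavior, not the library's actual implementation:

```typescript
// Simplified model of appendInstructions() joining behavior
// (illustrative only; not the @iqai/adk implementation).
function joinInstructions(
  existing: string | undefined,
  instructions: string[],
): string {
  // Multiple instructions are joined with double newlines.
  const appended = instructions.join("\n\n");
  // Appended to an existing system instruction, or used as a new one.
  return existing ? `${existing}\n\n${appended}` : appended;
}

console.log(
  joinInstructions("You are a helpful assistant.", [
    "Always respond in JSON format.",
    "Be concise and accurate.",
  ]),
);
```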

appendTools()

Appends tools to the request, converting them to function declarations and adding them to the tools dictionary.
tools
BaseTool[]
required
Array of tool instances to add.
import { SearchTool, CalculatorTool } from "@iqai/adk";

request.appendTools([new SearchTool(), new CalculatorTool()]);
Behavior:
  • Calls getDeclaration() on each tool
  • Adds declarations to config.tools array
  • Populates toolsDict for tool execution
  • Skips tools without valid declarations
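The behavior above can be modeled with minimal stand-in types. The type and function names here are illustrative; the real BaseTool and function-declaration types come from @iqai/adk:

```typescript
// Stand-in types to illustrate the documented appendTools() behavior;
// not the library's actual internals.
interface FunctionDeclaration {
  name: string;
  description: string;
}

interface ToolLike {
  name: string;
  getDeclaration(): FunctionDeclaration | undefined;
}

function appendToolsSketch(
  toolsDict: Record<string, ToolLike>,
  declarations: FunctionDeclaration[],
  tools: ToolLike[],
): void {
  for (const tool of tools) {
    const decl = tool.getDeclaration();
    if (!decl) continue; // skip tools without valid declarations
    declarations.push(decl); // declarations go to config.tools
    toolsDict[tool.name] = tool; // registered for later execution
  }
}
```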

setOutputSchema()

Configures the request to return structured JSON output matching a schema.
baseModel
any
required
The JSON schema or Zod schema defining the expected output structure.
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string().email(),
});

request.setOutputSchema(schema);
Effects:
  • Sets config.responseSchema to the provided schema
  • Sets config.responseMimeType to "application/json"
  • Ensures model returns parseable JSON
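Because responseMimeType is set to "application/json", the model's text output can be parsed directly. A minimal sketch of handling such a response; the response text below is a stand-in value, not real model output:

```typescript
// Stand-in for the model's raw JSON text output; in practice this
// comes from the LLM response.
const responseText = `{"name":"Ada Lovelace","age":36,"email":"ada@example.com"}`;

// With responseMimeType set to "application/json", the text should
// parse directly into the shape declared by the schema.
const contact = JSON.parse(responseText) as {
  name: string;
  age: number;
  email: string;
};

console.log(contact.name);
```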

getSystemInstructionText()

Extracts the system instruction as plain text, handling both string and Content type system instructions.
Returns: string | undefined
const instructions = request.getSystemInstructionText();
if (instructions) {
  console.log("System instructions:", instructions);
}
Behavior:
  • Returns undefined if no system instruction exists
  • Returns string directly if system instruction is a string
  • Extracts and concatenates text from Content parts
  • Falls back to string conversion for other types
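The fallback chain above can be modeled with a standalone sketch. The types here are simplified stand-ins; the real Content type comes from @google/genai:

```typescript
// Simplified local types illustrating the documented fallback chain;
// not the library implementation.
type PartLike = { text?: string };
type ContentLike = { role?: string; parts?: PartLike[] };

function systemInstructionText(
  instruction: string | ContentLike | undefined,
): string | undefined {
  if (instruction === undefined) return undefined; // no system instruction
  if (typeof instruction === "string") return instruction; // plain string
  if (instruction.parts) {
    // Extract and concatenate text from Content parts.
    return instruction.parts.map((p) => p.text ?? "").join("");
  }
  return String(instruction); // fallback string conversion
}
```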

extractTextFromContent() (static)

Static utility method to extract text content from various content formats.
content
any
required
The content to extract text from (string, array, or Content object).
Returns: string
const text = LlmRequest.extractTextFromContent({
  role: "user",
  parts: [{ text: "Hello" }, { text: " world" }],
});
console.log(text); // "Hello world"
Handles:
  • Strings: Returns as-is
  • Arrays: Concatenates text from all parts
  • Content objects: Extracts from parts property
  • Other types: Converts to string
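The handling rules above can be sketched as a small recursive function, assuming the behavior described in this section (illustrative only, not the library's code):

```typescript
type TextPart = { text?: string };

// Simplified model of the documented extraction rules.
function extractTextSketch(content: unknown): string {
  if (typeof content === "string") return content; // strings returned as-is
  if (Array.isArray(content)) {
    // Arrays: concatenate text from all parts.
    return content.map((p) => (p as TextPart).text ?? "").join("");
  }
  if (content !== null && typeof content === "object" && "parts" in content) {
    // Content objects: recurse into the parts property.
    return extractTextSketch((content as { parts: unknown }).parts);
  }
  return String(content); // other types: string conversion
}

console.log(
  extractTextSketch({
    role: "user",
    parts: [{ text: "Hello" }, { text: " world" }],
  }),
);
```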

Usage Examples

Basic Request

import { LlmRequest } from "@iqai/adk";

const request = new LlmRequest({
  contents: [
    {
      role: "user",
      parts: [{ text: "Explain quantum computing" }],
    },
  ],
  config: {
    temperature: 0.7,
    maxOutputTokens: 500,
  },
});

Request with Tools

import { LlmRequest, BaseTool } from "@iqai/adk";
import { z } from "zod";

class WeatherTool extends BaseTool {
  name = "get_weather";
  description = "Get current weather for a location";
  inputSchema = z.object({
    location: z.string(),
  });

  async execute(input: any) {
    return { temp: 72, condition: "sunny" };
  }
}

const request = new LlmRequest();
request.appendTools([new WeatherTool()]);
request.contents = [
  {
    role: "user",
    parts: [{ text: "What's the weather in London?" }],
  },
];

Structured Output Request

import { LlmRequest } from "@iqai/adk";
import { z } from "zod";

const request = new LlmRequest({
  contents: [
    {
      role: "user",
      parts: [{ text: "Extract contact info from this email: ..." }],
    },
  ],
});

const contactSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  phone: z.string().optional(),
});

request.setOutputSchema(contactSchema);
request.appendInstructions(["Extract all contact information accurately."]);

Multi-turn Conversation

const request = new LlmRequest({
  contents: [
    { role: "user", parts: [{ text: "Hello!" }] },
    { role: "model", parts: [{ text: "Hi! How can I help?" }] },
    { role: "user", parts: [{ text: "Tell me about TypeScript" }] },
  ],
  config: {
    systemInstruction: "You are a helpful programming assistant.",
  },
});
Related

  • BaseLlm - Base class that processes LlmRequest
  • LlmResponse - Response structure from LLMs
  • BaseTool - Tool interface for function calling

Type Imports

import type {
  Content,
  GenerateContentConfig,
  LiveConnectConfig,
} from "@google/genai";
import type { ContextCacheConfig } from "@adk/agents";
import type { CacheMetadata } from "@adk/models";
import type { BaseTool } from "@adk/tools";

Source Reference

See implementation: /packages/adk/src/models/llm-request.ts
