## Overview
The LlmRequest class encapsulates all parameters needed to make a request to an LLM, including conversation contents, tools, output schemas, system instructions, and generation configuration.
## Class Definition

```typescript
class LlmRequest {
  model?: string;
  contents: Content[];
  config?: GenerateContentConfig;
  cacheConfig?: ContextCacheConfig;
  cacheMetadata?: CacheMetadata;
  cacheableContentsTokenCount?: number;
  liveConnectConfig: LiveConnectConfig;
  toolsDict: Record<string, BaseTool>;

  constructor(data?: Partial<LlmRequest>);

  appendInstructions(instructions: string[]): void;
  appendTools(tools: BaseTool[]): void;
  setOutputSchema(baseModel: any): void;
  getSystemInstructionText(): string | undefined;
  static extractTextFromContent(content: any): string;
}
```
## Properties

### model

`string` · optional

The model name identifier. Optional because it may be set by the LLM instance. Examples: `"gpt-4"`, `"gemini-2.5-flash"`.

### contents

`Content[]` · required

Array of conversation messages to send to the model. Each `Content` object contains a role and parts.

```typescript
contents: [
  { role: "user", parts: [{ text: "Hello!" }] },
  { role: "model", parts: [{ text: "Hi there!" }] },
]
```
### config

`GenerateContentConfig` · optional

Additional configuration for content generation. Should NOT contain tools directly (use `toolsDict` instead).

`GenerateContentConfig` properties:

- `systemInstruction`: System-level instructions for the model's behavior.
- `temperature`: Controls randomness (0.0 to 2.0). Lower values are more deterministic.
- `maxOutputTokens`: Maximum number of tokens to generate.
- `topP`: Nucleus sampling parameter (0.0 to 1.0).
- `topK`: Top-K sampling parameter.
- `responseSchema`: JSON schema for structured output.
- `responseMimeType`: MIME type for the response (e.g., `"application/json"`).
- `tools`: Tool declarations (managed by `appendTools()`).
### cacheConfig

`ContextCacheConfig` · optional

Configuration for context caching to reduce latency and costs.
### cacheMetadata

`CacheMetadata` · optional

Metadata from previous requests for cache management.
### cacheableContentsTokenCount

`number` · optional

Token count from the previous prompt, used for cache size validation.
### liveConnectConfig

`LiveConnectConfig` · required

Configuration for live bidirectional connections.
### toolsDict

`Record<string, BaseTool>` · required

Dictionary mapping tool names to `BaseTool` instances for execution.

```typescript
toolsDict: {
  "search_web": searchTool,
  "get_weather": weatherTool,
}
```
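To illustrate how a `toolsDict`-style record is typically consumed, here is a minimal, library-free sketch of dispatching a model function call by name. The `ToolLike` interface and `dispatch` helper are illustrative stand-ins, not part of the ADK API:

```typescript
// Illustrative only: a simplified stand-in for BaseTool and toolsDict dispatch.
interface ToolLike {
  name: string;
  execute(input: unknown): Promise<unknown>;
}

// A toolsDict-style record mapping tool names to instances.
const toolsDict: Record<string, ToolLike> = {
  get_weather: {
    name: "get_weather",
    async execute(_input: unknown) {
      return { temp: 72, condition: "sunny" };
    },
  },
};

// When the model returns a function call, look the tool up by name and run it.
async function dispatch(name: string, args: unknown): Promise<unknown> {
  const tool = toolsDict[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(args);
}
```

Keying the record by tool name is what lets the framework route a model's function call directly to the matching instance without scanning a list.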
## Constructor

`constructor(data?: Partial<LlmRequest>)`

**data**: Optional initialization data for all properties.

```typescript
const request = new LlmRequest({
  model: "gpt-4",
  contents: [{ role: "user", parts: [{ text: "Hello" }] }],
  config: {
    temperature: 0.7,
    maxOutputTokens: 1000,
  },
});
```
## Methods

### appendInstructions()

`appendInstructions(instructions: string[]): void`

Appends additional instructions to the system instruction, creating or extending the existing system instruction text.

**instructions**: Array of instruction strings to append.

```typescript
request.appendInstructions([
  "Always respond in JSON format.",
  "Be concise and accurate.",
]);
```

Behavior:

- Creates the `config` object if it doesn't exist
- Joins multiple instructions with double newlines (`\n\n`)
- Appends to the existing system instruction, or creates a new one
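The joining behavior can be sketched in isolation. The `joinInstructions` helper below is hypothetical; the actual logic lives inside `LlmRequest`:

```typescript
// Hypothetical helper mirroring the documented join/append behavior.
function joinInstructions(
  existing: string | undefined,
  instructions: string[],
): string {
  // Multiple instructions are joined with double newlines.
  const appended = instructions.join("\n\n");
  // Extend the existing system instruction, or create a new one.
  return existing ? `${existing}\n\n${appended}` : appended;
}

joinInstructions(undefined, ["Always respond in JSON format.", "Be concise."]);
// "Always respond in JSON format.\n\nBe concise."
```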
### appendTools()

`appendTools(tools: BaseTool[]): void`

Appends tools to the request, converting them to function declarations and adding them to the tools dictionary.

**tools**: Array of tool instances to add.

```typescript
import { SearchTool, CalculatorTool } from "@iqai/adk";

request.appendTools([new SearchTool(), new CalculatorTool()]);
```

Behavior:

- Calls `getDeclaration()` on each tool
- Adds declarations to the `config.tools` array
- Populates `toolsDict` for tool execution
- Skips tools without valid declarations
### setOutputSchema()

`setOutputSchema(baseModel: any): void`

Configures the request to return structured JSON output matching a schema.

**baseModel**: The JSON schema or Zod schema defining the expected output structure.

```typescript
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string().email(),
});

request.setOutputSchema(schema);
```

Effects:

- Sets `config.responseSchema` to the provided schema
- Sets `config.responseMimeType` to `"application/json"`
- Ensures the model returns parseable JSON
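Because the response MIME type is forced to `application/json`, the model's reply text can be fed straight to `JSON.parse`. The reply string below is a made-up example, not real model output:

```typescript
// Hypothetical model reply after setOutputSchema (example data, not real output).
const reply = '{"name":"Ada","age":36,"email":"ada@example.com"}';

// With responseMimeType set to "application/json", the text is parseable JSON.
const parsed = JSON.parse(reply) as { name: string; age: number; email: string };
```

In practice you would also validate the parsed object against the same Zod schema (e.g. `schema.parse(JSON.parse(reply))`) rather than trusting the cast.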
### getSystemInstructionText()

`getSystemInstructionText(): string | undefined`

Extracts the system instruction as plain text, handling both string and `Content` type system instructions.

**Returns:** `string | undefined`

```typescript
const instructions = request.getSystemInstructionText();
if (instructions) {
  console.log("System instructions:", instructions);
}
```

Behavior:

- Returns `undefined` if no system instruction exists
- Returns the string directly if the system instruction is a string
- Extracts and concatenates text from `Content` parts
- Falls back to string conversion for other types
### extractTextFromContent()

`static extractTextFromContent(content: any): string`

Static utility method to extract text content from various content formats.

**content**: The content to extract text from (string, array, or `Content` object).

**Returns:** `string`

```typescript
const text = LlmRequest.extractTextFromContent({
  role: "user",
  parts: [{ text: "Hello" }, { text: " world" }],
});

console.log(text); // "Hello world"
```

Handles:

- Strings: returned as-is
- Arrays: concatenates text from all parts
- `Content` objects: extracts from the `parts` property
- Other types: converted to string
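The handling rules above can be sketched as a standalone function. This is an illustrative re-implementation, not the library source:

```typescript
// Illustrative re-implementation of the documented extraction rules.
type Part = { text?: string };

function extractText(content: unknown): string {
  // Strings: returned as-is.
  if (typeof content === "string") return content;
  // Arrays: concatenate text from all parts.
  if (Array.isArray(content)) {
    return content.map((p: Part) => p.text ?? "").join("");
  }
  // Content objects: extract from the parts property.
  const parts = (content as { parts?: Part[] } | null)?.parts;
  if (Array.isArray(parts)) {
    return parts.map((p) => p.text ?? "").join("");
  }
  // Other types: converted to string.
  return String(content);
}

extractText({ parts: [{ text: "Hello" }, { text: " world" }] });
// "Hello world"
```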
## Usage Examples

### Basic Request

```typescript
import { LlmRequest } from "@iqai/adk";

const request = new LlmRequest({
  contents: [
    {
      role: "user",
      parts: [{ text: "Explain quantum computing" }],
    },
  ],
  config: {
    temperature: 0.7,
    maxOutputTokens: 500,
  },
});
```
### Request with Tools

```typescript
import { LlmRequest, BaseTool } from "@iqai/adk";
import { z } from "zod";

class WeatherTool extends BaseTool {
  name = "get_weather";
  description = "Get current weather for a location";
  inputSchema = z.object({
    location: z.string(),
  });

  async execute(input: any) {
    return { temp: 72, condition: "sunny" };
  }
}

const request = new LlmRequest();
request.appendTools([new WeatherTool()]);
request.contents = [
  {
    role: "user",
    parts: [{ text: "What's the weather in London?" }],
  },
];
```
### Structured Output Request

```typescript
import { LlmRequest } from "@iqai/adk";
import { z } from "zod";

const request = new LlmRequest({
  contents: [
    {
      role: "user",
      parts: [{ text: "Extract contact info from this email: ..." }],
    },
  ],
});

const contactSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  phone: z.string().optional(),
});

request.setOutputSchema(contactSchema);
request.appendInstructions(["Extract all contact information accurately."]);
```
Multi-turn Conversation
const request = new LlmRequest ({
contents: [
{ role: "user" , parts: [{ text: "Hello!" }] },
{ role: "model" , parts: [{ text: "Hi! How can I help?" }] },
{ role: "user" , parts: [{ text: "Tell me about TypeScript" }] },
],
config: {
systemInstruction: "You are a helpful programming assistant." ,
},
});
## Related

- `BaseLlm`: Base class that processes `LlmRequest`
- `LlmResponse`: Response structure from LLMs
- `BaseTool`: Tool interface for function calling
## Type Imports

```typescript
import type {
  Content,
  GenerateContentConfig,
  LiveConnectConfig,
} from "@google/genai";
import type { ContextCacheConfig } from "@adk/agents";
import type { CacheMetadata } from "@adk/models";
import type { BaseTool } from "@adk/tools";
```
## Source Reference

See implementation: `/packages/adk/src/models/llm-request.ts`