ADK-TS provides a sophisticated flow system for controlling how agents process LLM requests and responses. The BaseLlmFlow class orchestrates the multi-step interaction loop, while processors allow you to inject custom logic at key points in the request/response lifecycle.
The flow runs in an iterative loop until reaching a final response:
```typescript
async *runAsync(invocationContext: InvocationContext): AsyncGenerator<Event> {
  let lastEvent: Event | undefined;
  while (true) {
    for await (const event of this._runOneStepAsync(invocationContext)) {
      lastEvent = event;
      yield event;
    }
    if (lastEvent?.isFinalResponse()) {
      break; // Exit when the conversation is complete
    }
  }
}
```
Each step consists of:
1. **Preprocessing**: prepare the LLM request with tools and context
2. **LLM call**: execute the model with tracing
3. **Postprocessing**: handle responses and function calls
The flow automatically handles tool execution, agent transfers, and multi-turn conversations until reaching a final response.
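The three-phase step above can be sketched as a standalone function. This is a simplified illustration with hypothetical types and names (`LlmRequest`, `LlmResponse`, `runOneStep`, `callModel`), not the actual ADK-TS internals: request processors mutate the request before the model call, and response processors run on the raw result afterward.

```typescript
// Hypothetical, simplified shapes for illustration only
interface LlmRequest { prompt: string; tools: string[]; }
interface LlmResponse { text: string; }

type RequestProcessor = (req: LlmRequest) => void;
type ResponseProcessor = (res: LlmResponse) => void;

async function runOneStep(
  prompt: string,
  requestProcessors: RequestProcessor[],
  responseProcessors: ResponseProcessor[],
  callModel: (req: LlmRequest) => Promise<LlmResponse>,
): Promise<LlmResponse> {
  // 1. Preprocessing: build the request, then let each processor adjust it
  const request: LlmRequest = { prompt, tools: [] };
  for (const p of requestProcessors) p(request);

  // 2. LLM call: execute the model against the prepared request
  const response = await callModel(request);

  // 3. Postprocessing: each response processor sees the raw response
  for (const p of responseProcessors) p(response);
  return response;
}
```

In the real flow each phase also emits events (tool calls, agent transfers); the sketch keeps only the ordering guarantee.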
Content Filtering

Filter or redact sensitive information from responses before they reach users.
```typescript
class ContentFilterProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    if (response.content?.parts) {
      for (const part of response.content.parts) {
        if (part.text) {
          part.text = this.redactSensitiveData(part.text);
        }
      }
    }
  }
}
```
Response Validation
Validate responses against schemas or business rules.
```typescript
class ValidationProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    const isValid = await this.validateResponse(response);
    if (!isValid) {
      // Retry or handle the invalid response
      throw new Error('Invalid response format');
    }
  }
}
```
```typescript
import { SingleFlow } from '@iqai/adk';

const flow = new SingleFlow();

// Add request processors
flow.requestProcessors.push(
  new CustomRequestProcessor(),
  new AuthRequestProcessor(),
  new CacheRequestProcessor(),
);

// Add response processors
flow.responseProcessors.push(
  new ValidationProcessor(),
  new MetricsProcessor(),
  new ContentFilterProcessor(),
);

// Use with an agent
const agent = new AgentBuilder()
  .withName('MyAgent')
  .withModel('gpt-4')
  .withFlow(flow)
  .buildLlm();
```
Processors run in the order they’re added. Request processors execute before the LLM call, while response processors execute after. Be mindful of dependencies between processors.
```typescript
class ConditionalProcessor extends BaseLlmRequestProcessor {
  async *runAsync(ctx, request) {
    // Only process for specific agents
    if (ctx.agent.name === 'AnalyticsAgent') {
      request.config.temperature = 0.1; // More deterministic output
    }
    // Only cap output tokens in production
    if (process.env.NODE_ENV === 'production') {
      request.config.maxOutputTokens = 2000;
    }
  }
}
```