Overview

ADK-TS provides a sophisticated flow system for controlling how agents process LLM requests and responses. The BaseLlmFlow class orchestrates the multi-step interaction loop, while processors allow you to inject custom logic at key points in the request/response lifecycle.

BaseLlmFlow

The BaseLlmFlow class is the foundation of agent execution, managing the complete lifecycle from request preprocessing to response handling.

Flow Architecture

import { BaseLlmFlow } from '@iqai/adk';

// The flow handles:
// 1. Request preprocessing
// 2. LLM invocation
// 3. Response postprocessing
// 4. Function call handling
// 5. Agent transfers
Source: packages/adk/src/flows/llm-flows/base-llm-flow.ts:36

Execution Loop

The flow runs in an iterative loop until reaching a final response:
async *runAsync(invocationContext: InvocationContext): AsyncGenerator<Event> {
  while (true) {
    let lastEvent: Event | undefined;
    for await (const event of this._runOneStepAsync(invocationContext)) {
      lastEvent = event;
      yield event;
    }

    if (!lastEvent || lastEvent.isFinalResponse()) {
      break; // Exit when conversation is complete
    }
  }
}
Each step consists of:
  • Preprocessing: Prepare LLM request with tools and context
  • LLM Call: Execute model with tracing
  • Postprocessing: Handle responses and function calls
The flow automatically handles tool execution, agent transfers, and multi-turn conversations until reaching a final response.
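A single iteration of this loop can be sketched as follows. The helper names and the minimal Event stand-in below are illustrative, not the library's actual API:

```typescript
// Minimal stand-in for the real ADK Event type (illustrative only).
interface Event {
  isFinalResponse(): boolean;
}

// Sketch of one step: preprocess the request, call the model, postprocess.
// The three callbacks are hypothetical placeholders for the real phases.
async function* runOneStep(
  preprocess: () => Promise<void>,
  callLlm: () => Promise<Event>,
  postprocess: (e: Event) => Promise<Event>,
): AsyncGenerator<Event> {
  await preprocess(); // apply request processors
  const raw = await callLlm(); // invoke the model with tracing
  yield await postprocess(raw); // apply response processors, handle tool calls
}
```

The outer loop shown above simply repeats this step and checks the last yielded event for finality.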

Request Processors

Request processors intercept and modify LLM requests before they’re sent to the model.

BaseLlmRequestProcessor

import { BaseLlmRequestProcessor } from '@iqai/adk';
import type { InvocationContext, LlmRequest, Event } from '@iqai/adk';

export class CustomRequestProcessor extends BaseLlmRequestProcessor {
  async *runAsync(
    invocationContext: InvocationContext,
    llmRequest: LlmRequest,
  ): AsyncGenerator<Event> {
    // Modify the request before it reaches the LLM
    llmRequest.config = llmRequest.config || {};
    llmRequest.config.temperature = 0.7;
    
    // Optionally yield events
    // yield someEvent;
  }
}
Source: packages/adk/src/flows/llm-flows/base-llm-processor.ts:9

Context Cache Processor Example

The built-in ContextCacheRequestProcessor demonstrates advanced request processing:
import { ContextCacheRequestProcessor } from '@iqai/adk';

// This processor:
// - Finds previous cache metadata from session events
// - Applies cache configuration to requests
// - Tracks token counts for optimization

const processor = new ContextCacheRequestProcessor();
Use Cases:
  • Adding authentication headers
  • Injecting context from memory
  • Applying rate limiting
  • Modifying model parameters
  • Cache management
Source: packages/adk/src/flows/llm-flows/context-cache-processor.ts:8
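As a sketch of the first use case, a request processor that attaches an authentication header might look like this. The types below are minimal stand-ins for the ADK classes, and the header handling is illustrative, not the library's API:

```typescript
// Minimal stand-ins for the ADK types (illustrative only).
interface LlmRequest {
  headers?: Record<string, string>;
}
interface Event {}

// Hypothetical processor that attaches a bearer token to every request.
class AuthHeaderProcessor {
  constructor(private readonly token: string) {}

  async *runAsync(_ctx: unknown, request: LlmRequest): AsyncGenerator<Event> {
    request.headers = request.headers ?? {};
    request.headers['Authorization'] = `Bearer ${this.token}`;
  }
}
```

Because the processor mutates the request in place, no events need to be yielded; the generator simply runs to completion.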

Response Processors

Response processors intercept and transform LLM responses after generation.

BaseLlmResponseProcessor

import { BaseLlmResponseProcessor } from '@iqai/adk';
import type { InvocationContext, LlmResponse, Event } from '@iqai/adk';

export class CustomResponseProcessor extends BaseLlmResponseProcessor {
  async *runAsync(
    invocationContext: InvocationContext,
    llmResponse: LlmResponse,
  ): AsyncGenerator<Event> {
    // Process the response
    if (llmResponse.content) {
      // Filter sensitive information
      // Add metadata
      // Transform content
    }
    
    // Yield custom events if needed
  }
}
Source: packages/adk/src/flows/llm-flows/base-llm-processor.ts:25

Use Cases

Filter or redact sensitive information from responses before they reach users.
class ContentFilterProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    if (response.content?.parts) {
      for (const part of response.content.parts) {
        if (part.text) {
          part.text = this.redactSensitiveData(part.text);
        }
      }
    }
  }
}
Validate responses against schemas or business rules.
class ValidationProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    const isValid = await this.validateResponse(response);
    if (!isValid) {
      // Retry or handle invalid response
      throw new Error('Invalid response format');
    }
  }
}
Collect custom metrics about response quality.
class MetricsProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    await this.recordMetrics({
      responseLength: response.text?.length,
      hasFunctionCalls: response.content?.parts?.some(p => p.functionCall),
      tokensUsed: response.usageMetadata?.totalTokenCount,
    });
  }
}

Registering Processors

Attach processors to your flow implementation:
import { SingleFlow } from '@iqai/adk';

const flow = new SingleFlow();

// Add request processors
flow.requestProcessors.push(
  new CustomRequestProcessor(),
  new AuthRequestProcessor(),
  new CacheRequestProcessor(),
);

// Add response processors
flow.responseProcessors.push(
  new ValidationProcessor(),
  new MetricsProcessor(),
  new ContentFilterProcessor(),
);

// Use with agent
const agent = new AgentBuilder()
  .withName('MyAgent')
  .withModel('gpt-4')
  .withFlow(flow)
  .buildLlm();
Processors run in the order they’re added. Request processors execute before the LLM call, while response processors execute after. Be mindful of dependencies between processors.
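Because ordering matters, a later processor sees the effects of an earlier one. A minimal, self-contained illustration (the stand-in type and processors below are hypothetical, not ADK classes):

```typescript
// Minimal stand-in for a request with model config (illustrative only).
interface LlmRequest {
  config: { temperature?: number };
}

// Two hypothetical processors: the first sets a default temperature,
// the second clamps it. Running the clamp before the default is set
// would have no effect, so registration order matters.
const setDefault = (r: LlmRequest) => {
  r.config.temperature ??= 0.9;
};
const clamp = (r: LlmRequest) => {
  r.config.temperature = Math.min(r.config.temperature ?? 0, 0.5);
};

const request: LlmRequest = { config: {} };
[setDefault, clamp].forEach((p) => p(request));
// temperature is now 0.5
```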

Processing Pipeline

The complete processing pipeline runs in this order: request processors modify the outgoing request, the LLM is invoked, response processors transform the result, function calls and agent transfers are handled, and the loop repeats until a final response is reached.

Advanced Patterns

Conditional Processing

class ConditionalProcessor extends BaseLlmRequestProcessor {
  async *runAsync(ctx, request) {
    request.config = request.config || {};

    // Only process for specific agents
    if (ctx.agent.name === 'AnalyticsAgent') {
      request.config.temperature = 0.1; // More deterministic
    }

    // Only cap output tokens in production
    if (process.env.NODE_ENV === 'production') {
      request.config.maxOutputTokens = 2000;
    }
  }
}

Stateful Processing

class RateLimitProcessor extends BaseLlmRequestProcessor {
  private requestCounts = new Map<string, number>();
  
  async *runAsync(ctx, request) {
    const userId = ctx.userId || 'anonymous';
    const count = this.requestCounts.get(userId) || 0;
    
    if (count >= 100) {
      throw new Error('Rate limit exceeded');
    }
    
    this.requestCounts.set(userId, count + 1);
  }
}

Event Generation

class AuditProcessor extends BaseLlmResponseProcessor {
  async *runAsync(ctx, response) {
    // Yield custom audit event
    const auditEvent = new Event({
      author: 'system',
      invocationId: ctx.invocationId,
      content: {
        parts: [{
          text: `Audit: Response generated with ${response.usageMetadata?.totalTokenCount} tokens`
        }]
      }
    });
    
    yield auditEvent;
  }
}

Built-in Processors

ADK-TS includes several built-in processors:
  • ContextCacheRequestProcessor (request): manages context caching across invocations
  • AuthRequestProcessor (request): handles authentication for tool calls
  • OutputSchemaProcessor (request): enforces structured output schemas
Check packages/adk/src/flows/llm-flows/ for all built-in processor implementations.

Best Practices

  1. Keep Processors Focused: Each processor should handle one concern
  2. Avoid Heavy Computation: Processors run on every request/response
  3. Handle Errors Gracefully: Don’t let processor failures break the flow
  4. Use Events Sparingly: Only yield events when necessary
  5. Test Processors Independently: Unit test processor logic separately
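Point 3 can be illustrated with a small wrapper that prevents a failing processor from breaking the flow. This is a minimal sketch with stand-in types, not the library's actual error-handling mechanism:

```typescript
// Minimal stand-in types (illustrative only).
interface Event {}
type Processor = {
  runAsync(ctx: unknown, payload: unknown): AsyncGenerator<Event>;
};

// Run a processor, but never let its failure break the flow:
// log the error and continue with the unmodified payload.
async function* runSafely(
  p: Processor,
  ctx: unknown,
  payload: unknown,
): AsyncGenerator<Event> {
  try {
    yield* p.runAsync(ctx, payload);
  } catch (err) {
    console.warn('processor failed, continuing:', err);
  }
}
```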

Next Steps

Planning

Learn about planning strategies for agentic workflows

Code Execution

Explore safe code execution environments

Build docs developers (and LLMs) love