
Flows

Flows orchestrate the complex request-response lifecycle in ADK-TS, managing the sequence of operations from user input through LLM processing to final response generation. The flow system provides a modular, extensible pipeline for preprocessing, LLM invocation, and postprocessing.

Flow Architecture

BaseLlmFlow

All flows extend from BaseLlmFlow, which defines the core lifecycle:
import {
  BaseLlmRequestProcessor,
  BaseLlmResponseProcessor,
  InvocationContext,
  Event,
  LlmRequest,
} from "@iqai/adk";

export abstract class BaseLlmFlow {
  /** Request processors (run before the LLM call) */
  requestProcessors: BaseLlmRequestProcessor[] = [];

  /** Response processors (run after the LLM call) */
  responseProcessors: BaseLlmResponseProcessor[] = [];

  /**
   * Main execution method
   */
  async *runAsync(
    invocationContext: InvocationContext,
  ): AsyncGenerator<Event> {
    // Loop until final response
    while (true) {
      let lastEvent: Event | undefined;

      // Run one step, tracking the last event yielded
      for await (const event of this._runOneStepAsync(invocationContext)) {
        lastEvent = event;
        yield event;
      }

      // Check if done
      if (!lastEvent || lastEvent.isFinalResponse()) {
        break;
      }
    }
  }

  /**
   * Run a single processing step
   */
  async *_runOneStepAsync(
    invocationContext: InvocationContext,
  ): AsyncGenerator<Event> {
    const llmRequest = new LlmRequest();

    // 1. Preprocessing
    yield* this._preprocessAsync(invocationContext, llmRequest);

    // 2. LLM call
    const response = await this.llm.generateContentAsync(llmRequest);

    // 3. Postprocessing
    yield* this._postprocessAsync(invocationContext, response);
  }
}
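
Internally, the pre- and post-processing phases just iterate the registered processors in order, forwarding any events they yield. A minimal self-contained sketch of that iteration (`FlowEvent`, `LlmRequestStub`, and `RequestProcessorStub` are simplified stand-ins, not the real ADK-TS types):

```typescript
// Simplified stand-ins for the ADK-TS types, for illustration only.
interface FlowEvent { author: string; text: string }
interface LlmRequestStub { instructions: string[] }
interface RequestProcessorStub {
  runAsync(req: LlmRequestStub): AsyncGenerator<FlowEvent, void, unknown>;
}

// The preprocessing phase: run each processor in registration order,
// forwarding any events it yields to the caller.
async function* preprocess(
  processors: RequestProcessorStub[],
  req: LlmRequestStub,
): AsyncGenerator<FlowEvent, void, unknown> {
  for (const processor of processors) {
    yield* processor.runAsync(req);
  }
}

// Two toy processors: both mutate the request; the second also yields an event.
const timestampProcessor: RequestProcessorStub = {
  async *runAsync(req) {
    req.instructions.push(`Current time: ${new Date().toISOString()}`);
  },
};
const styleProcessor: RequestProcessorStub = {
  async *runAsync(req) {
    req.instructions.push("Be concise.");
    yield { author: "system", text: "preprocessing complete" };
  },
};

async function demoPreprocess(): Promise<{ instructions: string[]; events: string[] }> {
  const req: LlmRequestStub = { instructions: [] };
  const events: string[] = [];
  for await (const ev of preprocess([timestampProcessor, styleProcessor], req)) {
    events.push(ev.text);
  }
  return { instructions: req.instructions, events };
}
```

Note the ordering guarantee this gives: `styleProcessor` sees the request only after `timestampProcessor` has already mutated it.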

Flow Types

SingleFlow

Handles single-agent scenarios:
import { SingleFlow } from "@iqai/adk";

// Used automatically when agent has no sub-agents
const agent = new LlmAgent({
  name: "simple_agent",
  model: "gpt-4o",
  // Uses SingleFlow internally
});

AutoFlow

Manages multi-agent coordination and transfers:
import { AutoFlow } from "@iqai/adk";

// Used automatically when agent has sub-agents
const agent = new LlmAgent({
  name: "coordinator",
  model: "gpt-4o",
  subAgents: [agent1, agent2],
  // Uses AutoFlow internally for agent transfers
});

Request Processors

Request processors build up the LlmRequest before it is sent to the model:
import { BaseLlmRequestProcessor, InvocationContext, LlmRequest, Event } from "@iqai/adk";

class CustomRequestProcessor extends BaseLlmRequestProcessor {
  async *runAsync(
    invocationContext: InvocationContext,
    llmRequest: LlmRequest,
  ): AsyncGenerator<Event, void, unknown> {
    // Add custom system instructions
    llmRequest.appendInstructions([
      `Current time: ${new Date().toISOString()}`,
      `User timezone: ${invocationContext.state.get("timezone", "UTC")}`,
    ]);

    // Modify tools based on context
    const tools = this.getContextualTools(invocationContext);
    llmRequest.appendTools(tools);

    // Add custom content
    llmRequest.contents.push({
      role: "user",
      parts: [{ text: "Additional context..." }],
    });

    // Most processors don't yield events
  }
}

Built-in Request Processors

  1. BasicProcessor - Sets up base request structure
  2. InstructionsProcessor - Adds system instructions
  3. IdentityProcessor - Injects agent identity information
  4. ContentsProcessor - Adds conversation history
  5. NaturalLanguagePlanningProcessor - Adds planning prompts
  6. CodeExecutionProcessor - Prepares code execution context
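
Because these processors run in sequence, each one sees the request exactly as the earlier ones left it. The pipeline can be pictured as a fold over the request; in this sketch the stage comments mirror the list above, but the request shape and stage bodies are invented for illustration:

```typescript
// Invented request shape for illustration; the real LlmRequest is richer.
type RequestDraft = { system: string[]; contents: string[] };

// Each built-in processor becomes a plain function here.
const pipeline: Array<(req: RequestDraft) => void> = [
  (req) => { req.system.push("Base request structure"); },      // BasicProcessor
  (req) => { req.system.push("System instructions"); },         // InstructionsProcessor
  (req) => { req.system.push("Agent identity: coordinator"); }, // IdentityProcessor
  (req) => { req.contents.push("user: Hello!"); },              // ContentsProcessor
];

function buildRequest(): RequestDraft {
  const req: RequestDraft = { system: [], contents: [] };
  for (const stage of pipeline) stage(req); // sequential: order matters
  return req;
}
```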

Response Processors

Response processors handle LLM output and generate events:
import { BaseLlmResponseProcessor, InvocationContext, LlmResponse, Event } from "@iqai/adk";

class CustomResponseProcessor extends BaseLlmResponseProcessor {
  async *runAsync(
    invocationContext: InvocationContext,
    llmResponse: LlmResponse,
  ): AsyncGenerator<Event, void, unknown> {
    // Extract and process response
    const text = llmResponse.text;

    // Check for special patterns
    if (text?.includes("[ACTION:")) {
      const action = this.extractAction(text);
      yield new Event({
        author: invocationContext.agent.name,
        content: { parts: [{ text: `Executing ${action}` }] },
      });
    }

    // Transform response
    const processedResponse = this.transform(llmResponse);
    
    yield new Event({
      author: invocationContext.agent.name,
      content: processedResponse.content,
    });
  }
}

Built-in Response Processors

  1. FunctionsProcessor - Handles tool/function calls
  2. AgentTransferProcessor - Manages sub-agent transfers
  3. NaturalLanguagePlanningProcessor - Processes planning responses
  4. CodeExecutionProcessor - Handles code execution results

InvocationContext

The context object passed through the flow pipeline:
interface InvocationContext {
  /** Unique invocation ID */
  invocationId: string;
  
  /** Current agent */
  agent: BaseAgent;
  
  /** Session being processed */
  session: Session;
  
  /** Session state */
  state: State;
  
  /** Memory service (if configured) */
  memoryService?: MemoryService;
  
  /** Artifact service (if configured) */
  artifactService?: BaseArtifactService;
  
  /** Branch for multi-agent isolation */
  branch: string;
  
  /** Flag to end invocation early */
  endInvocation?: boolean;
  
  /** Cost tracking */
  totalCost?: number;
}

Processing Pipeline

The complete flow through a single step: request processors run in order and build up the LlmRequest; the model is invoked once; then response processors run in order, turning the LlmResponse into yielded events. If any processor sets endInvocation, the remaining stages are skipped.

Custom Flow Example

Create a specialized flow for specific use cases:
import { BaseLlmFlow, InvocationContext, Event, LlmRequest } from "@iqai/adk";

class ValidationFlow extends BaseLlmFlow {
  async *runAsync(invocationContext: InvocationContext): AsyncGenerator<Event> {
    const llmRequest = new LlmRequest();

    // Custom preprocessing
    llmRequest.appendInstructions([
      "Validate user input before processing",
      "Check for malicious content",
      "Verify data format",
    ]);

    // Add validation tools
    llmRequest.appendTools([
      this.validationTool,
      this.sanitizationTool,
    ]);

    // Run validation
    yield new Event({
      author: invocationContext.agent.name,
      content: { parts: [{ text: "Validating input..." }] },
    });

    // Call LLM
    const responses = invocationContext.agent.llm.generateContentAsync(
      llmRequest,
      true
    );

    for await (const response of responses) {
      // Custom response handling
      if (this.isValid(response)) {
        yield* this.processValidResponse(response, invocationContext);
      } else {
        yield* this.handleInvalidResponse(response, invocationContext);
      }
    }
  }
}

Streaming Events

Flows yield events as they’re generated:
import { Runner } from "@iqai/adk";

const runner = new Runner({
  appName: "my-app",
  agent,
  sessionService,
});

// Events stream in real-time
for await (const event of runner.runAsync({
  userId: "user-123",
  sessionId: session.id,
  newMessage: { parts: [{ text: "Hello!" }] },
})) {
  // Event from preprocessing
  if (event.author === "system") {
    console.log("System:", event.text);
  }
  
  // Event from LLM
  if (event.author === agent.name) {
    console.log("Agent:", event.text);
  }
  
  // Event from function call
  if (event.getFunctionCalls().length > 0) {
    console.log("Calling tools:", event.getFunctionCalls());
  }
}

Flow Control

Early Termination

class ConditionalFlow extends BaseLlmFlow {
  async *_preprocessAsync(
    invocationContext: InvocationContext,
    llmRequest: LlmRequest,
  ): AsyncGenerator<Event> {
    // Check condition
    if (invocationContext.state.get("bypass_llm")) {
      // Set flag to skip LLM call
      invocationContext.endInvocation = true;
      
      yield new Event({
        author: invocationContext.agent.name,
        content: { parts: [{ text: "Using cached response" }] },
      });
      return;
    }

    // Continue normal processing
    yield* super._preprocessAsync(invocationContext, llmRequest);
  }
}

Multi-Step Processing

Flows automatically loop until a final response:
// Step 1: Agent calls search_tool
// Step 2: search_tool returns results
// Step 3: Agent synthesizes final answer
// Flow stops when isFinalResponse() returns true

Function Call Flow

When the LLM calls a function, the FunctionsProcessor executes the matching tool, wraps the result as a function-response event, and the flow loops for another step so the model can use the result. The loop ends once the model replies without requesting further function calls.
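
This cycle can be simulated end to end with a stub model and a stub tool. Everything here (`stubModel`, the `search_tool` registry entry, the message shapes) is invented for illustration and is not the ADK-TS API:

```typescript
// Invented shapes for the sketch; not the real ADK-TS types.
type ChatMessage = { role: "user" | "tool"; text: string };
type ModelTurn = { functionCall?: { name: string; args: string }; text?: string };

// Stub "model": requests a search on the first turn, answers once it
// sees a tool result in the history.
function stubModel(history: ChatMessage[]): ModelTurn {
  const hasToolResult = history.some((m) => m.role === "tool");
  return hasToolResult
    ? { text: "The capital of France is Paris." }
    : { functionCall: { name: "search_tool", args: "capital of France" } };
}

// Stub tool registry.
const toolRegistry: Record<string, (args: string) => string> = {
  search_tool: (args) => `Results for "${args}": Paris`,
};

// The flow loop: call the model, execute any function call, feed the
// result back as a tool message, and stop on a plain-text response.
function runFunctionCallLoop(userText: string): { steps: number; answer: string } {
  const history: ChatMessage[] = [{ role: "user", text: userText }];
  for (let step = 1; ; step++) {
    const turn = stubModel(history);
    if (turn.functionCall) {
      const result = toolRegistry[turn.functionCall.name](turn.functionCall.args);
      history.push({ role: "tool", text: result });
      continue;
    }
    return { steps: step, answer: turn.text ?? "" };
  }
}
```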

Best Practices

  1. Processor Order: Request processors run sequentially; order matters for dependencies.
  2. Event Yielding: Yield events for user feedback during long operations.
  3. Error Handling: Wrap processor logic in try-catch to prevent pipeline failures.
  4. Context Preservation: Don’t mutate InvocationContext; use it read-only except for specific flags.
  5. Streaming: Design processors to work with partial responses during streaming.
  6. Performance: Keep processors lightweight; heavy operations should be in tools.
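
Point 3 can be implemented as a small wrapper that converts a processor failure into an error event instead of letting it abort the whole pipeline. `withErrorBoundary` is a made-up helper shown with simplified types, not part of ADK-TS:

```typescript
// Simplified types for the sketch.
interface PipelineEvent { author: string; text: string }
type Processor = (input: string) => AsyncGenerator<PipelineEvent, void, unknown>;

// Made-up helper: a thrown error becomes an error event rather than
// propagating out of the pipeline.
function withErrorBoundary(name: string, processor: Processor): Processor {
  return async function* (input) {
    try {
      yield* processor(input);
    } catch (err) {
      yield { author: "system", text: `${name} failed: ${(err as Error).message}` };
    }
  };
}

// A processor that throws on empty input.
const strictProcessor: Processor = async function* (input) {
  if (input.length === 0) throw new Error("empty input");
  yield { author: "agent", text: `processed: ${input}` };
};

// Drain a processor's events into an array of text.
async function collectTexts(p: Processor, input: string): Promise<string[]> {
  const out: string[] = [];
  for await (const ev of p(input)) out.push(ev.text);
  return out;
}
```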

Extending Flows

Register custom processors:
import { LlmAgent } from "@iqai/adk";

class MyAgent extends LlmAgent {
  protected override createFlow(): BaseLlmFlow {
    const flow = super.createFlow();
    
    // Add custom request processor
    flow.requestProcessors.push(new CustomRequestProcessor());
    
    // Add custom response processor
    flow.responseProcessors.push(new CustomResponseProcessor());
    
    return flow;
  }
}

Related Topics
  • Agents - Agents execute flows
  • Models - LLMs called within flows
  • Tools - Tools invoked by response processors
  • Sessions - Sessions flow through the pipeline
