
Agents

Agents are the core building blocks of ADK-TS. They encapsulate the logic for processing user input, making decisions, calling tools, and generating responses using Large Language Models (LLMs).

Agent Hierarchy

ADK-TS uses a hierarchical agent architecture where agents can contain sub-agents, enabling complex multi-agent systems:
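Conceptually, the hierarchy is a tree of named agents, and an agent anywhere in the tree can be located by name. A minimal sketch in plain TypeScript (illustrative only, not the library's actual types):

```typescript
// Conceptual sketch of an agent tree (not the actual ADK-TS types).
interface AgentNode {
  name: string;
  description: string;
  subAgents: AgentNode[];
}

// Depth-first search for an agent by name anywhere in the tree.
function findAgent(root: AgentNode, name: string): AgentNode | undefined {
  if (root.name === name) return root;
  for (const child of root.subAgents) {
    const found = findAgent(child, name);
    if (found) return found;
  }
  return undefined;
}

const tree: AgentNode = {
  name: "customer_service",
  description: "Coordinator",
  subAgents: [
    { name: "billing", description: "Billing questions", subAgents: [] },
    { name: "tech_support", description: "Technical issues", subAgents: [] },
  ],
};

const found = findAgent(tree, "billing"); // found?.name === "billing"
```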

BaseAgent

All agents extend from BaseAgent, which provides the foundation for agent lifecycle management, sub-agent hierarchy, and callback systems.
// Simplified shape of BaseAgent (the full class ships in @iqai/adk)
export abstract class BaseAgent {
  /**
   * The agent's name.
   * Must be a valid identifier and unique within the agent tree.
   * Cannot be "user" (reserved for end-user input).
   */
  name: string;

  /**
   * Description of the agent's capability.
   * Used by models to determine whether to delegate control.
   */
  description: string;

  /**
   * The parent agent of this agent.
   */
  parentAgent?: BaseAgent;

  /**
   * The sub-agents of this agent.
   */
  subAgents: BaseAgent[];

  /**
   * Callback invoked before the agent runs.
   */
  beforeAgentCallback?: BeforeAgentCallback;

  /**
   * Callback invoked after the agent runs.
   */
  afterAgentCallback?: AfterAgentCallback;
}
An agent can only be added as a sub-agent once. To reuse agent logic, create multiple instances with different names.
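The single-parent rule can be illustrated with a small sketch (a hypothetical helper, not the library's implementation): attaching an agent that already has a parent fails, so reuse means constructing a second instance.

```typescript
// Conceptual sketch of the single-parent invariant (not the ADK-TS implementation).
class SketchAgent {
  parentAgent?: SketchAgent;
  subAgents: SketchAgent[] = [];
  constructor(public name: string) {}

  addSubAgent(child: SketchAgent): void {
    if (child.parentAgent) {
      throw new Error(`Agent "${child.name}" already has a parent`);
    }
    child.parentAgent = this;
    this.subAgents.push(child);
  }
}

const root = new SketchAgent("root");
const worker = new SketchAgent("worker");
root.addSubAgent(worker); // ok: worker now belongs to root

// To reuse the same logic under another parent, create a new instance:
const workerB = new SketchAgent("worker_b");
new SketchAgent("other_root").addSubAgent(workerB); // ok
```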

LlmAgent

LlmAgent is the primary implementation most developers will use. It extends BaseAgent with LLM-specific capabilities:
import { LlmAgent } from "@iqai/adk";

const agent = new LlmAgent({
  name: "customer_support",
  model: "gpt-4o",
  description: "Handles customer support inquiries",
  instruction: "You are a friendly customer support agent. Be helpful and concise.",
  tools: [searchKnowledgeBase, createTicket],
  subAgents: [billingAgent, technicalAgent],
});

Key Features

LlmAgent seamlessly integrates with tools, automatically handling function calling:
import { LlmAgent, HttpRequestTool, FileOperationsTool } from "@iqai/adk";

const agent = new LlmAgent({
  name: "assistant",
  model: "gemini-2.5-flash",
  tools: [
    new HttpRequestTool(),
    new FileOperationsTool(),
    customTool,
  ],
});
Enable agents to maintain persistent knowledge across conversations:
import { MemoryService, InMemoryStorageProvider } from "@iqai/adk";

const memoryService = new MemoryService({
  storage: new InMemoryStorageProvider(),
});

const agent = new LlmAgent({
  name: "smart_assistant",
  model: "gpt-4o",
  memoryService, // Automatically retrieves relevant memories
});
Track conversation state and history:
import { InMemorySessionService } from "@iqai/adk";

const sessionService = new InMemorySessionService();

const agent = new LlmAgent({
  name: "chatbot",
  model: "gemini-2.5-flash",
});

// Sessions are managed by the Runner
const session = await sessionService.createSession("my-app", "user-123");
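Under the hood, a session service is essentially a store of per-user conversation records keyed by app and user. A plain TypeScript sketch of that idea (illustrative only, not `InMemorySessionService` itself):

```typescript
// Conceptual in-memory session store (illustrative, not the ADK-TS implementation).
interface SketchSession {
  id: string;
  appName: string;
  userId: string;
  state: Map<string, unknown>; // conversation-scoped key/value state
  history: string[];           // simplified message history
}

class SketchSessionStore {
  private sessions = new Map<string, SketchSession>();
  private counter = 0;

  createSession(appName: string, userId: string): SketchSession {
    const id = `session-${++this.counter}`;
    const session: SketchSession = {
      id, appName, userId, state: new Map(), history: [],
    };
    this.sessions.set(id, session);
    return session;
  }

  getSession(id: string): SketchSession | undefined {
    return this.sessions.get(id);
  }
}

const store = new SketchSessionStore();
const demoSession = store.createSession("my-app", "user-123");
demoSession.history.push("Hello!");
```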

Agent Types

ADK-TS provides multiple specialized agent types for different use cases:

LoopAgent

Executes an agent iteratively with planning capabilities:
import { LoopAgent } from "@iqai/adk";

const loopAgent = new LoopAgent({
  name: "task_planner",
  agent: llmAgent,
  maxIterations: 5,
  planner: customPlanner,
});
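The core control flow of a bounded loop, stripped of the library machinery, is: run a step, feed the output back in, and stop on completion or when `maxIterations` is reached. A conceptual sketch (not the `LoopAgent` internals):

```typescript
// Conceptual sketch of bounded iterative execution (not LoopAgent internals).
type StepFn = (input: string) => { output: string; done: boolean };

function runLoop(step: StepFn, input: string, maxIterations: number): string {
  let current = input;
  for (let i = 0; i < maxIterations; i++) {
    const { output, done } = step(current);
    current = output;
    if (done) break; // the step signalled completion
  }
  return current;    // result after at most maxIterations steps
}

// Example step: append "!" until the input is at least 4 characters long.
const result = runLoop(
  (s) => ({ output: s + "!", done: s.length >= 4 }),
  "hi",
  5,
);
```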

ParallelAgent

Executes multiple agents concurrently:
import { ParallelAgent } from "@iqai/adk";

const parallelAgent = new ParallelAgent({
  name: "multi_search",
  subAgents: [webSearchAgent, databaseAgent, apiAgent],
  description: "Searches multiple sources simultaneously",
});
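The fan-out pattern behind concurrent execution can be sketched with `Promise.all`: every agent starts immediately and results come back in the agents' order. Illustrative only, not the `ParallelAgent` internals:

```typescript
// Conceptual sketch: fan out a query to several "agents" concurrently.
type AsyncAgent = (query: string) => Promise<string>;

async function runParallel(agents: AsyncAgent[], query: string): Promise<string[]> {
  // All calls begin before any is awaited; results preserve input order.
  return Promise.all(agents.map((agent) => agent(query)));
}

// Stand-in agents for illustration.
const webSearch: AsyncAgent = async (q) => `web:${q}`;
const dbSearch: AsyncAgent = async (q) => `db:${q}`;
```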

SequentialAgent

Executes agents in sequence, passing each agent's output as input to the next:
import { SequentialAgent } from "@iqai/adk";

const pipeline = new SequentialAgent({
  name: "data_pipeline",
  subAgents: [
    dataCollectionAgent,
    dataProcessingAgent,
    reportGenerationAgent,
  ],
});
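The pipeline semantics reduce to function composition: each stage receives the previous stage's output. A conceptual sketch (not the `SequentialAgent` internals):

```typescript
// Conceptual sketch: each stage's output feeds the next stage.
type Stage = (input: string) => string;

function runSequential(stages: Stage[], input: string): string {
  return stages.reduce((acc, stage) => stage(acc), input);
}

// Stand-in stages for illustration.
const collect: Stage = (s) => s + " -> collected";
const transform: Stage = (s) => s + " -> processed";
const report: Stage = (s) => s + " -> reported";

const out = runSequential([collect, transform, report], "raw");
```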

LangGraphAgent

Graph-based workflow agent for complex control flow:
import { LangGraphAgent } from "@iqai/adk";

const graphAgent = new LangGraphAgent({
  name: "workflow",
  nodes: [
    { name: "analyze", agent: analyzeAgent },
    { name: "decide", agent: decisionAgent },
    { name: "execute", agent: executionAgent },
  ],
  rootNode: "analyze",
});

AgentBuilder (Fluent API)

The AgentBuilder provides a fluent interface for creating agents:
import { AgentBuilder } from "@iqai/adk";

// Simple usage
const response = await AgentBuilder
  .withModel("gpt-4o")
  .ask("What's the weather like?");

// Complex usage
const agent = new AgentBuilder()
  .withName("advanced_assistant")
  .withModel("gemini-2.5-flash")
  .withDescription("An advanced AI assistant")
  .withInstruction("Be helpful and accurate")
  .withTools([tool1, tool2])
  .withSubAgents([subAgent1, subAgent2])
  .buildLlm();
Use AgentBuilder for quick prototyping and simple use cases. Use direct agent constructors for production applications requiring fine-grained control.

Agent Callbacks

Agents support lifecycle callbacks for custom logic:
const agent = new LlmAgent({
  name: "monitored_agent",
  model: "gpt-4o",
  
  // Runs before agent execution
  beforeAgentCallback: async (ctx) => {
    console.log("Agent starting:", ctx.agent.name);
    
    // Return content to skip agent execution
    if (ctx.state.get("bypass")) {
      return { parts: [{ text: "Bypassed" }] };
    }
  },
  
  // Runs after agent execution
  afterAgentCallback: async (ctx) => {
    console.log("Agent finished:", ctx.agent.name);
    
    // Modify or replace response
    const response = ctx.lastResponse;
    return response;
  },
});

Multi-Agent Coordination

Agents can transfer control to sub-agents based on the conversation:
const rootAgent = new LlmAgent({
  name: "customer_service",
  model: "gpt-4o",
  description: "Main customer service coordinator",
  subAgents: [
    new LlmAgent({
      name: "billing_specialist",
      model: "gpt-4o",
      description: "Handles billing and payment questions",
    }),
    new LlmAgent({
      name: "tech_support",
      model: "gpt-4o",
      description: "Provides technical support",
    }),
  ],
});
The LLM automatically determines when to transfer to a sub-agent based on the description field. Make descriptions clear and specific.

Best Practices

  1. Unique Names: Ensure agent names are unique within your agent tree and follow identifier naming rules (alphanumeric and underscores only).
  2. Clear Descriptions: Write one-line, descriptive summaries that help the LLM decide when to delegate to sub-agents.
  3. Tool Organization: Group related tools together and assign them to the appropriate agent level.
  4. State Management: Use session state for conversation-specific data and memory services for long-term knowledge.
  5. Error Handling: Implement callbacks to handle errors gracefully and provide fallback responses.
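For the error-handling practice above, one common shape is a wrapper that catches a failed agent run and substitutes a safe fallback response. A hedged sketch with a hypothetical helper (not an ADK-TS API):

```typescript
// Conceptual sketch of graceful fallback around an agent call (hypothetical helper).
async function runWithFallback(
  run: () => Promise<string>,
  fallback: string,
): Promise<string> {
  try {
    return await run();
  } catch (err) {
    // Log the failure and return a safe response instead of surfacing the error.
    console.error("Agent failed:", err);
    return fallback;
  }
}
```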
Next Steps

  • Models - Learn about LLM providers and model configuration
  • Tools - Extend agent capabilities with tools
  • Sessions - Manage conversation state
  • Memory - Implement persistent knowledge
