Overview

LangGraphAgent implements a directed graph of agents, allowing you to build sophisticated workflows with:
  • Conditional branching based on state or events
  • Loops and cyclic execution patterns
  • Multiple entry and exit points
  • Dynamic routing between agents
  • Max steps protection against infinite loops

Basic Usage

import { AgentBuilder, LlmAgent, type LangGraphNode } from '@iqai/adk';

const startAgent = new LlmAgent({
  name: 'start',
  description: 'Starts the workflow',
  instruction: 'Begin processing',
  model: 'gemini-2.5-flash'
});

const processAgent = new LlmAgent({
  name: 'process',
  description: 'Processes data',
  instruction: 'Process the data',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'start',
    agent: startAgent,
    targets: ['process']
  },
  {
    name: 'process',
    agent: processAgent
  }
];

const { runner } = await AgentBuilder
  .create('simple-graph')
  .asLangGraph(nodes, 'start')
  .build();

const result = await runner.ask('Process this data');

Creating a LangGraphAgent

Using AgentBuilder

const nodes: LangGraphNode[] = [
  { name: 'node1', agent: agent1, targets: ['node2'] },
  { name: 'node2', agent: agent2, targets: ['node3'] },
  { name: 'node3', agent: agent3 }
];

const { runner } = await AgentBuilder
  .create('graph-workflow')
  .asLangGraph(nodes, 'node1') // Start at node1
  .build();

Direct Construction

import { LangGraphAgent, type LangGraphNode } from '@iqai/adk';

const workflow = new LangGraphAgent({
  name: 'complex-workflow',
  description: 'A graph-based workflow',
  nodes: nodes,
  rootNode: 'start',
  maxSteps: 50 // Prevent infinite loops
});

Configuration

interface LangGraphAgentConfig {
  // Required
  name: string;
  description: string;
  nodes: LangGraphNode[];
  rootNode: string;
  
  // Optional
  maxSteps?: number; // Default: 50
}

Node Configuration

interface LangGraphNode {
  // Required
  name: string;              // Unique identifier
  agent: BaseAgent;          // Agent to execute at this node
  
  // Optional
  targets?: string[];        // Names of nodes to consider next
  condition?: (              // Function to determine if this node should execute
    lastEvent: Event,
    context: InvocationContext
  ) => boolean | Promise<boolean>;
}

Node Properties

// Unique identifier for the node
{
  name: 'validation_step',
  agent: validatorAgent
}

Conditional Branching

Basic Branching

const classifier = new LlmAgent({
  name: 'classifier',
  description: 'Classifies input',
  instruction: 'Classify as urgent or normal',
  outputKey: 'priority',
  model: 'gemini-2.5-flash'
});

const urgentHandler = new LlmAgent({
  name: 'urgent',
  description: 'Handles urgent cases',
  instruction: 'Handle urgent case',
  model: 'gpt-4'
});

const normalHandler = new LlmAgent({
  name: 'normal',
  description: 'Handles normal cases',
  instruction: 'Handle normal case',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'classify',
    agent: classifier,
    targets: ['urgent', 'normal']
  },
  {
    name: 'urgent',
    agent: urgentHandler,
    condition: async (lastEvent, context) => {
      const priority = context.state.get('priority', '');
      return priority.toLowerCase().includes('urgent');
    }
  },
  {
    name: 'normal',
    agent: normalHandler,
    condition: async (lastEvent, context) => {
      const priority = context.state.get('priority', '');
      return priority.toLowerCase().includes('normal');
    }
  }
];

const { runner } = await AgentBuilder
  .create('priority-routing')
  .asLangGraph(nodes, 'classify')
  .build();

Multi-Way Branching

const router = new LlmAgent({
  name: 'router',
  description: 'Routes to appropriate handler',
  instruction: 'Determine request type: technical, billing, or general',
  outputKey: 'request_type',
  model: 'gemini-2.5-flash'
});

const technicalAgent = new LlmAgent({
  name: 'technical',
  description: 'Handles technical issues',
  model: 'gpt-4'
});

const billingAgent = new LlmAgent({
  name: 'billing',
  description: 'Handles billing issues',
  model: 'gemini-2.5-flash'
});

const generalAgent = new LlmAgent({
  name: 'general',
  description: 'Handles general inquiries',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'route',
    agent: router,
    targets: ['technical', 'billing', 'general']
  },
  {
    name: 'technical',
    agent: technicalAgent,
    condition: async (lastEvent, context) => {
      const type = context.state.get('request_type', '');
      return type.includes('technical');
    }
  },
  {
    name: 'billing',
    agent: billingAgent,
    condition: async (lastEvent, context) => {
      const type = context.state.get('request_type', '');
      return type.includes('billing');
    }
  },
  {
    name: 'general',
    agent: generalAgent,
    condition: async (lastEvent, context) => {
      const type = context.state.get('request_type', '');
      return type.includes('general');
    }
  }
];

Loops and Cycles

Simple Loop

const processor = new LlmAgent({
  name: 'processor',
  description: 'Processes data',
  instruction: 'Process the data and increment counter',
  outputKey: 'iterations',
  model: 'gemini-2.5-flash'
});

const checker = new LlmAgent({
  name: 'checker',
  description: 'Checks if done',
  instruction: 'Check if iterations >= 3',
  outputKey: 'done',
  model: 'gemini-2.5-flash'
});

const completer = new LlmAgent({
  name: 'completer',
  description: 'Completes workflow',
  instruction: 'Finalize the result',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'process',
    agent: processor,
    targets: ['check']
  },
  {
    name: 'check',
    agent: checker,
    targets: ['process', 'complete'] // Can loop back or exit
  },
  {
    name: 'complete',
    agent: completer,
    condition: async (lastEvent, context) => {
      const iterations = context.state.get('iterations', 0);
      return iterations >= 3; // Exit condition
    }
  }
];

const { runner } = await AgentBuilder
  .create('loop-workflow')
  .asLangGraph(nodes, 'process')
  .withQuickSession({ state: { iterations: 0 } })
  .build();

Conditions on the check node's targets control whether the loop continues: the condition on the complete node determines when to exit, while the unconditioned process node is taken otherwise, looping back.
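The loop-back can also be made explicit by giving the process node a condition that is the inverse of the exit check. A minimal sketch of such a predicate, written standalone here for illustration (in the graph it would read from the real `InvocationContext` state; the `State` type and `makeState` helper below are stand-ins, not library APIs):

```typescript
// Stand-in for the slice of context.state that a condition reads;
// the real state exposes get(key, default) in the same way.
type State = { get(key: string, def: number): number };

// Continue looping while fewer than 3 iterations have completed —
// the inverse of the `complete` node's exit condition.
const shouldContinue = (state: State): boolean =>
  state.get('iterations', 0) < 3;

// Tiny in-memory stand-in for session state, for testing the predicate:
const makeState = (data: Record<string, number>): State => ({
  get: (key, def) => data[key] ?? def,
});
```

Pairing an exit condition with its explicit inverse keeps the routing unambiguous when both targets are eligible.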

Retry Pattern

const executor = new LlmAgent({
  name: 'executor',
  description: 'Executes task',
  instruction: 'Execute the task',
  outputKey: 'result',
  model: 'gemini-2.5-flash'
});

const validator = new LlmAgent({
  name: 'validator',
  description: 'Validates result',
  instruction: 'Validate the result. Set success=true if valid.',
  outputKey: 'success',
  model: 'gemini-2.5-flash'
});

const success = new LlmAgent({
  name: 'success',
  description: 'Success handler',
  instruction: 'Handle successful result',
  model: 'gemini-2.5-flash'
});

const failure = new LlmAgent({
  name: 'failure',
  description: 'Failure handler',
  instruction: 'Handle failed result',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'execute',
    agent: executor,
    targets: ['validate']
  },
  {
    name: 'validate',
    agent: validator,
    targets: ['execute', 'success', 'failure']
  },
  {
    name: 'success',
    agent: success,
    condition: async (lastEvent, context) => {
      return context.state.get('success', false) === true;
    }
  },
  {
    name: 'failure',
    agent: failure,
    condition: async (lastEvent, context) => {
      const attempts = context.state.get('attempts', 0);
      return attempts >= 3; // Max retries
    }
  }
];

Max Steps Protection

Prevent infinite loops with maxSteps:
const workflow = new LangGraphAgent({
  name: 'workflow',
  description: 'Workflow with protection',
  nodes: nodes,
  rootNode: 'start',
  maxSteps: 50 // Stops after 50 node executions
});

// Update max steps
workflow.setMaxSteps(100);

// Get current max steps
const max = workflow.getMaxSteps();

Always set maxSteps when using loops or cycles. The default is 50; adjust it based on your workflow's complexity.

Error Handling

const processor = new LlmAgent({
  name: 'processor',
  description: 'Processes data',
  instruction: 'Process data',
  outputKey: 'result',
  model: 'gemini-2.5-flash'
});

const validator = new LlmAgent({
  name: 'validator',
  description: 'Validates result',
  instruction: 'Validate result. Set error=true if invalid.',
  outputKey: 'error',
  model: 'gemini-2.5-flash'
});

const errorHandler = new LlmAgent({
  name: 'error_handler',
  description: 'Handles errors',
  instruction: 'Handle the error',
  model: 'gemini-2.5-flash'
});

const successHandler = new LlmAgent({
  name: 'success_handler',
  description: 'Handles success',
  instruction: 'Finalize success',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  {
    name: 'process',
    agent: processor,
    targets: ['validate']
  },
  {
    name: 'validate',
    agent: validator,
    targets: ['error', 'success']
  },
  {
    name: 'error',
    agent: errorHandler,
    condition: async (lastEvent, context) => {
      return context.state.get('error', false) === true;
    }
  },
  {
    name: 'success',
    agent: successHandler,
    condition: async (lastEvent, context) => {
      return context.state.get('error', false) === false;
    }
  }
];

State Management

State is shared across all nodes in the graph:
const collector = new LlmAgent({
  name: 'collector',
  description: 'Collects data',
  instruction: 'Collect data points',
  outputKey: 'data_points',
  model: 'gemini-2.5-flash'
});

const analyzer = new LlmAgent({
  name: 'analyzer',
  description: 'Analyzes data',
  instruction: 'Analyze: {data_points}',
  outputKey: 'analysis',
  model: 'gemini-2.5-flash'
});

const reporter = new LlmAgent({
  name: 'reporter',
  description: 'Creates report',
  instruction: 'Create report from: {analysis}',
  model: 'gemini-2.5-flash'
});

const nodes: LangGraphNode[] = [
  { name: 'collect', agent: collector, targets: ['analyze'] },
  { name: 'analyze', agent: analyzer, targets: ['report'] },
  { name: 'report', agent: reporter }
];

const { runner } = await AgentBuilder
  .create('data-pipeline')
  .asLangGraph(nodes, 'collect')
  .withQuickSession({ state: {} })
  .build();

Accessing Graph Information

import { LangGraphAgent } from '@iqai/adk';

const { agent } = await AgentBuilder
  .create('graph')
  .asLangGraph(nodes, 'start')
  .build();

// Cast to access LangGraphAgent methods
const graphAgent = agent as LangGraphAgent;

// Get all nodes
const allNodes = graphAgent.getNodes();
// Returns: LangGraphNode[]

// Get specific node
const node = graphAgent.getNode('process');
// Returns: LangGraphNode | undefined

// Get root node name
const rootName = graphAgent.getRootNodeName();
// Returns: string

// Get max steps
const maxSteps = graphAgent.getMaxSteps();
// Returns: number

// Update max steps
graphAgent.setMaxSteps(100);

Execution Results

import { LangGraphAgent } from '@iqai/adk';

const graphAgent = agent as LangGraphAgent;

// Execute workflow
await runner.ask('Process this');

// Get execution history
const results = graphAgent.getExecutionResults();
// Returns: Array<{ node: string; events: Event[] }>

for (const { node, events } of results) {
  console.log(`Node ${node} generated ${events.length} events`);
}

// Clear execution history
graphAgent.clearExecutionHistory();

Complex Example: Order Processing

import { AgentBuilder, LlmAgent, type LangGraphNode } from '@iqai/adk';
import { z } from 'zod';

// Define agents
const validateOrder = new LlmAgent({
  name: 'validate',
  description: 'Validates order',
  instruction: 'Validate the order data',
  outputKey: 'validation',
  outputSchema: z.object({
    isValid: z.boolean(),
    errors: z.array(z.string())
  }),
  model: 'gemini-2.5-flash'
});

const checkInventory = new LlmAgent({
  name: 'inventory',
  description: 'Checks inventory',
  instruction: 'Check if items are in stock',
  outputKey: 'inventory',
  model: 'gemini-2.5-flash'
});

const processPayment = new LlmAgent({
  name: 'payment',
  description: 'Processes payment',
  instruction: 'Process the payment',
  outputKey: 'payment_status',
  model: 'gemini-2.5-flash'
});

const fulfillOrder = new LlmAgent({
  name: 'fulfill',
  description: 'Fulfills order',
  instruction: 'Fulfill the order',
  model: 'gemini-2.5-flash'
});

const handleError = new LlmAgent({
  name: 'error',
  description: 'Handles errors',
  instruction: 'Handle order error',
  model: 'gemini-2.5-flash'
});

const notifyCustomer = new LlmAgent({
  name: 'notify',
  description: 'Notifies customer',
  instruction: 'Send notification to customer',
  model: 'gemini-2.5-flash'
});

// Define graph
const nodes: LangGraphNode[] = [
  {
    name: 'validate',
    agent: validateOrder,
    targets: ['inventory', 'error']
  },
  {
    name: 'inventory',
    agent: checkInventory,
    condition: async (lastEvent, context) => {
      const validation = context.state.get('validation');
      return validation?.isValid === true;
    },
    targets: ['payment', 'error']
  },
  {
    name: 'payment',
    agent: processPayment,
    condition: async (lastEvent, context) => {
      const inventory = context.state.get('inventory');
      return inventory?.inStock === true;
    },
    targets: ['fulfill', 'error']
  },
  {
    name: 'fulfill',
    agent: fulfillOrder,
    condition: async (lastEvent, context) => {
      const payment = context.state.get('payment_status');
      return payment?.success === true;
    },
    targets: ['notify']
  },
  {
    name: 'notify',
    agent: notifyCustomer
  },
  {
    name: 'error',
    agent: handleError,
    condition: async (lastEvent, context) => {
      const validation = context.state.get('validation');
      const inventory = context.state.get('inventory');
      const payment = context.state.get('payment_status');
      
      return (
        validation?.isValid === false ||
        inventory?.inStock === false ||
        payment?.success === false
      );
    }
  }
];

// Create workflow
const { runner } = await AgentBuilder
  .create('order-processing')
  .asLangGraph(nodes, 'validate')
  .withQuickSession({ state: {} })
  .build();

// Process order
const result = await runner.ask(JSON.stringify({
  items: ['item1', 'item2'],
  customer: '[email protected]',
  payment: { method: 'card', amount: 99.99 }
}));

Graph Validation

LangGraph validates the graph structure on construction:
// This will throw an error
const nodes: LangGraphNode[] = [
  {
    name: 'node1',
    agent: agent1,
    targets: ['nonexistent'] // Error: target doesn't exist
  }
];

const workflow = new LangGraphAgent({
  name: 'invalid',
  description: 'Invalid graph',
  nodes,
  rootNode: 'missing' // Error: root node doesn't exist
});

Validation checks:
  • All target nodes exist
  • Root node exists in nodes array
  • No duplicate node names
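The checks above can be sketched as a standalone function. This is an illustration of what construction-time validation conceptually does, not the library's actual implementation; the `NodeSpec` type below is a pared-down stand-in for `LangGraphNode`:

```typescript
// Minimal node shape for this sketch (mirrors LangGraphNode's
// name/targets fields; the agent is irrelevant to validation).
type NodeSpec = { name: string; targets?: string[] };

function validateGraph(nodes: NodeSpec[], rootNode: string): string[] {
  const errors: string[] = [];
  const names = nodes.map((n) => n.name);
  const nameSet = new Set(names);

  // No duplicate node names
  if (nameSet.size !== names.length) {
    errors.push('duplicate node names');
  }
  // Root node exists in the nodes array
  if (!nameSet.has(rootNode)) {
    errors.push(`root node "${rootNode}" not found`);
  }
  // All target nodes exist
  for (const node of nodes) {
    for (const target of node.targets ?? []) {
      if (!nameSet.has(target)) {
        errors.push(`node "${node.name}" targets missing node "${target}"`);
      }
    }
  }
  return errors;
}
```

Running this against the invalid graph above would surface both problems: the dangling `'nonexistent'` target and the missing `'missing'` root.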

Best Practices

1. Design Clear Conditions

Make conditions explicit and robust:

// Good
condition: async (lastEvent, context) => {
  const value = context.state.get('status');
  if (!value || typeof value !== 'string') {
    return false;
  }
  return value === 'success';
}

// Less robust
condition: async (lastEvent, context) => {
  return context.state.get('status') === 'success';
}

2. Use Descriptive Node Names

// Good
const nodes = [
  { name: 'validate_input', agent: validator },
  { name: 'process_payment', agent: processor },
  { name: 'handle_payment_error', agent: errorHandler }
];

// Less clear
const nodes = [
  { name: 'step1', agent: validator },
  { name: 'step2', agent: processor },
  { name: 'error', agent: errorHandler }
];

3. Always Set Max Steps

// Good - prevents infinite loops
const workflow = new LangGraphAgent({
  name: 'workflow',
  description: 'Safe workflow',
  nodes,
  rootNode: 'start',
  maxSteps: 50
});

4. Use Output Keys for Data Flow

const step1 = new LlmAgent({
  name: 'step1',
  description: 'Step 1',
  outputKey: 'step1_result', // Save to state
  model: 'gemini-2.5-flash'
});

const step2 = new LlmAgent({
  name: 'step2',
  description: 'Step 2',
  instruction: 'Use data: {step1_result}', // Read from state
  model: 'gemini-2.5-flash'
});

5. Document Complex Conditions

{
  name: 'retry',
  agent: retryAgent,
  condition: async (lastEvent, context) => {
    // Retry if:
    // 1. Not successful
    // 2. Fewer than 3 attempts
    // 3. Error is retryable
    const success = context.state.get('success', false);
    const attempts = context.state.get('attempts', 0);
    const errorType = context.state.get('errorType', '');

    return (
      !success &&
      attempts < 3 &&
      ['timeout', 'network'].includes(errorType)
    );
  }
}

Comparison with Other Workflows

| Feature    | LangGraph         | Sequential | Parallel         | Loop         |
|------------|-------------------|------------|------------------|--------------|
| Branching  | Yes               | No         | No               | No           |
| Loops      | Yes               | No         | No               | Yes          |
| Conditions | Yes               | No         | No               | Via escalate |
| Complexity | High              | Low        | Low              | Medium       |
| Use Case   | Complex workflows | Pipelines  | Concurrent tasks | Iteration    |

Next Steps

Workflow Agents

Overview of all workflow patterns

State Management

Managing state in workflows

Examples

LangGraph examples

LoopAgent

Simpler iterative patterns
