Simple Read Operation
Let’s trace a simple calendar listing request.

User Request
Execution Flow
plan_actions Node
LLM uses meta-tools to find and invoke the right tool:

Step 1: Discover tools
Step 2: Invoke tool (skips get_tool_schema for simple tools)

Updated State:
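The meta-tool discovery step above can be sketched in plain Python. The registry shape and the discover_tools/invoke_tool names are illustrative assumptions for this sketch; only get_tool_schema appears in the actual flow.

```python
# Illustrative sketch of the meta-tool pattern: instead of loading every tool
# schema into the prompt, the LLM calls small "meta-tools" to discover tools
# on demand. Registry shape and function names are assumptions.
TOOL_REGISTRY = {
    "calendar.list_events": {"description": "List calendar events", "simple": True},
    "calendar.delete_event": {"description": "Delete a calendar event", "simple": False},
}

def discover_tools(query: str) -> list:
    """Return tool names whose description matches the query keywords."""
    words = query.lower().split()
    return [
        name for name, meta in TOOL_REGISTRY.items()
        if any(w in meta["description"].lower() for w in words)
    ]

def get_tool_schema(name: str) -> dict:
    """Load the full schema; skipped for tools marked 'simple'."""
    return TOOL_REGISTRY[name]

def invoke_tool(name: str, args: dict) -> dict:
    # A real implementation would dispatch to the tool; here we stub it.
    return {"tool": name, "args": args, "status": "ok"}

# Step 1: discover. Step 2: invoke (schema lookup skipped for simple tools).
matches = discover_tools("list calendar events")
result = invoke_tool(matches[0], {"range": "today"})
```

Only the matching tool names (not their full schemas) ever reach the prompt, which is what keeps token usage low.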
route_execution Node
Checks execution mode:
- No confirmation needed (read operation)
- No dependencies between actions
execution_mode = "parallel"

execute_parallel Node

All tools were already executed in plan_actions, so this node:
- Checks the results dict
- Finds that all actions already have results
- Skips execution and passes the state through
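The pass-through behavior can be sketched as follows; the state shape and run_tool helper are assumptions for illustration, not the project's actual implementation.

```python
# Minimal sketch: execute_parallel checks the results dict and only runs
# actions that don't already have a result from plan_actions.
def run_tool(action: dict) -> dict:
    # Hypothetical runner; a real implementation dispatches to the tool.
    return {"status": "ok", "tool": action["tool"]}

def execute_parallel(state: dict) -> dict:
    for action in state["actions"]:
        if action["id"] in state["results"]:
            continue  # already executed during plan_actions, skip
        state["results"][action["id"]] = run_tool(action)
    return state

state = {
    "actions": [{"id": "a1", "tool": "calendar.list_events"}],
    "results": {"a1": {"status": "ok", "tool": "calendar.list_events"}},
}
# a1 already has a result, so the node passes the state through unchanged.
out = execute_parallel(state)
```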
Final Output
Write Operation with Confirmation
Let’s trace a delete operation that requires user confirmation.

User Request
Execution Flow
plan_actions Node
LLM realizes it needs to:
- Search for the event
- Delete it (requires confirmation)

Step 2: Search for John’s meeting
Step 3: Discover delete tools
Step 4: Mark delete for confirmation

Updated State:
confirm_actions Node (Interrupt)
Graph pauses and creates a confirmation message:

Graph returns to the user and waits for input.
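The pause-and-resume behavior can be sketched without the LangGraph interrupt API: the node marks the state as awaiting input and the graph stops; on resume, the user's answer decides whether the destructive action runs. The field names here are assumptions for this sketch.

```python
# Sketch of the confirmation interrupt: first pass pauses, second pass
# (after the user replies) proceeds or cancels. Field names are assumed.
def confirm_actions(state: dict) -> dict:
    pending = [a for a in state["actions"] if a.get("requires_confirmation")]
    if pending and state.get("user_confirmation") is None:
        state["status"] = "awaiting_confirmation"
        state["confirmation_message"] = (
            f"About to run {len(pending)} destructive action(s). Proceed? (yes/no)"
        )
        return state  # graph pauses here and returns to the user
    state["status"] = "confirmed" if state.get("user_confirmation") == "yes" else "cancelled"
    return state

state = {"actions": [{"id": "a2", "tool": "calendar.delete_event",
                      "requires_confirmation": True}]}
paused = confirm_actions(state)       # first pass: pause and ask
paused["user_confirmation"] = "yes"   # user replies
resumed = confirm_actions(paused)     # second pass: proceed with the delete
```

Because the state (including the pending confirmation) is checkpointed, the graph can resume exactly where it paused, even across process restarts.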
Final Output
Sequential Execution with Dependencies
Let’s trace a complex request that requires sequential tool execution.

User Request
Execution Flow
plan_actions Node
LLM plans a two-step workflow:
- List tasks
- Mark the oldest one complete (depends on step 1)

Step 2: Plan to mark oldest (t3) complete

LLM identifies t3 as the oldest and creates a dependent action:

Updated State:
route_execution Node
Checks for dependencies:
- Action a2 has depends_on: ["a1"]
execution_mode = "sequential"

execute_sequential Node
Executes actions in dependency order:

Topological sort: a1, then a2
Execute a1: Already has a result from plan_actions, so skip
Execute a2:

Updated State:
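The dependency ordering can be sketched with a simple topological sort over each action's depends_on list (the action shape is an assumption for this sketch):

```python
# Sketch of the ordering used by execute_sequential: repeatedly pick actions
# whose dependencies have all completed, until none remain.
def topo_order(actions: list) -> list:
    done = []
    remaining = {a["id"]: set(a.get("depends_on", [])) for a in actions}
    while remaining:
        ready = [aid for aid, deps in remaining.items() if deps <= set(done)]
        if not ready:
            raise ValueError("dependency cycle detected")
        for aid in ready:
            done.append(aid)
            del remaining[aid]
    return done

actions = [
    {"id": "a2", "tool": "tasks.complete", "depends_on": ["a1"]},
    {"id": "a1", "tool": "tasks.list", "depends_on": []},
]
order = topo_order(actions)  # a1 must come before a2
```

Python's standard library also offers graphlib.TopologicalSorter for this; the hand-rolled loop above just makes the cycle check explicit.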
Final Output
Multi-Turn Conversation
Let’s trace a conversation that requires clarification.

Turn 1: User Request
Execution Flow
plan_actions Node
LLM realizes it needs more information:
- When should the meeting be?
- How long?
Output
Turn 2: User Clarifies
Execution Flow
plan_actions Node (with context)
LLM now has full context from the conversation history:

Updated State:
- Previous message: “Schedule a meeting with Sarah”
- Current message: “Tomorrow at 3pm for 1 hour”
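The way the persisted message history lets turn 2 complete turn 1's request can be sketched as follows; the merge logic here is a deliberate simplification, not the project's actual planner.

```python
# Sketch: the planner sees the whole conversation, so the second turn's
# details ("Tomorrow at 3pm for 1 hour") attach to the first turn's intent
# ("Schedule a meeting with Sarah"). Field names are assumptions.
def plan_with_context(history: list) -> dict:
    request = " ".join(history)  # the planner is prompted with all turns
    return {
        "tool": "calendar.create_event",
        "args": {"summary": request},  # a real planner would extract fields
    }

history = ["Schedule a meeting with Sarah"]
# Turn 1: not enough detail, so the agent asks for time and duration.
history.append("Tomorrow at 3pm for 1 hour")
# Turn 2: the planner now sees both messages together.
plan = plan_with_context(history)
```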
Output
Parallel Execution
Let’s trace a request that can execute multiple tools in parallel.

User Request
Execution Flow
plan_actions Node
LLM identifies two independent operations:
- List calendar events
- List unread emails

Step 2: Discover and invoke email tool

Updated State:
route_execution Node
Checks for dependencies:
- Both actions have depends_on: []
- No confirmation needed

execution_mode = "parallel"

execute_parallel Node
All tools were already executed in plan_actions, so this node skips execution and passes the state through.

Final Output
State Persistence
All of these examples use LangGraph’s checkpointer to maintain state:

Multi-Turn Conversations
Conversation history persists across messages
Human-in-the-Loop
Graph can pause for confirmations and resume
Error Recovery
State can be rolled back if needed
Session Management
Each session has its own isolated state
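Session isolation can be illustrated with an in-memory stand-in for LangGraph's checkpointer (the class and its methods are assumptions for this sketch, not the LangGraph API):

```python
# Minimal sketch of session-scoped checkpointing: each session id maps to
# its own saved state, so conversations never leak into each other.
class InMemoryCheckpointer:
    def __init__(self):
        self._store = {}

    def save(self, session_id: str, state: dict) -> None:
        self._store[session_id] = dict(state)  # copy so sessions stay isolated

    def load(self, session_id: str) -> dict:
        return dict(self._store.get(session_id, {"messages": []}))

cp = InMemoryCheckpointer()
s1 = cp.load("session-1")
s1["messages"].append("Schedule a meeting with Sarah")
cp.save("session-1", s1)

# A second session starts with a clean, empty state.
s2 = cp.load("session-2")
```

LangGraph ships real checkpointer backends (in-memory, SQLite, Postgres) that play this role; the sketch only shows the contract: save and load keyed by session.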
Key Patterns
These examples demonstrate several key patterns:
- Meta-tools reduce tokens: Only load tool schemas when needed
- Confirmation flow: Destructive operations pause for approval
- Sequential execution: Handle dependencies between actions
- Parallel execution: Speed up independent operations
- Multi-turn conversations: Use conversation history for context
- Natural language parsing: Convert user input to structured tool calls
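The routing behind the confirmation, sequential, and parallel patterns above can be sketched as a single decision function (the field names are assumptions for this sketch):

```python
# Sketch of route_execution's decision: confirmation wins, then dependencies
# force sequential execution, otherwise everything can run in parallel.
def route_execution(state: dict) -> str:
    actions = state["actions"]
    if any(a.get("requires_confirmation") for a in actions):
        return "confirm_actions"
    if any(a.get("depends_on") for a in actions):
        return "sequential"
    return "parallel"

reads = {"actions": [{"id": "a1", "depends_on": []},
                     {"id": "a2", "depends_on": []}]}
delete = {"actions": [{"id": "a1", "requires_confirmation": True}]}
chained = {"actions": [{"id": "a1"}, {"id": "a2", "depends_on": ["a1"]}]}
```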
Next Steps
Architecture
Understand the three-layer architecture
Graph Reasoning
Deep dive into LangGraph state machine
Tools System
Learn about tool registration and meta-tools