Overview
The ChainExecutor trait enables multi-step LLM reasoning pipelines. A Chain is a sequence of steps that can:
- Call LLMs with context from previous steps
- Search memory for relevant data
- Transform and format data
- Emit events to the event bus
- Call registered tools with parameters
Source: crates/oneclaw-core/src/orchestrator/chain.rs
Core Types
Chain
A complete chain definition. Location: chain.rs:122-141
ChainStep
A single step in a chain. Location: chain.rs:57-119
llm()
Location: chain.rs:67-76
Creates an LLM call step with default max_tokens (300) and temperature (0.3).
memory_search()
Location: chain.rs:79-87
Creates a memory search step.
transform()
Location: chain.rs:90-97
Creates a transform/formatting step.
emit_event()
Location: chain.rs:100-107
Creates an event emission step.
tool_call()
Location: chain.rs:110-119
Creates a tool invocation step.
StepAction
Defines what a chain step does. Location: chain.rs:16-54
StepResult
Result of executing one step. Location: chain.rs:144-152
ChainResult
Result of executing an entire chain. Location: chain.rs:155-165
ChainContext
Context passed to chain execution - provides access to the LLM provider, memory, event bus, and tools. Location: chain.rs:168-179
ChainExecutor Trait
Core trait for executing chains. Location: chain.rs:182-186
Methods
execute()
Executes a chain with the given initial input and context.
Parameters:
- chain: &Chain - The chain to execute
- initial_input: &str - Initial input for the first step
- context: &ChainContext<'_> - Execution context with provider, memory, event bus, and tools

Returns:
- Result<ChainResult> - Results of all steps plus the final output and latency
DefaultChainExecutor
Default implementation that actually runs the chain's steps. Location: chain.rs:217-359
Constructor
Execution Logic
Location: chain.rs:229-359
The executor runs steps sequentially:
1. Initialize state:
   - current_input = the initial input
   - step_outputs = HashMap tracking each step's output
   - step_results = vector of StepResult
2. For each step:
   - Execute the action (LLM call, memory search, transform, event, or tool call)
   - Record the output and latency
   - Update current_input with the step's output
   - Store the output in step_outputs[index]
3. Return a ChainResult with all step results and the final output
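The sequential loop above can be sketched as follows. This is a simplified, synchronous stand-in: the real Chain, StepAction, and StepResult types in chain.rs carry more fields, and the step actions include LLM calls, memory searches, events, and tool calls rather than plain closures.

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real types (names are illustrative).
struct Step {
    name: String,
    run: Box<dyn Fn(&str) -> String>,
}

struct StepResult {
    name: String,
    output: String,
}

// Sequential execution: each step receives the previous step's output,
// and every output is also kept in step_outputs for {step_N} substitution.
fn execute(steps: &[Step], initial_input: &str) -> (Vec<StepResult>, String) {
    let mut current_input = initial_input.to_string();
    let mut step_outputs: HashMap<usize, String> = HashMap::new();
    let mut step_results = Vec::new();

    for (index, step) in steps.iter().enumerate() {
        let output = (step.run)(&current_input);
        step_results.push(StepResult {
            name: step.name.clone(),
            output: output.clone(),
        });
        step_outputs.insert(index, output.clone());
        current_input = output; // next step sees this step's output
    }

    (step_results, current_input)
}

fn main() {
    let steps = vec![
        Step { name: "upper".into(), run: Box::new(|s: &str| s.to_uppercase()) },
        Step { name: "bang".into(), run: Box::new(|s: &str| format!("{s}!")) },
    ];
    let (results, final_output) = execute(&steps, "hello");
    assert_eq!(results.len(), 2);
    assert_eq!(final_output, "HELLO!");
}
```

The key design point is that current_input threads through the chain while step_outputs keeps every intermediate result addressable by index.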
Step Execution Details
LLM Call Step
Location: chain.rs:246-264
- Substitutes {input} and {step_N} in the prompt template
- Calls the provider with the system prompt and the assembled prompt
- Returns the LLM response or an error message
- Handles offline mode gracefully
Memory Search Step
Location: chain.rs:266-286
- Substitutes variables in the query template
- Searches memory with the given limit
- Formats results with timestamp and content
- Returns a Vietnamese-language message if no results are found
Transform Step
Location: chain.rs:288-290
- Substitutes {input} and {step_N} placeholders
- Returns the formatted output
Emit Event Step
Location: chain.rs:292-299
- Creates event with chain/step metadata
- Publishes to event bus
- Passes current input through unchanged
Tool Call Step
Location: chain.rs:301-325
- Resolves template variables in all param values
- Executes tool from registry
- Returns tool output or error message
- Handles missing registry gracefully
Template Substitution
Location: chain.rs:364-370
Chains use template substitution to pass data between steps:
- {input} - Current input (the previous step's output)
- {step_0} - Output of step 0
- {step_1} - Output of step 1
- {step_N} - Output of step N
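The substitution can be sketched as below. This is a minimal illustration of the behavior described above, not the actual code at chain.rs:364-370, which may differ in detail.

```rust
use std::collections::HashMap;

// Replace {input} and every {step_N} placeholder in a template.
fn substitute(template: &str, input: &str, step_outputs: &HashMap<usize, String>) -> String {
    let mut result = template.replace("{input}", input);
    for (index, output) in step_outputs {
        // {{ and }} are escaped braces, so format! builds e.g. "{step_0}"
        result = result.replace(&format!("{{step_{index}}}"), output);
    }
    result
}

fn main() {
    let mut outputs = HashMap::new();
    outputs.insert(0, "disk usage at 91%".to_string());
    let prompt = substitute("Given {step_0}, summarize {input}", "today's alerts", &outputs);
    assert_eq!(prompt, "Given disk usage at 91%, summarize today's alerts");
}
```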
NoopChainExecutor
Location: chain.rs:189-214
No-operation executor that returns input as output.
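The shape of a no-op executor can be sketched with a simplified, synchronous trait; the real ChainExecutor trait takes a &Chain and a &ChainContext and returns a Result<ChainResult>, so this only illustrates the pass-through behavior.

```rust
// Simplified stand-in for the ChainExecutor trait.
trait Executor {
    fn execute(&self, initial_input: &str) -> String;
}

// Mirrors NoopChainExecutor's behavior: the input passes through unchanged.
struct NoopExecutor;

impl Executor for NoopExecutor {
    fn execute(&self, initial_input: &str) -> String {
        initial_input.to_string()
    }
}

fn main() {
    let exec = NoopExecutor;
    assert_eq!(exec.execute("hello"), "hello");
}
```

A no-op executor like this is useful as a default or in tests where running real LLM calls is undesirable.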
Usage Examples
Simple Analysis Chain
Alert Detection Chain
Tool Integration Chain
Multi-Step Data Pipeline
Offline Mode Handling
Performance Monitoring
Chain execution records per-step and total latency, providing detailed performance tracking.
Error Handling
Chains handle errors gracefully:
- LLM call fails: Returns [Error] Step 'name' failed: error_message
- Memory search fails: Returns [Memory error] error_message
- Tool call fails: Returns [Tool error] tool_name: error_message
- Tool not found: Returns [Tool error] tool_name: error_message
- No tool registry: Returns [No tools] Tool registry not available
- Offline mode: Returns [Offline] No provider configured
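As a hedged sketch, helper functions like the following would produce the documented error strings; the actual code in chain.rs may assemble them inline rather than through named helpers.

```rust
// Illustrative formatting for the error patterns documented above.
fn step_error(step_name: &str, err: &str) -> String {
    format!("[Error] Step '{step_name}' failed: {err}")
}

fn tool_error(tool_name: &str, err: &str) -> String {
    format!("[Tool error] {tool_name}: {err}")
}

fn main() {
    assert_eq!(
        step_error("summarize", "timeout"),
        "[Error] Step 'summarize' failed: timeout"
    );
    assert_eq!(
        tool_error("weather", "execution failed"),
        "[Tool error] weather: execution failed"
    );
}
```

Because errors are returned as step output strings rather than aborting the chain, downstream steps and callers can inspect or surface them without losing earlier results.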
See Also
- ModelRouter - Routes requests to appropriate models
- ContextManager - Assembles prompts with memory context
- Memory - Memory storage and search
- EventBus - Event publishing and subscription
- Tool - Tool registration and execution
- Provider - LLM provider abstraction