ExecutionAgent extends BaseAgent and carries out the individual steps produced by a PlanningAgent. When tools are provided, it automatically builds an internal executor with LangGraph's create_react_agent; without tools it falls back to the plain invoke() chain.

Import

from agents import ExecutionAgent

Constructor

ExecutionAgent(
    llm: BaseChatModel,
    tools: List[Any] = [],
    system_prompt: str = "<diligent execution prompt>",  # see default below
    agent_name: str = "ExecutionAgent"
)
llm
BaseChatModel
required
The LangChain chat model to use for execution.
tools
List[Any]
default:"[]"
A list of LangChain-compatible tool objects to bind to the agent. When this list is non-empty, ExecutionAgent creates an internal agent_executor using LangGraph’s create_react_agent. When None or empty, the agent uses the plain prompt-chain path.
system_prompt
str
Override the default system instruction. The built-in prompt reads:
You are a diligent execution agent. Your task is to complete the given
action step as described, providing a clear and detailed output of your
work. If you are provided with context from previous steps, use it to
inform your output.
agent_name
str
default:"ExecutionAgent"
Display name used in log messages. Inherited from BaseAgent.
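The constructor's tool handling can be sketched as follows. This is an illustrative stand-in, not ExecutionAgent's actual source; `react_factory` stands in for LangGraph's real `create_react_agent` helper so the sketch stays self-contained.

```python
# Sketch of the constructor's tool-handling decision (illustrative only).
# The real class calls langgraph.prebuilt.create_react_agent when tools
# are supplied; here react_factory is any callable standing in for it.

def build_executor(llm, tools, react_factory):
    """Return a ReAct executor when the tools list is non-empty, else None."""
    if tools:
        return react_factory(llm, tools)
    # Empty (or None) tools: no agent_executor; the agent will use the
    # plain prompt-chain path instead.
    return None
```

With an empty list (or None), `build_executor` returns None, mirroring the plain prompt-chain path described above.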

Methods

execute_step

execute_step(step_description: str, context: str = "") -> str
Executes a single plan step. If context is provided it is appended to the prompt so the agent can reason about prior results. When an agent_executor exists (tools were supplied) the method uses it; on failure it falls back to invoke().
step_description
str
required
A plain-English description of the step to execute (typically one element from PlanningAgent.generate_plan()).
context
str
default:"(empty string)"
Accumulated output from previously executed steps. Pass this to allow the agent to build on earlier results.
Returns: str — the agent’s output for this step.
When tools are present, the system prompt is prepended directly to the user message to maintain compatibility across all LangGraph versions. If agent_executor.invoke() raises an exception the error is logged and the call falls through to the plain invoke() path.
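The prompt assembly and fallback behavior described above can be sketched in plain Python. This is a hedged illustration, not the library's source: the exact context-header wording is an assumption, and `agent_executor`/`plain_invoke` are stand-ins for the real LangGraph executor and BaseAgent.invoke() path.

```python
# Illustrative sketch of execute_step's two behaviors (not actual source):
# 1) appending prior-step context to the prompt, and
# 2) falling through to the plain invoke() path when the tool-backed
#    executor raises. The "Context from previous steps:" header is an
#    assumed wording for demonstration.

def assemble_prompt(step_description: str, context: str = "") -> str:
    """Append accumulated context to the step prompt when present."""
    prompt = step_description
    if context:
        prompt += f"\n\nContext from previous steps:\n{context}"
    return prompt

def run_step(agent_executor, plain_invoke, prompt: str) -> str:
    """Prefer the tool-backed executor; fall back to plain invoke on error."""
    if agent_executor is not None:
        try:
            return agent_executor(prompt)
        except Exception:
            pass  # the real implementation logs the error here
    return plain_invoke(prompt)
```

When `agent_executor` is None (no tools) or raises, the call lands on the plain invoke path, matching the fallback described above.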

invoke (inherited)

See BaseAgent.invoke().

Usage example

from langchain_ollama import ChatOllama
from agents import ExecutionAgent
from tools.curl_search_tool import CurlSearchTool

llm = ChatOllama(model="llama3")

# --- Without tools (plain LLM) ---
executor_plain = ExecutionAgent(llm=llm)

result = executor_plain.execute_step(
    step_description="Write a one-paragraph summary of the Rust programming language."
)
print(result)

# --- With tools (ReAct graph using CurlSearchTool) ---
curl_tool = CurlSearchTool()
executor_tools = ExecutionAgent(llm=llm, tools=[curl_tool])

result = executor_tools.execute_step(
    step_description="Find the capital city of Andorra.",
    context="We are verifying geographic facts."
)
print(result)
Chain multiple execute_step() calls together by passing the previous step’s return value as context for the next step.
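The chaining pattern above can be sketched as a simple loop. A stub executor is used here so the example is self-contained; any real ExecutionAgent with the same `execute_step` signature would drop in.

```python
# Chaining sketch: each step's output is accumulated and passed as context
# to the next step. StubExecutor is a hypothetical stand-in for a real
# ExecutionAgent, used only to keep this example self-contained.

class StubExecutor:
    def execute_step(self, step_description: str, context: str = "") -> str:
        return f"done: {step_description}"

def run_plan(executor, steps):
    """Execute steps in order, feeding prior output back in as context."""
    context = ""
    for step in steps:
        output = executor.execute_step(step, context=context)
        context += output + "\n"  # accumulate results for later steps
    return context

plan = ["Research topic", "Draft outline", "Write summary"]
print(run_plan(StubExecutor(), plan))
```

In practice, `steps` would come from PlanningAgent.generate_plan(), and the growing `context` string lets each step build on earlier results.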
