BaseAgent
BaseAgent is the foundation all other agents build on. It takes an LLM and a system_prompt, builds a ChatPromptTemplate internally, and exposes a single invoke(user_input) method.
The prompt and the model are chained with `|` (LCEL pipe syntax): the template formats the messages, the LLM receives them, and response.content is returned as a plain string.
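The pattern can be sketched in pure Python, with the LangChain pieces (ChatPromptTemplate, LCEL) replaced by a stub callable so the flow is visible; everything except invoke() and system_prompt is illustrative, not taken from the codebase:

```python
# Minimal sketch of the BaseAgent pattern: a system prompt plus an LLM,
# exposed through a single invoke() method. The real class composes a
# ChatPromptTemplate with the model via LCEL's `|` operator; here the
# LLM is a plain callable returning an object with a .content attribute.

class FakeResponse:
    def __init__(self, content):
        self.content = content

class BaseAgent:
    def __init__(self, llm, system_prompt):
        self.llm = llm                      # callable: messages -> FakeResponse
        self.system_prompt = system_prompt

    def invoke(self, user_input: str) -> str:
        # Equivalent of the ChatPromptTemplate formatting step.
        messages = [("system", self.system_prompt), ("user", user_input)]
        response = self.llm(messages)
        return response.content             # plain string, as described above

# Usage with a stub LLM that echoes the user message.
echo_llm = lambda msgs: FakeResponse(f"echo: {msgs[-1][1]}")
agent = BaseAgent(echo_llm, "You are a helpful assistant.")
print(agent.invoke("hello"))  # → echo: hello
```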
Specialized agents
- PlanningAgent
- ExecutionAgent
- MonitoringAgent
- LocalAgent
PlanningAgent breaks a high-level task into an ordered list of steps. Its system prompt instructs the LLM to return only a JSON object with a plan key. generate_plan() calls self.invoke(task), strips any markdown fences, then parses the JSON.

The Plan-Execute-Monitor loop
All three specialized agents participate in a recurring feedback loop that drives multi-step task completion:

1. Plan: PlanningAgent.generate_plan(task) converts a high-level objective into an ordered List[str] of step descriptions.
2. Execute: ExecutionAgent.execute_step(step, context) carries out the current step, optionally using tools via a ReAct loop. Accumulated results from previous steps are passed as context.
3. Monitor: MonitoringAgent.evaluate(objective, result) checks whether the step's output satisfied its objective and returns a structured {success, feedback} result.

Both SequentialWorkflow and LangGraphOrchestrator drive this loop; see Workflows and Orchestration for details on how each coordinates these agents.
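The loop's control flow can be sketched with stubbed agents standing in for the LLM-backed ones; the stub classes, the run() driver, and the max_retries policy are assumptions for illustration, not the repository's actual orchestration code:

```python
# Sketch of the Plan-Execute-Monitor loop. Each stub mimics the interface
# of its real counterpart (generate_plan / execute_step / evaluate) without
# calling an LLM, so the control flow is the focus.

class StubPlanner:
    def generate_plan(self, task):
        # Real version: LLM returns JSON {"plan": [...]}.
        return [f"research {task}", f"summarize {task}"]

class StubExecutor:
    def execute_step(self, step, context):
        # Real version: ReAct loop with optional tools; context carries
        # accumulated results from earlier steps.
        return f"done: {step} (context items: {len(context)})"

class StubMonitor:
    def evaluate(self, objective, result):
        # Real version: LLM returns {"success": ..., "feedback": ...}.
        return {"success": result.startswith("done"), "feedback": "ok"}

def run(task, planner, executor, monitor, max_retries=1):
    context, results = [], []
    for step in planner.generate_plan(task):
        for attempt in range(max_retries + 1):
            result = executor.execute_step(step, context)
            verdict = monitor.evaluate(step, result)
            if verdict["success"]:
                break
            # On failure, feed the monitor's feedback into the next attempt.
            context.append(f"feedback on '{step}': {verdict['feedback']}")
        context.append(result)
        results.append(result)
    return results

results = run("quantum computing", StubPlanner(), StubExecutor(), StubMonitor())
```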
Agent summary
| Agent | Description |
| --- | --- |
| BaseAgent | Foundation class. Takes llm + system_prompt. Exposes invoke(user_input). Uses ChatPromptTemplate + LCEL internally. |
| PlanningAgent | Extends BaseAgent. generate_plan(task) returns List[str] by parsing JSON {"plan": [...]} from the LLM. |
| ExecutionAgent | Extends BaseAgent. execute_step(step, context) runs steps with optional tools via create_react_agent. |
| MonitoringAgent | Extends BaseAgent. evaluate(objective, result) returns {success: bool, feedback: str} parsed from JSON. |
| LocalAgent | Standalone summarization agent. Same invoke() interface as BaseAgent. Designed for local LLMs (Ollama). |
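The JSON-parsing convention shared by PlanningAgent ({"plan": [...]}) and MonitoringAgent ({"success": ..., "feedback": ...}) can be sketched as below; the helper names are illustrative, not taken from the codebase:

```python
import json

# LLMs often wrap JSON replies in markdown fences (```json ... ```),
# so both agents strip fences before parsing.

def strip_fences(text: str) -> str:
    text = text.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        if lines[-1].strip() == "```":   # drop closing fence
            lines = lines[:-1]
        text = "\n".join(lines[1:])      # drop opening fence (e.g. ```json)
    return text.strip()

def parse_plan(raw: str) -> list:
    # PlanningAgent convention: {"plan": ["step 1", "step 2", ...]}
    return json.loads(strip_fences(raw))["plan"]

def parse_evaluation(raw: str) -> dict:
    # MonitoringAgent convention: {"success": bool, "feedback": str}
    data = json.loads(strip_fences(raw))
    return {"success": bool(data["success"]), "feedback": str(data["feedback"])}

raw = '```json\n{"plan": ["step 1", "step 2"]}\n```'
print(parse_plan(raw))  # → ['step 1', 'step 2']
```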