Agents are the core unit of intelligence in Agentic Patterns. Each agent wraps an LLM and a system prompt, exposing a consistent interface for workflows and orchestrators to invoke.

BaseAgent

BaseAgent is the foundation all other agents build on. It takes an LLM and a system_prompt, builds a ChatPromptTemplate internally, and exposes a single invoke(user_input) method.
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.prompts import ChatPromptTemplate

class BaseAgent:
    def __init__(
        self,
        llm: BaseChatModel,
        system_prompt: str = "You are a helpful AI assistant.",
        agent_name: str = "BaseAgent"
    ):
        self.llm = llm
        self.system_prompt = system_prompt
        self.agent_name = agent_name
        self.prompt_template = ChatPromptTemplate.from_messages([
            ("system", self.system_prompt),
            ("human", "{input}")
        ])

    def invoke(self, user_input: str, **kwargs) -> str:
        chain = self.prompt_template | self.llm
        response = chain.invoke({"input": user_input, **kwargs})
        return response.content
The chain is assembled with | (LCEL pipe syntax): the template formats the messages, the LLM receives them, and response.content is returned as a plain string.

Specialized agents

PlanningAgent breaks a high-level task into an ordered list of steps. Its system prompt instructs the LLM to return only a JSON object with a plan key.
from agents.planning_agent import PlanningAgent
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3")
planner = PlanningAgent(llm=llm)

steps = planner.generate_plan("Research the capital of Andorra and summarize findings.")
# steps -> ["Step 1: Search Wikipedia for Andorra", "Step 2: Extract capital city", ...]
generate_plan() calls self.invoke(task), strips any markdown fences, then parses the JSON:
import json
from typing import List

def generate_plan(self, task: str) -> List[str]:
    response_text = self.invoke(task)
    clean_text = response_text.replace("```json", "").replace("```", "").strip()
    try:
        parsed_data = json.loads(clean_text)
        return parsed_data.get("plan", [response_text])
    except json.JSONDecodeError:
        return [response_text]
If JSON parsing fails, the raw LLM response is returned as a single-element list so execution can still proceed.
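The cleaning step matters because chat models often wrap JSON in markdown fences even when instructed not to. A standalone sketch of just that cleaning logic, using a simulated LLM reply:

```python
import json

# Simulated fence-wrapped LLM output (hypothetical example).
raw = '```json\n{"plan": ["Step 1: Search Wikipedia", "Step 2: Extract capital"]}\n```'

# Strip the fences, then parse; fall back to the raw text if "plan" is absent.
clean = raw.replace("```json", "").replace("```", "").strip()
steps = json.loads(clean).get("plan", [raw])
print(steps)
# -> ['Step 1: Search Wikipedia', 'Step 2: Extract capital']
```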

The Plan-Execute-Monitor loop

All three specialized agents participate in a recurring feedback loop that drives multi-step task completion:
1. Plan: PlanningAgent.generate_plan(task) converts a high-level objective into an ordered List[str] of step descriptions.

2. Execute: ExecutionAgent.execute_step(step, context) carries out the current step, optionally using tools via a ReAct loop. Accumulated results from previous steps are passed as context.

3. Monitor: MonitoringAgent.evaluate(objective, result) checks whether the step’s output satisfied its objective and returns a structured {success, feedback} verdict.

4. Retry or advance: on failure, the feedback is appended to the context and the executor retries (up to max_retries). On success, the result is appended to the context and the loop advances to the next step.
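In outline, the loop can be sketched in plain Python, with hypothetical stub agents standing in for the real classes (the stub names and failure pattern are illustrative only; the real workflows add logging and richer state):

```python
def plan_execute_monitor(planner, executor, monitor, task, max_retries=2):
    """Sketch of the Plan-Execute-Monitor loop, not the real workflow classes."""
    context = []
    for step in planner.generate_plan(task):
        for _attempt in range(max_retries + 1):
            result = executor.execute_step(step, context="\n".join(context))
            verdict = monitor.evaluate(step, result)  # {"success": ..., "feedback": ...}
            if verdict["success"]:
                context.append(result)  # record the result, advance to next step
                break
            context.append(f"Feedback: {verdict['feedback']}")  # retry with feedback
    return context

# Hypothetical stubs to exercise the control flow without an LLM.
class StubPlanner:
    def generate_plan(self, task):
        return ["Step 1: research", "Step 2: summarize"]

class StubExecutor:
    def execute_step(self, step, context):
        return f"result of {step}"

class StubMonitor:
    def __init__(self):
        self.calls = 0
    def evaluate(self, objective, result):
        self.calls += 1
        # Fail the very first evaluation to exercise the retry path.
        if self.calls == 1:
            return {"success": False, "feedback": "be more specific"}
        return {"success": True, "feedback": "ok"}

context = plan_execute_monitor(StubPlanner(), StubExecutor(), StubMonitor(), "demo")
print(context)
# -> ['Feedback: be more specific', 'result of Step 1: research', 'result of Step 2: summarize']
```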
This loop is implemented by both SequentialWorkflow and LangGraphOrchestrator — see Workflows and Orchestration for details on how each coordinates these agents.

Agent summary

BaseAgent

Foundation class. Takes llm + system_prompt. Exposes invoke(user_input). Uses ChatPromptTemplate + LCEL internally.

PlanningAgent

Extends BaseAgent. generate_plan(task) returns List[str] by parsing JSON {"plan": [...]} from the LLM.

ExecutionAgent

Extends BaseAgent. execute_step(step, context) runs steps with optional tools via create_react_agent.

MonitoringAgent

Extends BaseAgent. evaluate(objective, result) returns {success: bool, feedback: str} parsed from JSON.
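The parsing behind evaluate() isn't shown above, but given the {success, feedback} contract it can be sketched like this (a hypothetical helper mirroring generate_plan's fence-stripping, not the library's actual code):

```python
import json

def parse_verdict(response_text: str) -> dict:
    """Hypothetical sketch: parse a {"success": ..., "feedback": ...} LLM reply."""
    clean = response_text.replace("```json", "").replace("```", "").strip()
    try:
        data = json.loads(clean)
        return {"success": bool(data.get("success", False)),
                "feedback": str(data.get("feedback", ""))}
    except json.JSONDecodeError:
        # Unparseable output counts as a failed check; raw text becomes feedback.
        return {"success": False, "feedback": response_text}

verdict = parse_verdict('{"success": true, "feedback": "Step objective met."}')
print(verdict)
# -> {'success': True, 'feedback': 'Step objective met.'}
```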

LocalAgent

Standalone summarization agent. Same invoke() interface as BaseAgent. Designed for local LLMs (Ollama).
