Agentic Patterns gives you composable building blocks for constructing multi-step AI agent pipelines. Whether you need a simple plan-and-execute loop or a full LangGraph-powered state machine with automatic retry logic, you can assemble production-ready agentic systems from well-defined, interchangeable components.

Quickstart

Install dependencies and run your first agentic workflow in minutes.

Configuration

Configure LLMs, memory, agents, and tools for your environment.

What is Agentic Patterns?

Agentic Patterns is a Python library (agenticpatterns) that layers three architectural ideas on top of LangChain and LangGraph:
  1. Role-specific agents — discrete classes for planning, execution, monitoring, and local summarization.
  2. Workflow patterns — sequential and parallel orchestration strategies that coordinate agents.
  3. Supporting infrastructure — SQLite short-term memory with exact-match caching, context compression, and web search via curl.
All components are designed to work together or independently. You can drop a PlanningAgent into any LangChain chain, use SQLiteShortTermMemory for session persistence in a chatbot, or run the full LangGraphOrchestrator as a self-correcting pipeline.

Key architectural components

Planning agent

PlanningAgent breaks a high-level task into a structured JSON plan — a list of sequential steps passed to the executor.
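The plan is plain JSON, so the executor only needs to parse it into ordered steps. A minimal sketch, assuming a `{"steps": [{"description": ...}]}` schema (the exact field names are an assumption, not the library's documented format):

```python
import json

# Hypothetical plan schema: the source says the planner emits a JSON list
# of sequential steps; the "steps"/"description" keys are illustrative.
def parse_plan(raw: str) -> list[str]:
    """Parse the planner's raw JSON output into an ordered list of steps."""
    plan = json.loads(raw)
    return [step["description"] for step in plan["steps"]]

raw = '{"steps": [{"description": "Search for sources"}, {"description": "Summarize findings"}]}'
steps = parse_plan(raw)  # -> ["Search for sources", "Summarize findings"]
```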

Execution agent

ExecutionAgent carries out individual plan steps. It optionally wraps a LangGraph ReAct agent when tools such as CurlSearchTool are provided.

Monitoring agent

MonitoringAgent evaluates each step’s output against its original objective and returns structured JSON feedback with a success flag.
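Because the feedback is structured JSON, the orchestrator can branch on it programmatically. A sketch of reading such a verdict, assuming `"success"` and `"feedback"` keys (illustrative names, not the library's schema):

```python
import json

# Assumed feedback schema: the source only guarantees structured JSON
# with a success flag; the key names below are placeholders.
def evaluate(raw_feedback: str) -> tuple[bool, str]:
    """Return (success, feedback text) from the monitor's JSON output."""
    data = json.loads(raw_feedback)
    return bool(data.get("success")), data.get("feedback", "")

ok, note = evaluate('{"success": false, "feedback": "Answer missing citations"}')
```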

Local agent

LocalAgent is a lightweight summarization agent that aggressively condenses context to save tokens for downstream steps.

Sequential workflow

SequentialWorkflow runs planning → execution → monitoring in order, threading the output of each step as context into the next.
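The core of the sequential pattern fits in a short loop. Below is a minimal sketch with stand-in callables instead of the real agents (function and key names are illustrative, not the library's API):

```python
# Sequential plan -> execute -> monitor, threading each step's output
# into the next step as context.
def run_sequential(task, plan, execute, monitor):
    context = ""
    results = []
    for step in plan(task):
        output = execute(step, context)   # prior output threads in as context
        verdict = monitor(step, output)
        results.append({"step": step, "output": output, "ok": verdict})
        context = output                   # becomes context for the next step
    return results

results = run_sequential(
    "write a haiku",
    plan=lambda t: ["draft", "refine"],
    execute=lambda s, ctx: f"{s} done (ctx={ctx!r})",
    monitor=lambda s, out: True,
)
```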

Parallel workflow

ParallelWorkflow executes independent tasks concurrently using a thread pool, returning all results when the batch completes.
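The fan-out/fan-in behavior maps directly onto Python's standard thread pool. A sketch of the pattern (the `worker` and `max_workers` names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Independent tasks fan out to a thread pool; results are collected
# only when the whole batch has finished.
def run_parallel(tasks, worker, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, tasks))  # preserves task order

squares = run_parallel([1, 2, 3], worker=lambda n: n * n)  # -> [1, 4, 9]
```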

LangGraph orchestrator

LangGraphOrchestrator builds a LangGraph StateGraph that coordinates planner, executor, and monitor nodes with configurable retry logic.
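The retry behavior can be pictured as a plain loop before it is encoded as graph nodes and edges. A sketch under that simplification (the real orchestrator uses a LangGraph StateGraph; `max_retries` is an assumed configuration name):

```python
# Execute a step, let the monitor judge it, and retry on failure up to
# a configurable limit.
def run_with_retry(step, execute, monitor, max_retries=2):
    for attempt in range(max_retries + 1):
        output = execute(step, attempt)
        if monitor(step, output):          # monitor approves -> done
            return {"output": output, "attempts": attempt + 1}
    return {"output": output, "attempts": max_retries + 1, "failed": True}

result = run_with_retry(
    "step",
    execute=lambda s, a: f"try {a}",
    monitor=lambda s, out: out == "try 1",  # succeeds on the second attempt
)
```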

Short-term memory

SQLiteShortTermMemory persists session messages locally and provides exact-match cache lookups to skip redundant LLM calls.
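An exact-match cache over SQLite is small enough to sketch in full; the table and column names below are assumptions, not the library's actual schema:

```python
import sqlite3

# Minimal exact-match cache in the spirit of SQLiteShortTermMemory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (prompt TEXT PRIMARY KEY, answer TEXT)")

def cached_answer(prompt):
    """Return a stored answer for this exact prompt, or None on a miss."""
    row = conn.execute("SELECT answer FROM cache WHERE prompt = ?", (prompt,)).fetchone()
    return row[0] if row else None

def store_answer(prompt, answer):
    conn.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (prompt, answer))

store_answer("capital of France?", "Paris")
hit = cached_answer("capital of France?")   # exact match -> skip the LLM call
miss = cached_answer("capital of Spain?")   # miss -> would fall through to the LLM
```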

CompressContextTool

Strips whitespace, removes common filler words, and truncates text locally — no LLM call required — before injecting context into prompts.
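The three operations described above compose naturally. A sketch of such local compression, where the filler-word list and character limit are illustrative choices rather than the tool's actual configuration:

```python
import re

# Placeholder filler-word set; the real tool's list is not documented here.
FILLERS = {"basically", "actually", "really", "very"}

def compress(text: str, max_chars: int = 200) -> str:
    """Collapse whitespace, drop filler words, and truncate -- no LLM call."""
    text = re.sub(r"\s+", " ", text).strip()
    words = [w for w in text.split(" ") if w.lower() not in FILLERS]
    return " ".join(words)[:max_chars]

out = compress("This is   basically a very \n long   answer.")
# -> "This is a long answer."
```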

CurlSearchTool

Queries the Wikipedia OpenSearch API via curl and returns the top three snippets, giving agents access to factual web content.
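Wikipedia's OpenSearch endpoint is a real public API; the sketch below builds the curl invocation for it, though the exact flags and result parsing the library uses are assumptions:

```python
import urllib.parse

def build_search_command(query: str, limit: int = 3) -> list[str]:
    """Build a curl command for Wikipedia's OpenSearch API (top `limit` hits)."""
    params = urllib.parse.urlencode({
        "action": "opensearch",
        "search": query,
        "limit": limit,
        "format": "json",
    })
    url = f"https://en.wikipedia.org/w/api.php?{params}"
    return ["curl", "-s", url]  # -s silences curl's progress output

cmd = build_search_command("LangGraph")
```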

Local and cloud LLM support

Agentic Patterns works with any BaseChatModel from LangChain. Two configurations are used out of the box:
  • Local (Ollama): ChatOllama connects to a locally running Ollama server. Set LOCAL_MODEL and OLLAMA_HOST in your .env file.
  • Cloud (OpenAI-compatible): ChatOpenAI with a custom base_url connects to any OpenAI-compatible endpoint such as Routeway or LLM API. Set SUMMARY_HOST, SUMMARY_MODEL, and SUMMARY_AGENT_API_KEY.
You can mix both: the main orchestration pipeline can use a fast local model while the monitoring agent uses a more capable cloud model.
Agents are LLM-agnostic. Any BaseChatModel instance — including Anthropic, Groq, or Mistral — can be passed to any agent constructor.
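The environment variables named above might be laid out like this in a .env file (all values below are placeholders, not recommendations):

```shell
# Local Ollama model for the main pipeline
LOCAL_MODEL=your-local-model
OLLAMA_HOST=http://localhost:11434

# OpenAI-compatible cloud endpoint for the summary/monitoring agent
SUMMARY_HOST=https://your-endpoint.example.com/v1
SUMMARY_MODEL=your-cloud-model
SUMMARY_AGENT_API_KEY=your-api-key
```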

How it fits together

Task
  └─► PlanningAgent        → produces a list of steps
        └─► ExecutionAgent  → executes each step (optionally with tools)
              └─► MonitoringAgent → validates output; retries on failure
                    └─► SQLiteShortTermMemory → caches the final answer
The LangGraphOrchestrator encodes this loop as a LangGraph state machine. SequentialWorkflow implements the same pattern in plain Python. Both expose a single .run(task=...) method and return structured result dictionaries. Get started with the Quickstart.
