Agentic Patterns provides a collection of composable building blocks for constructing multi-step AI agent pipelines. Whether you need a simple plan-and-execute loop or a full LangGraph-powered state machine with retry logic, you can assemble production-ready agentic systems from well-defined components.

Quickstart

Install the package and run your first agentic workflow in minutes.

Core Concepts

Understand agents, workflows, orchestration, memory, and tools.

Guides

Step-by-step guides for building common agentic patterns.

API Reference

Full API documentation for every class and method.

What’s included

Modular Agents

PlanningAgent, ExecutionAgent, MonitoringAgent, and LocalAgent — each with a focused role and a clean interface.
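
To illustrate the "focused role, clean interface" idea, here is a minimal plain-Python sketch. The class names come from the library, but these simplified bodies and the `run` signature are illustrative assumptions, not its actual implementation (the real agents call an LLM).

```python
# Hypothetical simplified agents illustrating one-role-per-agent design.
from dataclasses import dataclass


@dataclass
class PlanningAgent:
    """Breaks a task into an ordered list of steps."""

    def run(self, task: str) -> list[str]:
        # Placeholder logic; the real agent would prompt an LLM for a plan.
        return [f"research: {task}", f"draft: {task}", f"review: {task}"]


@dataclass
class ExecutionAgent:
    """Executes a single step and returns its result."""

    def run(self, step: str) -> str:
        return f"done: {step}"


@dataclass
class MonitoringAgent:
    """Checks a result and reports success or failure."""

    def run(self, result: str) -> bool:
        return result.startswith("done:")
```

Because each agent exposes the same narrow `run` interface, they can be swapped or recombined without touching the surrounding workflow.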

Workflow Patterns

SequentialWorkflow for ordered execution and ParallelWorkflow for concurrent multi-task processing.
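
The difference between the two shapes can be sketched in a few lines of plain Python. These functions are illustrative stand-ins, not the library's `SequentialWorkflow`/`ParallelWorkflow` classes:

```python
# Sequential vs. parallel execution of independent task handlers.
from concurrent.futures import ThreadPoolExecutor


def sequential_workflow(tasks, handler):
    """Run tasks in order; each result is ready before the next task starts."""
    return [handler(t) for t in tasks]


def parallel_workflow(tasks, handler, max_workers=4):
    """Run independent tasks concurrently; results preserve input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(handler, tasks))
```

Use the sequential form when later steps depend on earlier results, and the parallel form when tasks are independent and dominated by I/O such as LLM calls.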

LangGraph Orchestration

A LangGraph state machine that coordinates planning, execution, and monitoring with automatic retry logic.
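
The control flow the orchestrator implements can be shown as a plain-Python loop. This is an illustrative stand-in for the plan → execute → monitor cycle with retries, not the library's actual LangGraph graph definition:

```python
# Plan -> execute -> monitor loop with bounded retries per step.
def orchestrate(task, plan, execute, monitor, max_retries=2):
    results = []
    for step in plan(task):
        for _attempt in range(max_retries + 1):
            result = execute(step)
            if monitor(result):        # monitoring approves the result
                results.append(result)
                break                  # move on to the next step
        else:
            raise RuntimeError(f"step failed after {max_retries + 1} attempts: {step}")
    return results
```

In the real system each of `plan`, `execute`, and `monitor` is a graph node backed by an agent, and the retry edge is part of the state machine rather than an inner loop.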

Short-Term Memory

SQLite-backed memory with exact-match caching to avoid redundant LLM calls across sessions.
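
The mechanism is straightforward: key responses by the exact prompt text. A minimal sketch, assuming a simple two-column schema (the library's actual table layout may differ):

```python
# SQLite-backed exact-match cache: a repeated prompt returns the stored
# response instead of triggering another LLM call.
import sqlite3


class ShortTermMemory:
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (prompt TEXT PRIMARY KEY, response TEXT)"
        )

    def get(self, prompt):
        row = self.conn.execute(
            "SELECT response FROM cache WHERE prompt = ?", (prompt,)
        ).fetchone()
        return row[0] if row else None

    def put(self, prompt, response):
        self.conn.execute(
            "INSERT OR REPLACE INTO cache (prompt, response) VALUES (?, ?)",
            (prompt, response),
        )
        self.conn.commit()
```

Because the cache is a file-backed database rather than an in-process dict, hits survive process restarts and are shared across sessions.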

Context Compression

CompressContextTool reduces prompt size locally — no extra LLM call required.
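
The key property is that compression is purely local string processing. The heuristics below (blank-line and duplicate removal plus a hard size cap) are an illustrative sketch; the real `CompressContextTool` may apply different rules:

```python
# Local context compression: shrink a prompt without any LLM call.
def compress_context(text: str, max_chars: int = 2000) -> str:
    seen, kept = set(), []
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped in seen:  # drop blanks and exact duplicates
            continue
        seen.add(stripped)
        kept.append(stripped)
    compressed = "\n".join(kept)
    return compressed[:max_chars]             # hard cap on prompt size
```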

Local & Cloud LLMs

Works with Ollama for local models and any OpenAI-compatible API for cloud inference.
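
Switching backends typically comes down to a base URL, key, and model name, since Ollama serves an OpenAI-compatible endpoint at `http://localhost:11434/v1`. A sketch of environment-driven selection (the variable names here are assumptions, not the library's documented settings):

```python
# Pick a local or cloud backend from environment variables.
import os


def llm_config() -> dict:
    return {
        # Defaults to Ollama's local OpenAI-compatible endpoint.
        "base_url": os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1"),
        # Ollama ignores the key, but OpenAI-style clients require one.
        "api_key": os.environ.get("LLM_API_KEY", "ollama"),
        "model": os.environ.get("LLM_MODEL", "llama3"),
    }
```

Pointing `LLM_BASE_URL` at any OpenAI-compatible cloud provider switches the backend without code changes.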

How it works

1. Initialize an LLM
   Configure a local Ollama model or a cloud OpenAI-compatible endpoint via environment variables.
2. Instantiate agents
   Create PlanningAgent, ExecutionAgent, and MonitoringAgent with your chosen LLM.
3. Choose a workflow or orchestrator
   Use SequentialWorkflow for ordered pipelines, ParallelWorkflow for concurrent tasks, or LangGraphOrchestrator for a full state machine with retries.
4. Run and observe
   Call .run(task=...) and get structured results. Successful outcomes are automatically cached in SQLite for future reuse.
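
The steps above can be condensed into one self-contained sketch. Everything here is a hypothetical stand-in for the library's classes (including the fake `llm` callable and the `runs` table), shown only to make the plan/execute/cache flow concrete:

```python
# End-to-end sketch: plan, execute, assemble a result, and cache it by task.
import sqlite3


def run_pipeline(task: str, llm, conn) -> str:
    # Exact-match cache: an identical task returns the stored result.
    row = conn.execute("SELECT result FROM runs WHERE task = ?", (task,)).fetchone()
    if row:
        return row[0]
    steps = llm(f"plan: {task}").split(";")           # planning
    outputs = [llm(f"execute: {s}") for s in steps]   # execution
    result = " | ".join(o for o in outputs if o)      # monitoring keeps non-empty output
    conn.execute("INSERT INTO runs VALUES (?, ?)", (task, result))
    return result
```

On the second call with the same task, no `llm` calls are made at all.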
