All tools in Agentic Patterns extend LangChain’s BaseTool, making them compatible with create_react_agent and any LangChain tool runner out of the box.

CompressContextTool

CompressContextTool compresses a text string locally using regex — no LLM call required. It removes excess whitespace, strips common stop words, and truncates to a configurable max_length.
from tools.compress_context_tool import CompressContextTool

compressor = CompressContextTool(max_length=4000)
compressed = compressor._run(long_context_string)

Implementation

from langchain_core.tools import BaseTool
import re

class CompressContextTool(BaseTool):
    name: str = "compress_context"
    description: str = (
        "Compresses a long string of text locally to save context window space. "
        "Useful when you have downloaded a large document or webpage and need to "
        "extract the core information densely."
    )
    max_length: int = 4000

    def _run(self, text: str, max_length: int | None = None) -> str:
        # 1. Collapse whitespace
        compressed = re.sub(r'\s+', ' ', text).strip()

        # 2. Strip common stop words
        stop_words = {" a ", " an ", " the ", " is ", " are ", " was ",
                      " were ", " and ", " or ", " but "}
        for word in stop_words:
            compressed = re.sub(word, " ", compressed, flags=re.IGNORECASE)

        compressed = re.sub(r'\s+', ' ', compressed).strip()

        # 3. Truncate
        limit = max_length if max_length is not None else self.max_length
        if limit and len(compressed) > limit:
            compressed = compressed[:limit] + "... [TRUNCATED]"

        return compressed
CompressContextTool is purely heuristic and deterministic — it will not understand the meaning of text. For smarter compression that preserves semantic context, use LocalAgent with a local LLM instead.
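The heuristic can be reproduced as a standalone function to see exactly what it does to a string. This is a sketch mirroring the three steps above, independent of LangChain:

```python
import re

# Stop words padded with spaces so only whole words are matched,
# mirroring CompressContextTool's heuristic.
STOP_WORDS = [" a ", " an ", " the ", " is ", " are ", " was ",
              " were ", " and ", " or ", " but "]

def compress(text: str, max_length: int = 4000) -> str:
    # 1. Collapse runs of whitespace into single spaces
    out = re.sub(r"\s+", " ", text).strip()
    # 2. Strip common stop words (case-insensitive)
    for word in STOP_WORDS:
        out = re.sub(word, " ", out, flags=re.IGNORECASE)
    out = re.sub(r"\s+", " ", out).strip()
    # 3. Truncate with a marker
    if max_length and len(out) > max_length:
        out = out[:max_length] + "... [TRUNCATED]"
    return out

print(compress("The   quick brown fox is   fast"))
# → "The quick brown fox fast"
```

Note that words at the very start of the string (like the leading "The" here) survive, because every pattern requires a space on both sides.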

CurlSearchTool

CurlSearchTool queries the Wikipedia search API using the system curl binary via subprocess. It returns the top 3 result snippets with HTML tags stripped.
from tools.curl_search_tool import CurlSearchTool

search = CurlSearchTool()
result = search._run("Capital of Andorra")
# "Result 1 (Andorra la Vella): Andorra la Vella is the capital city..."
# "Result 2 (Andorra): Andorra, officially ..."
# "Result 3 (...): ..."

Implementation

import subprocess, json, re
from urllib.parse import quote
from langchain_core.tools import BaseTool

class CurlSearchTool(BaseTool):
    name: str = "curl_search"
    description: str = (
        "Uses the `curl` command-line utility to query a search engine (like Wikipedia) "
        "and retrieve concise answers or context. Input should be a specific search query."
    )

    def _run(self, query: str) -> str:
        # URL-encode the query so spaces and special characters don't break the URL
        url = (
            f"https://en.wikipedia.org/w/api.php"
            f"?action=query&list=search&srsearch={quote(query)}&utf8=&format=json"
        )
        user_agent = (
            "Mozilla/5.0 (Linux; U; Android 4.0.3; en-us; HTC Sensation Build/IML74K) "
            "AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30"
        )
        cmd = ["curl", "-s", "-L", "-A", user_agent, url]

        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        data = json.loads(result.stdout)
        search_results = data.get("query", {}).get("search", [])

        answers = []
        for i, item in enumerate(search_results[:3]):
            clean_snippet = re.sub(r'<[^>]+>', '', item.get("snippet", ""))
            answers.append(f"Result {i+1} ({item.get('title')}): {clean_snippet}")

        return "\n".join(answers)
CurlSearchTool requires the curl binary to be installed on the host system and network access to en.wikipedia.org. If curl is not installed, subprocess.run raises FileNotFoundError; if curl exits with a non-zero status (for example, on a network failure), the check=True flag makes it raise subprocess.CalledProcessError.
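The response-parsing step can be exercised without curl or network access. The payload below is illustrative, shaped like the MediaWiki search API response:

```python
import json, re

def format_results(raw_json: str, limit: int = 3) -> str:
    """Strip HTML tags from snippets and number the top results."""
    data = json.loads(raw_json)
    search_results = data.get("query", {}).get("search", [])
    answers = []
    for i, item in enumerate(search_results[:limit]):
        # Remove markup like <span class="searchmatch">...</span>
        clean = re.sub(r"<[^>]+>", "", item.get("snippet", ""))
        answers.append(f"Result {i+1} ({item.get('title')}): {clean}")
    return "\n".join(answers)

# Illustrative payload shaped like the MediaWiki search API response
sample = json.dumps({
    "query": {"search": [
        {"title": "Andorra la Vella",
         "snippet": "<span class=\"searchmatch\">Andorra</span> la Vella is the capital"},
    ]}
})
print(format_results(sample))
# Result 1 (Andorra la Vella): Andorra la Vella is the capital
```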

LocalAgent as a compressor

When Ollama is available, LocalAgent can replace CompressContextTool for semantically aware compression:
from langchain_ollama import ChatOllama
from agents.local_agent import LocalAgent
from tools.compress_context_tool import CompressContextTool

LOCAL_MODEL = "llama3"  # or read from os.getenv("LOCAL_MODEL")

if LOCAL_MODEL:
    try:
        active_compressor = LocalAgent(llm=ChatOllama(model=LOCAL_MODEL))
    except Exception:
        active_compressor = CompressContextTool(max_length=10000)
else:
    active_compressor = CompressContextTool(max_length=10000)
Both LocalAgent (via invoke()) and CompressContextTool (via _run()) are duck-type compatible with the compression calls in SequentialWorkflow and LangGraphOrchestrator:
# From sequential_workflow.py and langgraph_orchestrator.py
if hasattr(compressor, 'invoke'):
    current_context = compressor.invoke(current_context)   # LocalAgent path
elif hasattr(compressor, '_run'):
    current_context = compressor._run(current_context)     # CompressContextTool path
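That dispatch can be wrapped in a small helper. The stubs below stand in for LocalAgent and CompressContextTool; only the hasattr dispatch itself reflects the workflow code:

```python
def compress_with(compressor, context: str) -> str:
    """Route to whichever compression interface the object exposes."""
    if hasattr(compressor, "invoke"):
        return compressor.invoke(context)   # LocalAgent-style
    if hasattr(compressor, "_run"):
        return compressor._run(context)     # BaseTool-style
    return context                          # no compressor: pass through

class StubAgent:        # stands in for LocalAgent
    def invoke(self, text):
        return f"summary:{text[:5]}"

class StubTool:         # stands in for CompressContextTool
    def _run(self, text):
        return text.strip()

print(compress_with(StubAgent(), "hello world"))  # summary:hello
print(compress_with(StubTool(), "  hello  "))     # hello
```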

Plugging tools into agents

Pass a tools list to ExecutionAgent at construction time. When tools are present, the agent uses LangGraph’s create_react_agent to run a ReAct loop:
from agents.execution_agent import ExecutionAgent
from tools.curl_search_tool import CurlSearchTool

executor = ExecutionAgent(
    llm=llm,
    tools=[CurlSearchTool()]
)
The tools list is passed directly to create_react_agent(self.llm, tools=self.tools, ...). Any BaseTool subclass works here.
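create_react_agent handles the looping and tool wiring for you; what it relies on from each tool is its name (so the model can request it) and its run method. The lookup-and-call step can be sketched with a hypothetical stub tool:

```python
# StubSearchTool is hypothetical; create_react_agent performs this
# name-based lookup and invocation internally.
class StubSearchTool:
    name = "curl_search"
    def _run(self, query: str) -> str:
        return f"results for {query!r}"

def dispatch(tools, tool_name: str, tool_input: str) -> str:
    """Look up a tool by its declared name and run it on the input."""
    registry = {t.name: t for t in tools}
    if tool_name not in registry:
        return f"error: unknown tool {tool_name!r}"
    return registry[tool_name]._run(tool_input)

print(dispatch([StubSearchTool()], "curl_search", "Capital of Andorra"))
# results for 'Capital of Andorra'
```

This is why the tool's name and description fields matter: they are the only interface the model sees when deciding which tool to call.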

Plugging tools into workflows

Pass a tools list when constructing the workflow. tools[0] is used as the context compressor before each executor call:
from workflows.sequential_workflow import SequentialWorkflow
from tools.compress_context_tool import CompressContextTool
from tools.curl_search_tool import CurlSearchTool

workflow = SequentialWorkflow(
    agents={"planner": planner, "executor": executor, "monitor": monitor},
    tools=[CompressContextTool(max_length=10000), CurlSearchTool()]
)
tools[0] is the compressor; additional tools in the list are available but not automatically invoked by the workflow itself.
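The compress-before-execute pattern the workflow applies can be sketched with stubs. All names below are illustrative, not the real SequentialWorkflow API:

```python
class StubCompressor:            # stands in for tools[0]
    def _run(self, text: str) -> str:
        return text[:20]         # crude truncation for illustration

class StubExecutor:              # stands in for the executor agent
    def invoke(self, context: str) -> str:
        return context + " | step done"

def run_steps(steps, executor, compressor):
    """Compress the accumulated context before each executor call."""
    context = ""
    for step in steps:
        context = compressor._run(context + " " + step)
        context = executor.invoke(context)
    return context

out = run_steps(["plan", "search"], StubExecutor(), StubCompressor())
print(out)
```

Because compression runs before every executor call, the context stays bounded no matter how many steps the workflow takes.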

Tool summary

CompressContextTool

name="compress_context". Local regex-based compression. No LLM, no network. Truncates at max_length. Use when Ollama is unavailable.

CurlSearchTool

name="curl_search". Wikipedia API via subprocess curl. Returns top 3 snippets with HTML stripped. Requires curl and network access.
