
AgentLoop

The AgentLoop class is the core processing engine of nanobot. It handles the complete agent lifecycle:
  1. Receives messages from the message bus
  2. Builds context with history, memory, and skills
  3. Calls the LLM provider
  4. Executes tool calls
  5. Sends responses back through the bus
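The lifecycle above can be sketched in simplified form. This is an illustrative outline only, not nanobot's actual implementation: the bus, provider, and tool interfaces here are stand-ins.

```python
import asyncio

async def agent_loop_sketch(bus, provider, tools):
    """Illustrative outline of the receive -> respond cycle."""
    while True:
        msg = await bus.get()                      # 1. receive from the bus
        if msg is None:                            # sentinel: stop requested
            break
        context = {"history": [], "message": msg}  # 2. build context (history, memory, skills)
        reply = await provider.chat(context)       # 3. call the LLM provider
        for call in reply.get("tool_calls", []):   # 4. execute any tool calls
            tools[call["name"]](**call["args"])
        await bus.put_outbound(reply["text"])      # 5. send the response back
```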

Constructor

AgentLoop(
    bus: MessageBus,
    provider: LLMProvider,
    workspace: Path,
    model: str | None = None,
    max_iterations: int = 40,
    temperature: float = 0.1,
    max_tokens: int = 4096,
    memory_window: int = 100,
    reasoning_effort: str | None = None,
    brave_api_key: str | None = None,
    web_proxy: str | None = None,
    exec_config: ExecToolConfig | None = None,
    cron_service: CronService | None = None,
    restrict_to_workspace: bool = False,
    session_manager: SessionManager | None = None,
    mcp_servers: dict | None = None,
    channels_config: ChannelsConfig | None = None,
)
bus (MessageBus, required)
    The message bus for receiving inbound messages and publishing outbound responses.
provider (LLMProvider, required)
    The LLM provider interface for making chat completion calls.
workspace (Path, required)
    The workspace directory path for file operations, memory, and sessions.
model (str | None, default: None)
    Model name to use. If None, uses the provider’s default model.
max_iterations (int, default: 40)
    Maximum number of tool call iterations per request.
temperature (float, default: 0.1)
    Temperature parameter for LLM sampling (0.0-1.0).
max_tokens (int, default: 4096)
    Maximum number of tokens in the LLM response.
memory_window (int, default: 100)
    Number of messages to keep in active memory before consolidation.
reasoning_effort (str | None, default: None)
    Reasoning effort level for supported providers (e.g., “medium”, “high”).
brave_api_key (str | None, default: None)
    API key for Brave Search integration.
web_proxy (str | None, default: None)
    Proxy URL for web requests.
exec_config (ExecToolConfig | None, default: None)
    Configuration for the shell execution tool.
cron_service (CronService | None, default: None)
    Cron service for scheduling tasks.
restrict_to_workspace (bool, default: False)
    If True, restrict file operations to the workspace directory only.
session_manager (SessionManager | None, default: None)
    Session manager for conversation persistence. A default is created if None.
mcp_servers (dict | None, default: None)
    Configuration dict for MCP (Model Context Protocol) servers.
channels_config (ChannelsConfig | None, default: None)
    Channel-specific configuration.

Methods

run

async def run() -> None
Run the agent loop, continuously consuming messages from the bus and dispatching them as tasks. Behavior:
  • Connects to MCP servers on startup
  • Stays responsive to /stop commands
  • Runs until stop() is called
Example:
from nanobot.agent.loop import AgentLoop
from nanobot.bus.queue import MessageBus
from nanobot.providers.anthropic import AnthropicProvider
from pathlib import Path

bus = MessageBus()
provider = AnthropicProvider(api_key="sk-...")
workspace = Path("/workspace")

loop = AgentLoop(bus, provider, workspace)
await loop.run()

process_direct

async def process_direct(
    content: str,
    session_key: str = "cli:direct",
    channel: str = "cli",
    chat_id: str = "direct",
    on_progress: Callable[[str], Awaitable[None]] | None = None,
) -> str
Process a message directly without using the message bus (useful for CLI or programmatic usage).
content (str, required)
    The message content to process.
session_key (str, default: "cli:direct")
    Session identifier for conversation history.
channel (str, default: "cli")
    Channel name for routing.
chat_id (str, default: "direct")
    Chat identifier.
on_progress (Callable[[str], Awaitable[None]] | None, default: None)
    Optional callback for progress updates during processing.

Returns (str): The agent’s response text.
Example:
loop = AgentLoop(bus, provider, workspace)

# Simple usage
response = await loop.process_direct("What files are in the current directory?")
print(response)

# With progress callback
async def show_progress(message: str):
    print(f"Progress: {message}")

response = await loop.process_direct(
    "Analyze all Python files and create a summary",
    on_progress=show_progress
)

stop

def stop() -> None
Stop the agent loop gracefully. Causes run() to exit after the current message finishes processing. Example:
loop = AgentLoop(bus, provider, workspace)

# Start in background
import asyncio
task = asyncio.create_task(loop.run())

# Later, stop the loop
loop.stop()
await task

close_mcp

async def close_mcp() -> None
Close all MCP (Model Context Protocol) server connections. Example:
loop = AgentLoop(bus, provider, workspace, mcp_servers={...})

# run() blocks until stop() is called, so start it as a background task
import asyncio
task = asyncio.create_task(loop.run())

# Later: stop the loop, then close MCP connections
loop.stop()
await task
await loop.close_mcp()

Built-in Tools

The AgentLoop automatically registers these default tools:
  • ReadFileTool: Read file contents
  • WriteFileTool: Write files
  • EditFileTool: Edit existing files
  • ListDirTool: List directory contents
  • ExecTool: Execute shell commands
  • WebSearchTool: Search the web (requires Brave API key)
  • WebFetchTool: Fetch web pages
  • MessageTool: Send messages to specific channels
  • SpawnTool: Spawn subagent tasks
  • CronTool: Schedule cron jobs (if cron_service provided)

Slash Commands

The agent recognizes these built-in commands:
  • /new - Start a new conversation (archives current session to memory)
  • /stop - Stop the current task and cancel all running operations
  • /help - Show available commands

Memory Consolidation

When the conversation exceeds the memory_window, the agent automatically:
  1. Consolidates old messages into long-term memory (MEMORY.md)
  2. Appends a summary to the history log (HISTORY.md)
  3. Keeps recent messages in active context
This happens asynchronously without blocking message processing.
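The window check itself reduces to a simple split. This sketch illustrates the idea only; the function name, the `keep_recent` count, and the return shape are assumptions, not nanobot's API:

```python
def consolidate(messages: list[str], memory_window: int,
                keep_recent: int = 20) -> tuple[list[str], list[str]]:
    """Illustrative split: (messages to consolidate, messages kept active)."""
    if len(messages) <= memory_window:
        return [], messages              # under the window: nothing to consolidate
    # Older messages would be summarized into MEMORY.md and logged to HISTORY.md;
    # only the most recent ones stay in active context.
    return messages[:-keep_recent], messages[-keep_recent:]
```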

Architecture

The agent loop uses a single processing lock to ensure messages are handled sequentially, preventing race conditions in:
  • Session state updates
  • Memory consolidation
  • Tool execution with side effects
This design ensures consistency while allowing background tasks like memory consolidation to run concurrently.
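The single-lock pattern can be demonstrated with plain asyncio (a minimal sketch of the pattern, not nanobot's implementation): even when two messages are dispatched concurrently, the lock guarantees each one finishes before the next begins.

```python
import asyncio

class SequentialProcessor:
    """Sketch of the single processing lock: one message mutates state at a time."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self.log: list[str] = []

    async def handle(self, msg: str) -> None:
        async with self._lock:           # serializes session/memory/tool side effects
            self.log.append(f"start:{msg}")
            await asyncio.sleep(0)       # stand-in for the LLM call and tool execution
            self.log.append(f"end:{msg}")

async def main() -> list[str]:
    p = SequentialProcessor()
    await asyncio.gather(p.handle("a"), p.handle("b"))  # dispatched concurrently
    return p.log
```

Running `main()` yields `start:a, end:a, start:b, end:b`: the second message waits for the first despite concurrent dispatch.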
