
ContextBuilder

The ContextBuilder class is responsible for assembling the complete context for LLM calls, including:
  • System prompt with identity, memory, and skills
  • Conversation history
  • Runtime metadata (time, channel info)
  • Media attachments (images)

Constructor

ContextBuilder(workspace: Path)
  • workspace (Path, required): The workspace directory containing memory, skills, and bootstrap files
Example:
from nanobot.agent.context import ContextBuilder
from pathlib import Path

workspace = Path("/home/user/workspace")
context = ContextBuilder(workspace)

Methods

build_system_prompt

def build_system_prompt(skill_names: list[str] | None = None) -> str
Build the complete system prompt from multiple sources.
  • skill_names (list[str] | None, default: None): Optional list of specific skills to include (currently unused; always-on skills are loaded automatically)
Returns:
  • prompt (str): The complete system prompt text
System prompt includes:
  1. Identity: Core nanobot identity, runtime info, workspace paths
  2. Bootstrap files: AGENTS.md, SOUL.md, USER.md, TOOLS.md, IDENTITY.md
  3. Memory: Long-term memory from MEMORY.md
  4. Active skills: Skills marked with always=true
  5. Skills summary: List of all available skills for progressive loading
Example:
context = ContextBuilder(workspace)
prompt = context.build_system_prompt()
print(prompt[:200])  # Show first 200 chars

build_messages

def build_messages(
    history: list[dict[str, Any]],
    current_message: str,
    skill_names: list[str] | None = None,
    media: list[str] | None = None,
    channel: str | None = None,
    chat_id: str | None = None,
) -> list[dict[str, Any]]
Build the complete message list for an LLM call.
  • history (list[dict], required): Previous conversation messages from session history
  • current_message (str, required): The current user message to process
  • skill_names (list[str] | None, default: None): Optional list of skills to include in the system prompt
  • media (list[str] | None, default: None): List of media file paths (images) to include with the message
  • channel (str | None, default: None): Channel name for runtime context
  • chat_id (str | None, default: None): Chat ID for runtime context
Returns:
  • messages (list[dict]): Complete message list ready for the LLM provider
Example:
from nanobot.session.manager import SessionManager

context = ContextBuilder(workspace)
sessions = SessionManager(workspace)
session = sessions.get_or_create("cli:user123")

# Get recent history
history = session.get_history(max_messages=100)

# Build messages with image attachment
messages = context.build_messages(
    history=history,
    current_message="What's in this image?",
    media=["/path/to/screenshot.png"],
    channel="cli",
    chat_id="user123"
)

# Send to LLM
response = await provider.chat(messages=messages, ...)

add_assistant_message

def add_assistant_message(
    messages: list[dict[str, Any]],
    content: str | None,
    tool_calls: list[dict[str, Any]] | None = None,
    reasoning_content: str | None = None,
    thinking_blocks: list[dict] | None = None,
) -> list[dict[str, Any]]
Add an assistant message to the message list.
  • messages (list[dict], required): The current message list
  • content (str | None, required): The assistant's response text (can be None if the message contains only tool calls)
  • tool_calls (list[dict] | None, default: None): List of tool calls made by the assistant
  • reasoning_content (str | None, default: None): Reasoning content for models that support it (e.g., o1)
  • thinking_blocks (list[dict] | None, default: None): Extended thinking blocks for advanced reasoning models
Returns:
  • messages (list[dict]): Updated message list with the new assistant message
Example:
messages = context.build_messages(history, "List files")

# Add assistant response with tool calls
messages = context.add_assistant_message(
    messages,
    content="I'll list the files in the current directory.",
    tool_calls=[{
        "id": "call_123",
        "type": "function",
        "function": {
            "name": "list_dir",
            "arguments": '{"path": "."}'
        }
    }]
)

add_tool_result

def add_tool_result(
    messages: list[dict[str, Any]],
    tool_call_id: str,
    tool_name: str,
    result: str,
) -> list[dict[str, Any]]
Add a tool execution result to the message list.
  • messages (list[dict], required): The current message list
  • tool_call_id (str, required): The ID of the tool call this result corresponds to
  • tool_name (str, required): The name of the tool that was executed
  • result (str, required): The tool execution result (success or error message)
Returns:
  • messages (list[dict]): Updated message list with the tool result
Example:
# Execute tool and add result
tool_result = await tools.execute("list_dir", {"path": "."})
messages = context.add_tool_result(
    messages,
    tool_call_id="call_123",
    tool_name="list_dir",
    result=tool_result
)

Bootstrap Files

The context builder looks for these files in the workspace root:
  • AGENTS.md: Instructions for multi-agent coordination
  • SOUL.md: Core personality and behavior guidelines
  • USER.md: User preferences and context
  • TOOLS.md: Tool usage guidelines and examples
  • IDENTITY.md: Custom identity overrides
These files are automatically included in the system prompt if they exist.
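The gathering step above can be sketched as follows. The file names come from this page; the helper itself is illustrative, not nanobot's actual implementation:

```python
from pathlib import Path
import tempfile

# Bootstrap file names from the docs; the loader below is a sketch.
BOOTSTRAP_FILES = ["AGENTS.md", "SOUL.md", "USER.md", "TOOLS.md", "IDENTITY.md"]

def collect_bootstrap(workspace: Path) -> str:
    """Concatenate the bootstrap files that exist in the workspace root."""
    sections = []
    for name in BOOTSTRAP_FILES:
        path = workspace / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

# Demo with a temporary workspace containing only one bootstrap file
with tempfile.TemporaryDirectory() as tmp:
    ws = Path(tmp)
    (ws / "SOUL.md").write_text("Be helpful.")
    prompt_part = collect_bootstrap(ws)
print(prompt_part)
```

Missing files are simply skipped, so an empty workspace yields an empty string rather than an error.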

Runtime Context

Every user message includes runtime metadata:
[Runtime Context — metadata only, not instructions]
Current Time: 2026-03-06 14:30 (Thursday)
Timezone: PST
Channel: telegram
Chat ID: 12345
This metadata is tagged to prevent prompt injection attacks and provides the agent with current context.

Memory Integration

The context builder integrates with MemoryStore to include:
  • Long-term memory: Key facts from MEMORY.md
  • History log: Searchable conversation log in HISTORY.md
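Pulling long-term memory into the prompt might look like the following sketch; the MEMORY.md file name comes from this page, the loader is illustrative:

```python
from pathlib import Path
import tempfile

# Illustrative loader for long-term memory; returns "" when no memory exists.
def load_memory(workspace: Path) -> str:
    memory_file = workspace / "MEMORY.md"
    return memory_file.read_text() if memory_file.exists() else ""

with tempfile.TemporaryDirectory() as tmp:
    ws = Path(tmp)
    (ws / "MEMORY.md").write_text("User prefers concise answers.")
    memory = load_memory(ws)
print(memory)
```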

Skills Integration

The context builder integrates with SkillsLoader to:
  1. Load skills marked with always=true into every prompt
  2. Provide a summary of all available skills
  3. Let the agent progressively load skills using read_file tool
This approach balances context size with capability access.
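The always-on vs. progressive split can be sketched like this; the skill records and field names below are assumptions, not nanobot's real schema:

```python
# Hypothetical skill records; only "always" skills go into the prompt body,
# while every skill is listed in the summary for progressive loading.
skills = [
    {"name": "shell", "always": True, "body": "Use the shell tool for..."},
    {"name": "calendar", "always": False, "body": "Read calendar files..."},
]

always_on = [s["body"] for s in skills if s["always"]]
summary = ", ".join(s["name"] for s in skills)

system_prompt_part = "\n".join(always_on) + f"\n\nAvailable skills: {summary}"
print(system_prompt_part)
```

Only the always-on bodies consume prompt space up front; the rest cost one line each until the agent reads them in.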

Image Support

When media files are provided:
  1. Images are base64-encoded
  2. MIME type is detected
  3. Content is structured for multimodal models:
[
    {"type": "text", "text": "[Runtime Context...]"},
    {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
    {"type": "text", "text": "What's in this image?"}
]
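The encoding steps above can be sketched with the standard library; the helper is illustrative (the real builder reads the file from disk):

```python
import base64
import mimetypes

# Sketch: turn raw image bytes into an image_url content part.
def image_part(path: str, data: bytes) -> dict:
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    b64 = base64.b64encode(data).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

part = image_part("/path/to/screenshot.png", b"\x89PNG...")
print(part["image_url"]["url"][:30])
```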

Architecture Notes

  • The context builder is stateless; it doesn’t store conversation state
  • All state management is delegated to SessionManager
  • System prompt is rebuilt on every message for consistency
  • Large tool results (>500 chars) are truncated when saving to session history
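The truncation rule in the last bullet can be sketched as follows; the exact marker text is an assumption:

```python
# Sketch of the >500-char truncation applied before persisting tool
# results to session history; the "[truncated]" marker is hypothetical.
MAX_RESULT_CHARS = 500

def truncate_result(result: str) -> str:
    if len(result) <= MAX_RESULT_CHARS:
        return result
    return result[:MAX_RESULT_CHARS] + "... [truncated]"

saved = truncate_result("x" * 1200)
print(len(saved))
```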
