Overview
The ContextManager trait handles prompt assembly by combining system prompts, relevant memories, and user messages. It manages context window budgets and compresses content when needed to fit within token limits.
Core responsibilities:
- Assemble full context from task + available memories
- Manage token budgets using chars-per-token estimation
- Compress context to fit within limits
- Format memory entries for LLM consumption
Source file: crates/oneclaw-core/src/orchestrator/context.rs
ContextManager Trait
Core trait for assembling and compressing prompt context.
Methods
assemble()
Location: context.rs:11
Assembles full context from task and available context, respecting token budget.
Parameters:
- task: &str - The user's task or query
- budget_tokens: usize - Maximum number of tokens allowed
Returns:
- Result<String> - The assembled context, truncated if over budget
compress()
Location: context.rs:13
Compresses context to fit within target token count.
Parameters:
- context: &str - The context to compress
- target_tokens: usize - Target token count
Returns:
- Result<String> - Compressed context, with a marker appended if truncated
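Based on the method descriptions above, the trait can be sketched as follows. This is a minimal stand-in, not the crate's exact definition: the real crate presumably uses a crate-wide `Result` alias (e.g. from anyhow), so a plain alias is used here to keep the example self-contained. A pass-through implementation (mirroring NoopContextManager, documented below) makes the sketch concrete.

```rust
// Sketch of the ContextManager trait as documented above.
// ASSUMPTION: the crate's Result alias is replaced by a simple local one.
type Result<T> = std::result::Result<T, String>;

trait ContextManager {
    /// Assemble full context from the task, respecting the token budget.
    fn assemble(&self, task: &str, budget_tokens: usize) -> Result<String>;

    /// Compress context to fit within the target token count.
    fn compress(&self, context: &str, target_tokens: usize) -> Result<String>;
}

// Minimal pass-through implementation, mirroring NoopContextManager.
struct NoopContextManager;

impl ContextManager for NoopContextManager {
    fn assemble(&self, task: &str, _budget_tokens: usize) -> Result<String> {
        Ok(task.to_string()) // input returned unchanged
    }

    fn compress(&self, context: &str, _target_tokens: usize) -> Result<String> {
        Ok(context.to_string()) // input returned unchanged
    }
}

fn main() {
    let manager = NoopContextManager;
    println!("{}", manager.assemble("hello", 100).unwrap());
}
```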
DefaultContextManager
Default implementation with system prompt and memory integration.
Location: context.rs:28-96
Constructor
new()
Location: context.rs:36-41
Creates a new context manager with the given system prompt.
Parameters:
- system_prompt - The system prompt for LLM calls
Methods
system_prompt()
Location: context.rs:44-46
Returns the system prompt.
build_context()
Location: context.rs:50-73
Builds full context string from memories and user message.
Parameters:
- memories: &[String] - Relevant memory entries from search
- user_message: &str - The user's query or message
- budget_tokens: usize - Maximum token budget
Returns:
- String - Formatted context with the memory section plus the user message
- Estimates characters from token budget (tokens × 4)
- Iteratively adds memories until budget exhausted
- Adds truncation marker if memories exceed budget
- Always includes user message
Context Window Management
Location: context.rs:77-84
The assemble() implementation manages context windows by:
- Calculating the character budget: budget_tokens × chars_per_token
- Truncating if over budget: cuts at the character limit
- Passing through if under budget: returns the task unchanged
Compression Strategy
Location: context.rs:86-96
The compress() implementation uses simple truncation with marker:
- Calculate target characters: target_tokens × chars_per_token
- Check whether compression is needed: if the content is ≤ target, return it unchanged
- Truncate with a marker: cut at (target − 20) characters and append the Vietnamese compression marker
NoopContextManager
Location: context.rs:17-25
No-operation context manager that passes through input unchanged.
Usage Examples
Basic Context Assembly
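A minimal sketch of basic assembly. DefaultContextManager here is a local stand-in inferred from the docs above (constructor taking a system prompt; assemble() truncating at budget_tokens × 4 characters); the crate's actual type may differ.

```rust
// ASSUMPTION: local stand-in for the crate's DefaultContextManager.
struct DefaultContextManager {
    system_prompt: String,
    chars_per_token: usize,
}

impl DefaultContextManager {
    fn new(system_prompt: impl Into<String>) -> Self {
        Self { system_prompt: system_prompt.into(), chars_per_token: 4 }
    }

    fn assemble(&self, task: &str, budget_tokens: usize) -> Result<String, String> {
        let char_budget = budget_tokens * self.chars_per_token;
        if task.chars().count() <= char_budget {
            Ok(task.to_string()) // under budget: pass through unchanged
        } else {
            // Truncate on a char boundary to stay valid UTF-8
            // (important for Vietnamese text).
            Ok(task.chars().take(char_budget).collect())
        }
    }
}

fn main() {
    let manager = DefaultContextManager::new("You are a helpful assistant.");
    println!("system prompt: {}", manager.system_prompt);
    let context = manager
        .assemble("Summarize the latest deployment logs.", 1000)
        .unwrap();
    println!("{context}");
}
```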
Building Context with Memories
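A sketch of the build_context() behavior described above, as a standalone function. The section header, truncation marker, and user-message label are the literal strings documented under "Vietnamese Language Support"; the exact signature in the crate is an assumption.

```rust
// ASSUMPTION: simplified free-function version of
// DefaultContextManager::build_context.
fn build_context(memories: &[String], user_message: &str, budget_tokens: usize) -> String {
    let char_budget = budget_tokens * 4; // 4 chars per token (crate default)
    let mut out = String::new();
    if !memories.is_empty() {
        out.push_str("Dữ liệu liên quan từ bộ nhớ:\n");
        for memory in memories {
            // Stop once adding this entry would exceed the character budget.
            if out.chars().count() + memory.chars().count() > char_budget {
                out.push_str("(… còn nữa nhưng đã cắt bớt)\n");
                break;
            }
            out.push_str(memory);
            out.push('\n');
        }
    }
    // The user message is always included, even when over budget.
    out.push_str("Câu hỏi/yêu cầu: ");
    out.push_str(user_message);
    out
}

fn main() {
    let memories = vec![
        "Deploys run nightly at 02:00.".to_string(),
        "Rollbacks require an approval.".to_string(),
    ];
    let context = build_context(&memories, "When was the last deploy?", 1000);
    println!("{context}");
}
```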
Compressing Long Context
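A sketch of the compression strategy described above: simple truncation with a marker. The 20-character headroom and the "(đã rút gọn)" marker come from the docs; the standalone signature is an assumption.

```rust
// ASSUMPTION: simplified free-function version of compress().
fn compress(context: &str, target_tokens: usize) -> String {
    let target_chars = target_tokens * 4; // 4 chars per token (crate default)
    if context.chars().count() <= target_chars {
        return context.to_string(); // under target: unchanged
    }
    // Cut on a char boundary (keeps Vietnamese text valid UTF-8),
    // leaving room for the compression marker.
    let keep = target_chars.saturating_sub(20);
    let mut out: String = context.chars().take(keep).collect();
    out.push_str("(đã rút gọn)");
    out
}

fn main() {
    let long_context = "lorem ipsum ".repeat(100);
    let compressed = compress(&long_context, 50);
    println!("{compressed}");
}
```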
Integration with Provider
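A hypothetical integration sketch. Provider here is a toy stand-in for the crate's LLM provider abstraction (see "See Also" below); its complete() method and the EchoProvider type are illustrative, not from the crate.

```rust
// ASSUMPTION: stand-in trait for the crate's Provider abstraction.
trait Provider {
    fn complete(&self, system_prompt: &str, context: &str) -> String;
}

// Toy provider that echoes its inputs, for demonstration only.
struct EchoProvider;

impl Provider for EchoProvider {
    fn complete(&self, system_prompt: &str, context: &str) -> String {
        format!("[{system_prompt}] {context}")
    }
}

fn main() {
    let system_prompt = "You are a helpful assistant.";
    // In the real crate, the context would come from the context
    // manager's assemble(); here we inline an already-built string.
    let context = "Câu hỏi/yêu cầu: Summarize the deployment logs.";
    let reply = EchoProvider.complete(system_prompt, context);
    println!("{reply}");
}
```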
Budget-Aware Memory Integration
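A hypothetical sketch of deriving a memory budget before building context: reserve room for the reply and the system prompt, then give the remainder to memories plus the user message. The window size, reserve, and estimate_tokens helper are all illustrative assumptions, using the crate's 4-chars-per-token estimate.

```rust
// ASSUMPTION: illustrative helper using the crate's chars-per-token
// estimate (4 chars per token).
fn estimate_tokens(text: &str) -> usize {
    text.chars().count() / 4
}

fn main() {
    let context_window = 8_000; // model's total token window (assumed)
    let reserved_for_reply = 1_000; // headroom for the completion (assumed)
    let system_prompt = "You are a helpful assistant.";

    // Tokens left over for memories + user message.
    let budget_for_context =
        context_window - reserved_for_reply - estimate_tokens(system_prompt);
    println!("tokens available for memories + message: {budget_for_context}");
}
```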
Token Budget Estimation
The default chars_per_token ratio is 4 characters per token, which is a rough average for mixed English/Vietnamese text.
Examples:
- 1000 tokens ≈ 4000 characters
- 2000 tokens ≈ 8000 characters
- 500 tokens ≈ 2000 characters
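As a quick check, the figures above are just the budget arithmetic:

```rust
// Token budget → character budget, using the crate's default ratio.
fn tokens_to_chars(tokens: usize) -> usize {
    tokens * 4 // 4 chars per token
}

fn main() {
    assert_eq!(tokens_to_chars(1000), 4000);
    assert_eq!(tokens_to_chars(2000), 8000);
    assert_eq!(tokens_to_chars(500), 2000);
    println!("all estimates check out");
}
```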
Vietnamese Language Support
The context manager uses Vietnamese-language formatting strings:
- Memory section header: "Dữ liệu liên quan từ bộ nhớ:" ("Relevant data from memory:")
- Truncation marker: "(… còn nữa nhưng đã cắt bớt)" ("(… more remains, but it was trimmed)")
- User message label: "Câu hỏi/yêu cầu:" ("Question/request:")
- Compression marker: "(đã rút gọn)" ("(condensed)")
See Also
- ModelRouter - Routes requests to appropriate models
- ChainExecutor - Multi-step LLM reasoning
- Memory - Memory storage and retrieval
- Provider - LLM provider abstraction