When to Activate
- Spawning subagents that need codebase context they cannot predict upfront
- Building multi-agent workflows where context is progressively refined
- Encountering “context too large” or “missing context” failures in agent tasks
- Designing RAG-like retrieval pipelines for code exploration
- Optimizing token usage in agent orchestration
The Problem
Subagents are spawned with limited context. They don't know:
- Which files contain relevant code
- What patterns exist in the codebase
- What terminology the project uses

The naive strategies all fail:
- Send everything: exceeds context limits
- Send nothing: the agent lacks critical information
- Guess what's needed: often wrong
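To see why "send everything" breaks down, consider rough numbers. The figures below are illustrative assumptions (a mid-sized repo, a 200k-token context window), not measurements:

```typescript
// Illustrative budget check: why "send everything" fails.
// All numbers are assumptions for the sake of the arithmetic.
const fileCount = 2000;          // source files in the repo
const avgTokensPerFile = 800;    // rough average after tokenization
const contextWindow = 200_000;   // model context limit in tokens

const totalTokens = fileCount * avgTokensPerFile; // 1,600,000 tokens
const fits = totalTokens <= contextWindow;        // false: 8x over budget
```

Even generous assumptions put the full codebase an order of magnitude past the window, which is what forces selective retrieval in the first place.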
The Solution: Iterative Retrieval
A four-phase loop that progressively refines context.

Phase 1: Broad Initial Retrieval

```typescript
// Start with high-level intent
const initialQuery = {
  patterns: ['src/**/*.ts', 'lib/**/*.ts'],
  keywords: ['authentication', 'user', 'session'],
  excludes: ['*.test.ts', '*.spec.ts']
};

// Dispatch to retrieval agent
const candidates = await retrieveFiles(initialQuery);
```
Phase 2: Relevance Evaluation

```typescript
// scoreRelevance, explainRelevance, and identifyGaps are assumed helpers
// (e.g. LLM-backed scoring calls); relevance is a 0..1 score.
function evaluateRelevance(files, task) {
  return files.map(file => ({
    path: file.path,
    relevance: scoreRelevance(file.content, task),
    reason: explainRelevance(file.content, task),
    missingContext: identifyGaps(file.content, task)
  }));
}
```
Phase 3: Query Refinement

```typescript
function refineQuery(evaluation, previousQuery) {
  return {
    // Add new patterns discovered in high-relevance files
    patterns: [...previousQuery.patterns, ...extractPatterns(evaluation)],
    // Add terminology found in the codebase
    keywords: [...previousQuery.keywords, ...extractKeywords(evaluation)],
    // Exclude confirmed-irrelevant paths
    excludes: [
      ...previousQuery.excludes,
      ...evaluation.filter(e => e.relevance < 0.2).map(e => e.path)
    ],
    // Target specific gaps (deduplicated)
    focusAreas: [...new Set(evaluation.flatMap(e => e.missingContext))]
  };
}
```
Phase 4: The Loop

```typescript
async function iterativeRetrieve(task, maxCycles = 3) {
  let query = createInitialQuery(task);
  let bestContext = [];

  for (let cycle = 0; cycle < maxCycles; cycle++) {
    const candidates = await retrieveFiles(query);
    const evaluation = evaluateRelevance(candidates, task);

    // Check if we have sufficient context
    const highRelevance = evaluation.filter(e => e.relevance >= 0.7);
    if (highRelevance.length >= 3 && !hasCriticalGaps(evaluation)) {
      return highRelevance;
    }

    // Refine and continue
    query = refineQuery(evaluation, query);
    bestContext = mergeContext(bestContext, highRelevance);
  }

  return bestContext;
}
```
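To make the refinement step concrete, here is a self-contained sketch of one cycle with a stubbed evaluation. The file paths, scores, and gaps are fabricated for illustration, and the extraction helpers are inlined:

```typescript
// Minimal, self-contained sketch of one refinement cycle.
// The evaluation data below is invented for illustration.
const previousQuery = {
  patterns: ['src/**/*.ts'],
  keywords: ['authentication'],
  excludes: ['*.test.ts']
};

const evaluation = [
  { path: 'src/auth/session.ts', relevance: 0.9, missingContext: ['token refresh flow'] },
  { path: 'src/utils/logger.ts', relevance: 0.1, missingContext: [] }
];

// Same shape as refineQuery, with pattern/keyword extraction omitted:
// the low-relevance logger moves into excludes, and the gap found in
// the high-relevance file becomes a focus area for the next cycle.
const refined = {
  patterns: previousQuery.patterns,
  keywords: previousQuery.keywords,
  excludes: [
    ...previousQuery.excludes,
    ...evaluation.filter(e => e.relevance < 0.2).map(e => e.path)
  ],
  focusAreas: [...new Set(evaluation.flatMap(e => e.missingContext))]
};
// refined.excludes   → ['*.test.ts', 'src/utils/logger.ts']
// refined.focusAreas → ['token refresh flow']
```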
Practical Examples
Example 1: Bug Fix Context
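One possible shape for this example, with a hypothetical bug (all paths and keywords below are invented): sessions expire too early. The first cycle reveals the codebase says "ttl" rather than "expiry" and keeps session code under src/auth/, so the second query narrows and adopts that terminology:

```typescript
// Hypothetical bug-fix scenario: sessions expire too early.
const cycle1Query = {
  patterns: ['src/**/*.ts'],
  keywords: ['session', 'expiry', 'timeout'],
  excludes: ['*.test.ts']
};

// Cycle 1 found the codebase's own terms ("ttl", "maxAge") and its
// layout, so cycle 2 narrows the patterns and extends the keywords.
const cycle2Query = {
  patterns: ['src/auth/**/*.ts'],
  keywords: [...cycle1Query.keywords, 'ttl', 'maxAge'],
  excludes: [...cycle1Query.excludes, 'src/ui/**']
};
```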
Example 2: Feature Implementation
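A sketch of how this could play out for a hypothetical feature (all paths and gaps below are invented): adding password reset. The first cycle surfaces the existing login and registration code, and the gaps those files report become the next cycle's focus areas:

```typescript
// Hypothetical feature scenario: adding password reset.
const evaluationAfterCycle1 = [
  { path: 'src/auth/login.ts', relevance: 0.8, missingContext: ['email sending'] },
  { path: 'src/auth/register.ts', relevance: 0.7, missingContext: ['token storage'] }
];

// Gaps from high-relevance files drive cycle 2, which retrieves the
// mailer and token-store modules the first pass missed.
const focusAreas = [...new Set(evaluationAfterCycle1.flatMap(e => e.missingContext))];
// → ['email sending', 'token storage']
```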
Best Practices
- Start broad, narrow progressively: don't over-specify initial queries
- Learn codebase terminology: the first cycle often reveals naming conventions
- Track what's missing: explicit gap identification drives refinement
- Stop at "good enough": 3 high-relevance files beat 10 mediocre ones
- Exclude confidently: files scored low-relevance once rarely become relevant later
Iterative retrieval solves the cold-start problem in multi-agent workflows by progressively refining context through evaluation feedback.