Overview
The Reflect engine implements a self-critique loop: the agent generates an answer, evaluates its quality, and revises it if necessary. This iterative refinement leads to higher-quality outputs. The approach is particularly effective for:
- Tasks requiring high-quality, polished outputs
- Complex explanations or analyses
- Scenarios where self-correction improves results
- Content that benefits from iterative refinement
How It Works
- Generate: Agent produces an initial answer
- Reflect: Agent critiques its own answer, identifying strengths and weaknesses
- Evaluate: Agent scores the answer quality (0-10 scale)
- Revise: If below threshold, agent generates an improved version
- Repeat: Continue reflection-revision cycle until quality threshold is met or max reflections reached
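The five steps above can be sketched as a loop. This is a minimal illustration, not the engine's actual implementation: the `Generator`, `Critic`, and `Reviser` callbacks stand in for LLM calls, and only the two option names and their defaults come from this document.

```typescript
interface ReflectResult {
  answer: string;
  score: number;       // final 0-10 quality score
  reflections: number; // revision cycles actually used
}

// Hypothetical stand-ins for LLM calls; a real engine would prompt a model here.
type Generator = (task: string) => string;
type Critic = (answer: string) => { critique: string; score: number };
type Reviser = (answer: string, critique: string) => string;

function reflectLoop(
  task: string,
  generate: Generator,
  evaluate: Critic,
  revise: Reviser,
  maxReflections = 3,      // default from the configuration options
  acceptanceThreshold = 7, // default from the configuration options
): ReflectResult {
  // Generate: produce an initial answer, then Reflect/Evaluate it.
  let answer = generate(task);
  let { critique, score } = evaluate(answer);
  let reflections = 0;

  // Revise/Repeat: keep improving until the threshold is met
  // or the reflection budget is exhausted.
  while (score < acceptanceThreshold && reflections < maxReflections) {
    answer = revise(answer, critique);
    ({ critique, score } = evaluate(answer));
    reflections++;
  }
  return { answer, score, reflections };
}
```

Note that the loop always pays for one extra evaluation after the final revision; that is what lets it stop early the moment the score clears the threshold.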
Complete Example
This example shows a Reflect agent explaining a technical concept:
Configuration Options
- maxReflections: Maximum number of reflection cycles (default: 3)
- acceptanceThreshold: Quality score (0-10) required to accept the answer (default: 7)
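The two options might be supplied like this; the `ReflectConfig` shape and the `withDefaults` helper are illustrative assumptions, while the field names and default values come from the list above.

```typescript
// Illustrative config shape; only the field names and defaults are from the docs.
interface ReflectConfig {
  maxReflections?: number;      // default: 3
  acceptanceThreshold?: number; // default: 7, on the 0-10 scale
}

// Fill in documented defaults for any option the caller omits.
function withDefaults(cfg: ReflectConfig = {}) {
  return {
    maxReflections: cfg.maxReflections ?? 3,
    acceptanceThreshold: cfg.acceptanceThreshold ?? 7,
  };
}
```

Raising `acceptanceThreshold` trades latency for polish: more drafts will fail the bar, so more reflection cycles run before an answer is accepted.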
Monitoring Reflections
Observe the self-critique process:
Example Output
For the CAP theorem question:
Quality Assessment
The reflection step typically evaluates:
- Completeness of the answer
- Accuracy of information
- Clarity of explanation
- Presence of concrete examples
- Overall coherence and structure
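One way the rubric above could collapse into the single 0-10 score is a plain average of per-criterion marks. This is a sketch under stated assumptions: the criterion names mirror the checklist, but the equal weighting and the `qualityScore` helper are not part of the documented engine.

```typescript
// Criteria mirror the checklist above; equal weighting is an assumption.
const CRITERIA = [
  "completeness",
  "accuracy",
  "clarity",
  "examples",
  "coherence",
] as const;
type Criterion = typeof CRITERIA[number];

// Average per-criterion marks (each 0-10) into one 0-10 quality score,
// rounded to one decimal place.
function qualityScore(marks: Record<Criterion, number>): number {
  const total = CRITERIA.reduce((sum, c) => sum + marks[c], 0);
  return Math.round((total / CRITERIA.length) * 10) / 10;
}
```

An answer strong on accuracy but missing examples is pulled down by the weak criterion, which is what drives the revise step toward the gaps the critique identified.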
When to Use Reflect
Use the Reflect engine when:
- Output quality is more important than speed
- Tasks benefit from self-critique and revision
- You need explanations or content that is polished and thorough
- The agent should iteratively improve its responses
- Complex topics require careful consideration and refinement