Overview
The ReflectEngine implements a three-phase loop: generate an initial answer, critique it using a separate model call, then revise based on the critique. This process repeats until the answer meets a quality threshold or the maximum number of reflections is reached.
This engine prioritizes quality over speed and is ideal for tasks where accuracy matters more than response time.
When to Use
Use ReflectEngine when:
- Accuracy and quality are more important than speed
- Answers require careful validation or fact-checking
- You want the model to self-correct mistakes
- Tasks involve writing, analysis, or decision-making that benefits from revision
- You need transparent quality assessment
Constructor
Configuration
- `maxReflections`: Maximum number of reflection + revision cycles. Each cycle includes one critique and one revision attempt.
- `acceptanceThreshold`: Score threshold (0-10) below which the engine revises. If the critique score is greater than or equal to this value, the answer is accepted as-is.
- `maxAnswerSteps`: Maximum tool steps during initial answer generation and revisions.
- Custom critique prompt: Should instruct the model to output a JSON object with `score`, `issues`, `suggestion`, and `needs_revision` fields.
Usage Example
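The real constructor lives in `packages/reasoning/src/engines/reflect.ts` and is not reproduced in this extract. The sketch below defines a stand-in stub that mirrors the options documented above (the `critiquePrompt` option name is an assumption) so the call shape is concrete and runs standalone:

```typescript
// Stand-alone sketch: this stub mirrors ReflectEngine's documented options
// so the construction call runs here; the real class lives in the reasoning package.
interface ReflectEngineOptions {
  maxReflections?: number;      // reflection + revision cycles
  acceptanceThreshold?: number; // 0-10; revise while the critique score is below this
  maxAnswerSteps?: number;      // tool-step cap for generation and revisions
  critiquePrompt?: string;      // custom critique prompt (option name assumed)
}

class ReflectEngine {
  constructor(readonly options: ReflectEngineOptions = {}) {}
}

// Typical construction: cap the loop at 3 cycles, accept scores of 7 or higher.
const engine = new ReflectEngine({
  maxReflections: 3,
  acceptanceThreshold: 7,
  maxAnswerSteps: 5,
});
```

Options left unset would fall back to the engine's defaults.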
How It Works
Phase 1: Generate Initial Answer
- Model receives the user input and generates an initial response
- Can use tools during generation (up to `maxAnswerSteps`)
- Initial answer is extracted and stored
Phase 2: Reflection Loop
For each reflection cycle (up to `maxReflections`):
- Critique: Send answer to model with critique prompt
- Parse: Extract JSON critique with score, issues, and suggestions
- Evaluate: Check if score >= `acceptanceThreshold` or `needs_revision` is false
- Decide:
- If acceptable: break loop and return answer
- If not acceptable: proceed to revision
Phase 3: Revise
- Model receives original question, current answer, and critique
- Generates improved answer (may use tools)
- Updated answer becomes the current answer
- Loop continues with another critique
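The three phases above combine into a single control flow. This is an illustrative sketch only: the `generate`, `critiqueFn`, and `revise` callbacks are invented names standing in for the engine's internal model calls, and tool use is elided:

```typescript
// Critique fields as documented on this page.
interface Critique {
  score: number;          // 0-10 quality score
  issues: string[];
  suggestion: string;
  needs_revision: boolean;
}

// Sketch of the reflect loop; the callback names are assumptions,
// not the engine's real API.
function runReflect(
  generate: (input: string) => string,
  critiqueFn: (answer: string) => Critique,
  revise: (input: string, answer: string, c: Critique) => string,
  input: string,
  maxReflections = 3,
  acceptanceThreshold = 7,
): { answer: string; cycles: number } {
  // Phase 1: generate the initial answer.
  let answer = generate(input);
  let cycles = 0;
  for (; cycles < maxReflections; cycles++) {
    // Phase 2: critique the current answer.
    const c = critiqueFn(answer);
    // Accept when the score clears the threshold or no revision is requested.
    if (c.score >= acceptanceThreshold || !c.needs_revision) break;
    // Phase 3: revise using the critique, then loop for another critique.
    answer = revise(input, answer, c);
  }
  return { answer, cycles };
}
```

With a first critique below the threshold and a second above it, the loop revises once and then accepts.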
Critique Response Schema
The critique model should return a JSON object with `score`, `issues`, `suggestion`, and `needs_revision` fields.
Step Emissions
- ThoughtStep: Emitted after initial answer generation and before each revision
- ReflectionStep: Emitted after each critique with assessment and revision decision
- ToolCallStep / ToolResultStep: Emitted during answer generation/revision tool usage
- ResponseStep: Emitted with the final accepted answer
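The step type names above come from this page, but their exact fields are not shown in this extract. A hypothetical consumer that filters an emitted step sequence might look like this (all field names are assumptions):

```typescript
// Hypothetical step shapes: the kind names are documented above,
// the payload fields are assumptions for illustration.
type Step =
  | { kind: "ThoughtStep"; text: string }
  | { kind: "ReflectionStep"; score: number; needsRevision: boolean }
  | { kind: "ToolCallStep"; tool: string }
  | { kind: "ToolResultStep"; tool: string; result: string }
  | { kind: "ResponseStep"; text: string };

// Pull the final accepted answer out of an emitted step sequence.
function finalAnswer(steps: Step[]): string | undefined {
  for (const s of steps) {
    if (s.kind === "ResponseStep") return s.text;
  }
  return undefined;
}
```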
Example Reflection Step
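The original example is not reproduced in this extract. Based on the critique fields documented on this page, a ReflectionStep payload plausibly looks like the following (the wrapper field names beyond `score`, `issues`, `suggestion`, and `needs_revision` are assumptions):

```typescript
// Illustrative ReflectionStep payload; only the critique field names
// are taken from this page, the rest is assumed structure.
const exampleReflectionStep = {
  kind: "ReflectionStep",
  critique: {
    score: 6,
    issues: ["The answer omits the edge case for empty input."],
    suggestion: "Address how empty input is handled before concluding.",
    needs_revision: true,
  },
  willRevise: true, // score 6 is below an acceptanceThreshold of 7
};
```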
Default Critique Prompt
Error Handling
- Max answer steps exceeded: the engine hit the `maxAnswerSteps` limit during generation or revision
Performance Considerations
- Slower than ReactEngine: Multiple model calls per reflection cycle
- Higher token usage: Critiques and revisions add overhead
- Quality tradeoff: Better answers at the cost of latency
String Alias
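The alias value itself is not shown in this extract. Engines of this kind are commonly registered under a short string so they can be selected from configuration; the alias `"reflect"` and the registry API below are both assumptions:

```typescript
// Hypothetical registry sketch: the real alias string and lookup API
// are not shown on this page, so both are assumptions here.
type EngineFactory = () => { name: string };

const engineRegistry: Record<string, EngineFactory> = {
  reflect: () => ({ name: "ReflectEngine" }), // alias value assumed
};

function resolveEngine(alias: string) {
  const factory = engineRegistry[alias];
  if (!factory) throw new Error(`Unknown engine alias: ${alias}`);
  return factory();
}
```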
Implementation Reference
Source: `packages/reasoning/src/engines/reflect.ts:65`