Overview

The Reflect engine implements a self-critique loop: the agent generates an answer, evaluates its quality, and revises if necessary. This iterative refinement typically yields higher-quality output at the cost of additional model calls. The approach is particularly effective for:
  • Tasks requiring high-quality, polished outputs
  • Complex explanations or analyses
  • Scenarios where self-correction improves results
  • Content that benefits from iterative refinement

How It Works

  1. Generate: Agent produces an initial answer
  2. Reflect: Agent critiques its own answer, identifying strengths and weaknesses
  3. Evaluate: Agent scores the answer quality (0-10 scale)
  4. Revise: If below threshold, agent generates an improved version
  5. Repeat: Continue the reflection-revision cycle until the quality threshold is met or the maximum number of reflections is reached
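The loop above can be sketched in plain TypeScript. This is a shape sketch only, not the engine's implementation: the `generate` and `critique` functions stand in for model calls, and the names are illustrative rather than part of the library's API.

```typescript
// Minimal sketch of a reflect loop. `generate` and `critique` are
// placeholders for model calls; the 0-10 score and threshold mirror
// the engine's documented defaults.
type Critique = { score: number; assessment: string }

async function reflectLoop(
    generate: (prompt: string, feedback?: string) => Promise<string>,
    critique: (answer: string) => Promise<Critique>,
    prompt: string,
    maxReflections = 3,
    acceptanceThreshold = 7,
): Promise<string> {
    let answer = await generate(prompt)                       // 1. Generate
    for (let i = 0; i < maxReflections; i++) {
        const { score, assessment } = await critique(answer)  // 2-3. Reflect & Evaluate
        if (score >= acceptanceThreshold) break               // accept the answer
        answer = await generate(prompt, assessment)           // 4. Revise using the critique
    }                                                         // 5. Repeat, bounded by maxReflections
    return answer
}
```

Note that the loop is bounded: even if the score never reaches the threshold, the last revision is returned once `maxReflections` cycles have run.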

Complete Example

This example shows a Reflect agent explaining a technical concept:
import 'dotenv/config'

import { createAgent, ReasoningStep } from '@agentlib/core'
import { openai } from '@agentlib/openai'
import { ReflectEngine } from '@agentlib/reasoning'

const model = openai({
    apiKey: process.env['OPENAI_API_KEY']!,
    model: process.env['OPENAI_MODEL']!,
    baseURL: process.env['OPENAI_BASE_URL']!,
})

async function main() {
    const agent = createAgent({ name: 'reflect-agent' })
        .provider(model)
        .reasoning(new ReflectEngine({
            maxReflections: 2,
            acceptanceThreshold: 8,
        }))

    agent.on('step:reasoning', (step: ReasoningStep) => {
        if (step.type === 'reflection') {
            console.log(`🔍 Reflection: ${step.assessment}`)
            console.log(`   Needs revision: ${step.needsRevision}`)
        }
    })

    const result = await agent.run(
        'Explain the CAP theorem and its implications for distributed systems design.'
    )
    console.log('Final output:', result.output)
    console.log('\nSteps taken:', result.state.steps.length)
}

main().catch(console.error)

Configuration Options

  • maxReflections: Maximum number of reflection cycles (default: 3)
  • acceptanceThreshold: Quality score (0-10) required to accept the answer (default: 7)
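These two options also bound cost. Assuming each reflection and each revision is one model call (an assumption, not something the library documents here), the worst case is one initial generation plus a critique/revision pair per cycle:

```typescript
// Worst-case model calls for a reflect run, assuming one call per
// reflection and one per revision (illustrative arithmetic, not
// library code): 1 initial generation + 2 calls per cycle.
function maxModelCalls(maxReflections: number): number {
    return 1 + maxReflections * 2
}
// With the default maxReflections of 3: 1 + 3 * 2 = 7 calls worst case.
```

Raising `acceptanceThreshold` makes early acceptance less likely, pushing runs toward this worst case; lowering it trades polish for fewer calls.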

Monitoring Reflections

Observe the self-critique process:
agent.on('step:reasoning', (step: ReasoningStep) => {
    if (step.type === 'reflection') {
        console.log('🔍 Reflection:', step.assessment)
        console.log('   Quality score:', step.score)
        console.log('   Needs revision:', step.needsRevision)
    }
})

Example Output

For the CAP theorem question:
🔍 Reflection: Initial answer covers basics but lacks concrete examples
   Needs revision: true

🔍 Reflection: Much better - includes examples and practical implications
   Needs revision: false

Final output: The CAP theorem states that distributed systems can only 
guarantee two of three properties: Consistency, Availability, and Partition 
tolerance. For example, systems like MongoDB prioritize CP (consistency and 
partition tolerance), while Cassandra prioritizes AP (availability and 
partition tolerance). This means architects must choose trade-offs based 
on their specific requirements...

Steps taken: 4

Quality Assessment

The reflection step typically evaluates:
  • Completeness of the answer
  • Accuracy of information
  • Clarity of explanation
  • Presence of concrete examples
  • Overall coherence and structure
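One way to picture how these criteria could combine into the 0-10 score is a simple rubric, with each criterion contributing 0-2 points. This is purely illustrative: the engine delegates scoring to the model, and the `Rubric` type below is not part of the library.

```typescript
// Illustrative rubric: five 0-2 sub-scores summing to a 0-10 quality
// score, mirroring the criteria listed above. The real engine asks the
// model for this judgment; this only sketches the shape of the scale.
type Rubric = {
    completeness: number // 0-2
    accuracy: number     // 0-2
    clarity: number      // 0-2
    examples: number     // 0-2
    coherence: number    // 0-2
}

function qualityScore(r: Rubric): number {
    return r.completeness + r.accuracy + r.clarity + r.examples + r.coherence
}
```

Under this framing, a draft scoring 8 with `acceptanceThreshold: 8` would be accepted, while anything lower triggers another revision cycle.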

When to Use Reflect

Use the Reflect engine when:
  • Output quality is more important than speed
  • Tasks benefit from self-critique and revision
  • You need explanations or content that is polished and thorough
  • The agent should iteratively improve its responses
  • Complex topics require careful consideration and refinement
