Custom Reasoning Engine
Reasoning engines control how agents think, plan, and execute tasks. While AgentLIB provides built-in engines like ReAct, Planner, and Chain-of-Thought, you can create custom engines to implement specialized behaviors.

Understanding Reasoning Engines
A reasoning engine is responsible for:
- Orchestrating the conversation flow
- Deciding when to call tools
- Generating the final response
- Tracking reasoning steps for observability
The ReasoningEngine interface:

```ts
interface ReasoningEngine<TData = unknown> {
  readonly name: string
  execute(rCtx: ReasoningContext<TData>): Promise<string>
}
```
The Reasoning Context
Engines receive a ReasoningContext with everything needed to execute:

```ts
interface ReasoningContext<TData = unknown> {
  ctx: ExecutionContext<TData> // The full execution context
  model: ModelProvider         // The configured LLM provider
  tools: ToolRegistry          // All registered tools
  policy: AgentPolicy          // Constraints (maxSteps, etc.)
  systemPrompt?: string        // Agent's system prompt

  // Methods
  pushStep(step: ReasoningStep): void // Record a reasoning step
  callTool(name: string, args: Record<string, unknown>, callId: string): Promise<unknown>
}
```
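As a concrete illustration of the two context methods, here is a sketch of an execute() fragment that runs a tool via callTool and records the outcome via pushStep. The types below are simplified local stand-ins for illustration, not the real AgentLIB exports, and the tool name, arguments, and call ID are placeholder values:

```typescript
// Simplified stand-ins for illustration (assumed shapes, not the real AgentLIB exports)
interface Step {
  type: string
  content: string
  engine: string
}

interface MiniReasoningContext {
  pushStep(step: Step): void
  callTool(name: string, args: Record<string, unknown>, callId: string): Promise<unknown>
}

// An execute() fragment: run one tool, record the result as a thought step
async function runToolStep(rCtx: MiniReasoningContext): Promise<string> {
  const result = await rCtx.callTool('search', { query: 'weather' }, 'call-1')
  rCtx.pushStep({
    type: 'thought',
    content: `Tool returned: ${JSON.stringify(result)}`,
    engine: 'my-engine'
  })
  return String(result)
}
```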
Minimal Custom Engine
Here’s the simplest possible custom engine from the AgentLIB examples:

```ts
import { createAgent, ReasoningEngine, ReasoningContext } from '@agentlib/core'
import { openai } from '@agentlib/openai'

const myEngine: ReasoningEngine = {
  name: 'my-engine',
  async execute(rCtx: ReasoningContext) {
    console.log('--- Custom engine executing ---')

    // Call the model with the current conversation
    const response = await rCtx.model.complete({
      messages: rCtx.ctx.state.messages
    })

    // Record the response as a step
    rCtx.pushStep({
      type: 'response',
      content: response.message.content,
      engine: 'my-engine'
    })

    // Return the final output
    return response.message.content
  }
}

const agent = createAgent({ name: 'custom-agent' })
  .provider(openai({
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o'
  }))
  .reasoning(myEngine)

const result = await agent.run('Hello!')
console.log('Final output:', result.output)
```
Example 1: Guided Prompting Engine
An engine that enforces structured prompting:

```ts
import { createAgent } from '@agentlib/core'
import type { ReasoningEngine, ReasoningContext } from '@agentlib/core'
import { openai } from '@agentlib/openai'

interface GuidedPromptConfig {
  template: string
  maxRounds: number
}

class GuidedPromptEngine implements ReasoningEngine {
  name = 'guided-prompt'

  constructor(private config: GuidedPromptConfig) {}

  async execute(rCtx: ReasoningContext): Promise<string> {
    const { ctx, model, systemPrompt } = rCtx
    const { template } = this.config

    // Build the structured prompt
    const guidedPrompt = template
      .replace('{input}', ctx.input)
      .replace('{context}', this.buildContext(ctx))

    // Add a system message with the guided structure
    const messages = [
      { role: 'system' as const, content: systemPrompt || 'You are a helpful assistant.' },
      { role: 'user' as const, content: guidedPrompt },
      ...ctx.state.messages.filter(m => m.role !== 'system')
    ]

    rCtx.pushStep({
      type: 'thought',
      content: `Using guided template: ${template.substring(0, 50)}...`,
      engine: this.name
    })

    // Get the response
    const response = await model.complete({ messages })

    rCtx.pushStep({
      type: 'response',
      content: response.message.content,
      engine: this.name
    })

    return response.message.content
  }

  private buildContext(ctx: any): string {
    const recentMessages = ctx.state.messages.slice(-3)
    return recentMessages.map(m => `${m.role}: ${m.content}`).join('\n')
  }
}

const agent = createAgent({ name: 'guided-agent' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .reasoning(new GuidedPromptEngine({
    template: `Task: {input}\n\nContext:\n{context}\n\nProvide a structured response with:\n1. Analysis\n2. Solution\n3. Next Steps`,
    maxRounds: 5
  }))
```
Example 2: Self-Correcting Engine
An engine that reviews and improves its own output:

```ts
import { createAgent } from '@agentlib/core'
import type { ReasoningEngine, ReasoningContext } from '@agentlib/core'
import { openai } from '@agentlib/openai'

class SelfCorrectingEngine implements ReasoningEngine {
  name = 'self-correcting'

  constructor(private maxIterations = 3) {}

  async execute(rCtx: ReasoningContext): Promise<string> {
    const { ctx, model, systemPrompt } = rCtx
    let currentAnswer = ''
    let iteration = 0

    while (iteration < this.maxIterations) {
      iteration++

      // Generate an answer
      rCtx.pushStep({
        type: 'thought',
        content: `Iteration ${iteration}: Generating answer`,
        engine: this.name
      })

      const messages = [
        { role: 'system' as const, content: systemPrompt || 'You are a helpful assistant.' },
        ...ctx.state.messages,
        ...(currentAnswer ? [
          {
            role: 'assistant' as const,
            content: `Previous attempt: ${currentAnswer}`
          },
          {
            role: 'user' as const,
            content: 'Review and improve your previous answer.'
          }
        ] : [])
      ]

      const response = await model.complete({ messages })
      currentAnswer = response.message.content

      // Self-review
      rCtx.pushStep({
        type: 'thought',
        content: `Reviewing answer quality...`,
        engine: this.name
      })

      const review = await this.reviewAnswer(model, currentAnswer, ctx.input)

      rCtx.pushStep({
        type: 'reflection',
        assessment: review.assessment,
        needsRevision: review.needsRevision,
        engine: this.name
      })

      if (!review.needsRevision) {
        rCtx.pushStep({
          type: 'thought',
          content: `Answer approved after ${iteration} iteration(s)`,
          engine: this.name
        })
        break
      }
    }

    rCtx.pushStep({
      type: 'response',
      content: currentAnswer,
      engine: this.name
    })

    return currentAnswer
  }

  private async reviewAnswer(
    model: any,
    answer: string,
    originalInput: string
  ): Promise<{ assessment: string; needsRevision: boolean }> {
    const reviewPrompt = `
Original Question: ${originalInput}
Proposed Answer: ${answer}

Review this answer for:
1. Accuracy
2. Completeness
3. Clarity

Respond with JSON: { "assessment": "...", "needsRevision": true/false }
`

    const response = await model.complete({
      messages: [{ role: 'user', content: reviewPrompt }]
    })

    try {
      const review = JSON.parse(response.message.content)
      return {
        assessment: review.assessment || 'No assessment provided',
        needsRevision: review.needsRevision ?? false
      }
    } catch {
      return { assessment: 'Review parsing failed', needsRevision: false }
    }
  }
}

const agent = createAgent({ name: 'self-correcting-agent' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .reasoning(new SelfCorrectingEngine(3))
```
Example 3: Tool-First Engine
An engine that prioritizes tool usage over direct responses:

```ts
import { createAgent } from '@agentlib/core'
import type { ReasoningEngine, ReasoningContext, ModelMessage } from '@agentlib/core'
import { openai } from '@agentlib/openai'

class ToolFirstEngine implements ReasoningEngine {
  name = 'tool-first'

  constructor(private maxSteps = 10) {}

  async execute(rCtx: ReasoningContext): Promise<string> {
    const { ctx, model, tools, systemPrompt } = rCtx
    let step = 0

    while (step < this.maxSteps) {
      step++

      const messages: ModelMessage[] = [
        {
          role: 'system',
          content: systemPrompt || 'You are a tool-using assistant. Always use tools when available.'
        },
        ...ctx.state.messages
      ]

      // Always request tool schemas
      const toolSchemas = tools.getSchemas()

      rCtx.pushStep({
        type: 'thought',
        content: `Step ${step}: Checking for applicable tools (${toolSchemas.length} available)`,
        engine: this.name
      })

      const response = await model.complete({
        messages,
        tools: toolSchemas
      })

      // Handle tool calls
      if (response.toolCalls && response.toolCalls.length > 0) {
        for (const toolCall of response.toolCalls) {
          try {
            await rCtx.callTool(
              toolCall.name,
              toolCall.arguments,
              toolCall.id
            )
            rCtx.pushStep({
              type: 'thought',
              content: `Tool ${toolCall.name} executed successfully`,
              engine: this.name
            })
          } catch (error) {
            rCtx.pushStep({
              type: 'thought',
              content: `Tool ${toolCall.name} failed: ${error}`,
              engine: this.name
            })
          }
        }
        continue // Go to the next iteration
      }

      // No more tools to call, return the response
      rCtx.pushStep({
        type: 'response',
        content: response.message.content,
        engine: this.name
      })

      return response.message.content
    }

    return 'Maximum steps reached without resolution.'
  }
}

// Assumes searchTool and calculatorTool are defined elsewhere
const agent = createAgent({ name: 'tool-first-agent' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .tool(searchTool)
  .tool(calculatorTool)
  .reasoning(new ToolFirstEngine(8))
```
Example 4: Multi-Phase Engine
An engine that executes distinct phases:

```ts
import { createAgent } from '@agentlib/core'
import type { ReasoningEngine, ReasoningContext } from '@agentlib/core'
import { openai } from '@agentlib/openai'

enum Phase {
  UNDERSTAND = 'understand',
  RESEARCH = 'research',
  SYNTHESIZE = 'synthesize',
  RESPOND = 'respond'
}

class MultiPhaseEngine implements ReasoningEngine {
  name = 'multi-phase'

  async execute(rCtx: ReasoningContext): Promise<string> {
    const phases = [
      Phase.UNDERSTAND,
      Phase.RESEARCH,
      Phase.SYNTHESIZE,
      Phase.RESPOND
    ]
    const phaseOutputs = new Map<Phase, string>()

    for (const phase of phases) {
      rCtx.pushStep({
        type: 'thought',
        content: `Entering phase: ${phase}`,
        engine: this.name
      })

      const output = await this.executePhase(phase, rCtx, phaseOutputs)
      phaseOutputs.set(phase, output)
    }

    const finalResponse = phaseOutputs.get(Phase.RESPOND)!

    rCtx.pushStep({
      type: 'response',
      content: finalResponse,
      engine: this.name
    })

    return finalResponse
  }

  private async executePhase(
    phase: Phase,
    rCtx: ReasoningContext,
    previousOutputs: Map<Phase, string>
  ): Promise<string> {
    const { ctx, model } = rCtx

    const phasePrompts = {
      [Phase.UNDERSTAND]: `Analyze this request and identify key requirements: ${ctx.input}`,
      [Phase.RESEARCH]: `Based on requirements: ${previousOutputs.get(Phase.UNDERSTAND)}, identify what information is needed.`,
      [Phase.SYNTHESIZE]: `Combine findings: ${previousOutputs.get(Phase.RESEARCH)} into a coherent response strategy.`,
      [Phase.RESPOND]: `Generate final response using strategy: ${previousOutputs.get(Phase.SYNTHESIZE)}`
    }

    const response = await model.complete({
      messages: [
        { role: 'system', content: `You are in the ${phase} phase.` },
        { role: 'user', content: phasePrompts[phase] }
      ]
    })

    rCtx.pushStep({
      type: 'thought',
      content: `Phase ${phase} completed`,
      engine: this.name
    })

    return response.message.content
  }
}

const agent = createAgent({ name: 'multi-phase-agent' })
  .provider(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .reasoning(new MultiPhaseEngine())
```
Reasoning Step Types
Engines can push different types of steps for observability:

```ts
// Thought - internal reasoning
rCtx.pushStep({
  type: 'thought',
  content: 'Analyzing user request...',
  engine: 'my-engine'
})

// Plan - structured task breakdown
rCtx.pushStep({
  type: 'plan',
  tasks: [
    { id: '1', description: 'Search for info', status: 'pending' },
    { id: '2', description: 'Summarize', status: 'pending', dependsOn: ['1'] }
  ],
  engine: 'my-engine'
})

// Reflection - self-assessment
rCtx.pushStep({
  type: 'reflection',
  assessment: 'The answer is accurate but could be more concise',
  needsRevision: true,
  engine: 'my-engine'
})

// Response - final output
rCtx.pushStep({
  type: 'response',
  content: 'Here is my answer...',
  engine: 'my-engine'
})
```
Using Custom Engines
Engines can be registered globally or used per-agent:

```ts
import { createAgent, registerEngine } from '@agentlib/core'

// Option 1: Register globally
registerEngine('my-engine', myEngine)

const agent1 = createAgent({ name: 'agent1' })
  .provider(model)
  .reasoning('my-engine') // Use by name

// Option 2: Pass an instance directly
const agent2 = createAgent({ name: 'agent2' })
  .provider(model)
  .reasoning(new CustomEngine()) // Use instance
```
Best Practices
- Always push steps - This provides visibility into the engine's decision-making
- Respect policy limits - Check rCtx.policy.maxSteps and other constraints
- Handle errors gracefully - Wrap model and tool calls in try-catch blocks
- Return meaningful output - The final string should directly answer the user's input
- Update message state - Add model responses to ctx.state.messages if needed for multi-turn conversations
- Use typed data - Leverage TypeScript generics for custom data types
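As a sketch, here is an engine skeleton that applies several of these practices at once: it honors a maxSteps policy limit, wraps model calls in try-catch, updates message state, and returns a meaningful fallback. The types are simplified stand-ins rather than the exact AgentLIB interfaces:

```typescript
// Simplified stand-ins for illustration (assumed shapes, not the real AgentLIB exports)
interface Msg { role: string; content: string }

interface MiniCtx {
  policy: { maxSteps: number }
  model: { complete(req: { messages: Msg[] }): Promise<{ message: Msg }> }
  state: { messages: Msg[] }
  pushStep(step: { type: string; content: string; engine: string }): void
}

const robustEngine = {
  name: 'robust',
  async execute(rCtx: MiniCtx): Promise<string> {
    // Respect the policy limit instead of looping unbounded
    for (let step = 0; step < rCtx.policy.maxSteps; step++) {
      try {
        const response = await rCtx.model.complete({ messages: rCtx.state.messages })
        // Update message state so multi-turn callers see this response
        rCtx.state.messages.push(response.message)
        rCtx.pushStep({ type: 'response', content: response.message.content, engine: 'robust' })
        return response.message.content
      } catch (err) {
        // Record the failure and retry on the next iteration
        rCtx.pushStep({ type: 'thought', content: `Model call failed: ${err}`, engine: 'robust' })
      }
    }
    // Meaningful output even when every attempt failed
    return 'Unable to produce a response within the configured step limit.'
  }
}
```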
Next Steps
- Explore Built-in Reasoning Strategies
- Learn about Multi-Agent Orchestration
- See Custom Middleware for lifecycle hooks