
Overview

This example demonstrates advanced multi-step reasoning using AXON’s reason primitive, chain-of-thought reasoning, and self-healing validation. It shows how to break down complex problems into reasoning steps with confidence tracking and adaptive refinement.

Use Case

Solve complex problems requiring:
  • Multi-step logical reasoning
  • Chain-of-thought explanation
  • Recursive problem decomposition
  • Validation at each step
  • Self-correction when reasoning fails
  • Mathematical or logical proofs

Complete Code

multi_step_reasoning.axon
// AXON Example — Multi-Step Reasoning
// Complex problem solving with chain-of-thought and self-healing

persona LogicalReasoner {
  domain: ["logic", "mathematics", "problem solving"]
  tone: methodical
  confidence_threshold: 0.85
  chain_of_thought: true
}

context ReasoningMode {
  memory: session
  language: "en"
  depth: exhaustive
  max_tokens: 8192
  temperature: 0.4
}

anchor LogicalConsistency {
  require: logical_validity
  confidence_floor: 0.80
  unknown_response: "Unable to derive conclusion with certainty"
  on_violation: raise AnchorBreachError
}

type ReasoningStep {
  step_number: Integer,
  premise: FactualClaim,
  conclusion: FactualClaim,
  justification: FactualClaim,
  confidence: ConfidenceScore
}

type Contradiction {
  statement_a: FactualClaim,
  statement_b: FactualClaim,
  explanation: FactualClaim
}

type ReasoningChain {
  problem: FactualClaim,
  steps: List<ReasoningStep>,
  final_conclusion: FactualClaim,
  contradictions: List<Contradiction>?,
  overall_confidence: ConfidenceScore,
  reasoning_trace: List<FactualClaim>
}

flow SolveComplexProblem(problem: String) -> ReasoningChain {
  step DecomposeProblem {
    given: problem
    ask: "Break down this problem into smaller sub-problems"
    output: SubProblems
  }
  
  reason InitialAnalysis {
    given: problem
    about: "problem structure and requirements"
    ask: "What are the key elements and constraints?"
    depth: 3
    show_work: true
    chain_of_thought: true
    output: ProblemAnalysis
  }
  
  validate InitialAnalysis against AnalysisSchema {
    if confidence < 0.85 -> refine(max_attempts: 3)
    if incomplete -> warn "Analysis may be incomplete"
  }
  
  reason StepByStepSolution {
    given: [SubProblems, ProblemAnalysis]
    about: "solving each sub-problem"
    ask: "Solve each sub-problem step by step, showing all work"
    depth: 5
    show_work: true
    chain_of_thought: true
    output: SolutionSteps
  }
  
  step CheckConsistency {
    given: SolutionSteps
    ask: "Identify any logical contradictions or inconsistencies"
    output: ConsistencyCheck
  }
  
  validate ConsistencyCheck against LogicSchema {
    if contradictions_found -> refine(max_attempts: 2)
    if confidence < 0.80 -> refine(max_attempts: 1)
  }
  
  reason FinalSynthesis {
    given: [SolutionSteps, ConsistencyCheck]
    about: "combining sub-solutions"
    ask: "Synthesize sub-solutions into final conclusion"
    depth: 4
    show_work: true
    output: FinalConclusion
  }
  
  weave [
    ProblemAnalysis,
    SolutionSteps,
    ConsistencyCheck,
    FinalConclusion
  ] into ReasoningChain {
    format: StructuredReport
    priority: [reasoning_trace, final_conclusion, contradictions]
  }
  
  remember(ReasoningChain) -> ReasoningHistory
}

run SolveComplexProblem(inputProblem)
  as LogicalReasoner
  within ReasoningMode
  constrained_by [LogicalConsistency]
  on_failure: retry(backoff: exponential)
  output_to: "reasoning.json"
  effort: high
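Because the run block writes its result to reasoning.json, downstream code can load the chain and gate on its overall confidence. A minimal Python sketch of such a consumer (the field names follow the ReasoningChain type; the helper itself is illustrative, not part of AXON):

```python
import json

def load_reasoning_chain(path="reasoning.json", floor=0.80):
    """Load a ReasoningChain result and reject it below the confidence floor."""
    with open(path) as f:
        chain = json.load(f)
    if chain["overall_confidence"] < floor:
        raise ValueError(f"chain confidence {chain['overall_confidence']} below floor {floor}")
    return chain
```

The 0.80 floor mirrors the LogicalConsistency anchor's confidence_floor, so the consumer rejects exactly what the anchor would.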

Key Components

Persona: LogicalReasoner

persona LogicalReasoner {
  domain: ["logic", "mathematics", "problem solving"]
  tone: methodical
  confidence_threshold: 0.85
  chain_of_thought: true
}
Defines a systematic reasoner:
  • Domains: Logic, mathematics, problem solving
  • Methodical tone: Step-by-step, careful
  • High threshold: 0.85 for sound reasoning
  • Chain-of-thought: Always show reasoning steps

Context: ReasoningMode

context ReasoningMode {
  memory: session
  language: "en"
  depth: exhaustive
  max_tokens: 8192
  temperature: 0.4
}
Configured for deep reasoning:
  • Session memory: Remember previous reasoning
  • Exhaustive depth: Thorough analysis
  • Large token budget: 8192 for complex reasoning
  • Moderate temperature: 0.4 balances consistency and creativity

Anchor: LogicalConsistency

anchor LogicalConsistency {
  require: logical_validity
  confidence_floor: 0.80
  unknown_response: "Unable to derive conclusion with certainty"
  on_violation: raise AnchorBreachError
}
Enforces logical rigor:
  • Requires: Logical validity (no invalid inferences)
  • High floor: 0.80 minimum confidence
  • Explicit uncertainty: Admit when conclusion uncertain
The LogicalConsistency anchor prevents invalid logical inferences. If the LLM attempts an unsound reasoning step, AXON’s self-healing runtime will retry with failure context.
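The retry-with-failure-context behavior can be pictured as a loop that feeds each failure reason into the next attempt. A hypothetical Python sketch of the idea (the generate/validate callbacks and error text are illustrative, not the actual AXON runtime):

```python
def self_heal(generate, validate, max_attempts=3):
    """Retry generation, feeding each failure's context into the next attempt."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        result = generate(feedback)   # feedback is None on the first attempt
        ok, reason = validate(result)
        if ok:
            return result
        feedback = f"attempt {attempt} failed: {reason}"  # context for the retry
    raise RuntimeError(f"AnchorBreachError after {max_attempts} attempts: {reason}")
```

The key point is that each retry sees why the previous attempt failed, rather than blindly regenerating.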

Custom Types

type ReasoningStep {
  step_number: Integer,
  premise: FactualClaim,
  conclusion: FactualClaim,
  justification: FactualClaim,
  confidence: ConfidenceScore
}
Captures an individual reasoning step:
  • Step number: Position in chain
  • Premise: Starting point (fact)
  • Conclusion: Derived result (fact)
  • Justification: Why conclusion follows (fact)
  • Confidence: How certain we are
type Contradiction {
  statement_a: FactualClaim,
  statement_b: FactualClaim,
  explanation: FactualClaim
}
Identifies logical contradictions.
type ReasoningChain {
  problem: FactualClaim,
  steps: List<ReasoningStep>,
  final_conclusion: FactualClaim,
  contradictions: List<Contradiction>?,
  overall_confidence: ConfidenceScore,
  reasoning_trace: List<FactualClaim>
}
Complete reasoning output:
  • All reasoning steps
  • Final conclusion
  • Any contradictions found
  • Overall confidence
  • Full reasoning trace

Flow: SolveComplexProblem

A six-step reasoning pipeline with validation.

Step 1: DecomposeProblem
step DecomposeProblem {
  given: problem
  ask: "Break down this problem into smaller sub-problems"
  output: SubProblems
}
Decomposes the complex problem into manageable sub-problems.

Step 2: InitialAnalysis (Reasoning)
reason InitialAnalysis {
  given: problem
  about: "problem structure and requirements"
  ask: "What are the key elements and constraints?"
  depth: 3
  show_work: true
  chain_of_thought: true
  output: ProblemAnalysis
}
Uses explicit reasoning:
  • Depth 3: Moderate reasoning depth
  • Show work: Display reasoning steps
  • Chain-of-thought: Explicit reasoning trace
Validation
validate InitialAnalysis against AnalysisSchema {
  if confidence < 0.85 -> refine(max_attempts: 3)
  if incomplete -> warn "Analysis may be incomplete"
}
Ensures analysis quality.

Step 3: StepByStepSolution (Deep Reasoning)
reason StepByStepSolution {
  given: [SubProblems, ProblemAnalysis]
  about: "solving each sub-problem"
  ask: "Solve each sub-problem step by step, showing all work"
  depth: 5
  show_work: true
  chain_of_thought: true
  output: SolutionSteps
}
Applies deep reasoning (depth 5) to solve each sub-problem.

Step 4: CheckConsistency
step CheckConsistency {
  given: SolutionSteps
  ask: "Identify any logical contradictions or inconsistencies"
  output: ConsistencyCheck
}
Validates logical consistency.

Validation
validate ConsistencyCheck against LogicSchema {
  if contradictions_found -> refine(max_attempts: 2)
  if confidence < 0.80 -> refine(max_attempts: 1)
}
Retries refinement if contradictions are found.

Step 5: FinalSynthesis (Reasoning)
reason FinalSynthesis {
  given: [SolutionSteps, ConsistencyCheck]
  about: "combining sub-solutions"
  ask: "Synthesize sub-solutions into final conclusion"
  depth: 4
  show_work: true
  output: FinalConclusion
}
Combines sub-solutions into the final answer.

Step 6: Synthesis and Memory
weave [
  ProblemAnalysis,
  SolutionSteps,
  ConsistencyCheck,
  FinalConclusion
] into ReasoningChain {
  format: StructuredReport
  priority: [reasoning_trace, final_conclusion, contradictions]
}

remember(ReasoningChain) -> ReasoningHistory
Weaves all intermediate outputs into the final ReasoningChain and stores it in session memory.

Usage

Run Reasoning Flow

# Validate
axon check multi_step_reasoning.axon

# Compile
axon compile multi_step_reasoning.axon

# Execute with tracing
axon run multi_step_reasoning.axon --backend anthropic --trace

Example Input

"A company has three warehouses (A, B, C) and needs to supply four stores 
(1, 2, 3, 4). Warehouse A can supply 100 units, B can supply 150 units, 
and C can supply 120 units. Store 1 needs 80 units, Store 2 needs 90 units, 
Store 3 needs 70 units, and Store 4 needs 130 units. What is the optimal 
distribution plan to minimize total shipping cost if costs are: 
A→1=$2, A→2=$3, A→3=$4, A→4=$5, B→1=$3, B→2=$2, B→3=$3, B→4=$4, 
C→1=$4, C→2=$3, C→3=$2, C→4=$3?"
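Before reasoning about allocations, it is worth confirming the instance is balanced; with the figures from the problem statement, total supply equals total demand. A quick Python check:

```python
supply = {"A": 100, "B": 150, "C": 120}   # warehouse capacities
demand = {1: 80, 2: 90, 3: 70, 4: 130}    # store requirements

# A balanced transportation problem (total supply == total demand)
# always admits a feasible plan that satisfies every store exactly.
assert sum(supply.values()) == sum(demand.values()) == 370
```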

Example Output

{
  "type": "ReasoningChain",
  "problem": "Optimize warehouse-to-store distribution to minimize shipping cost",
  "steps": [
    {
      "step_number": 1,
      "premise": "Total supply (370 units) equals total demand (370 units)",
      "conclusion": "Problem is balanced; solution exists",
      "justification": "Sum of warehouse capacities = Sum of store demands",
      "confidence": 0.95
    },
    {
      "step_number": 2,
      "premise": "Lowest cost routes: B→2 ($2), C→3 ($2), A→1 ($2)",
      "conclusion": "Prioritize these routes in allocation",
      "justification": "Greedy allocation to lowest-cost routes first",
      "confidence": 0.92
    },
    {
      "step_number": 3,
      "premise": "Allocate: B→2 (90), C→3 (70), A→1 (80)",
      "conclusion": "Remaining: B=60, C=50, need 130 for Store 4",
      "justification": "After allocations, B has 60 left, C has 50 left, A exhausted",
      "confidence": 0.94
    },
    {
      "step_number": 4,
      "premise": "Store 4 needs 130; available: B=60, C=50",
      "conclusion": "Allocate B→4 (60 @ $4) and C→4 (50 @ $3) and need 20 more",
      "justification": "60 + 50 = 110, short by 20 units",
      "confidence": 0.88
    },
    {
      "step_number": 5,
      "premise": "Previous allocation error: recalculate",
      "conclusion": "Need to use A→4 for remaining 20 units @ $5",
      "justification": "A can supply remaining after allocating 80 to Store 1",
      "confidence": 0.91
    }
  ],
  "final_conclusion": "Optimal allocation: A→1 (80@$2), A→4 (20@$5), B→2 (90@$2), B→4 (60@$4), C→3 (70@$2), C→4 (50@$3). Total cost: $970",
  "contradictions": [],
  "overall_confidence": 0.90,
  "reasoning_trace": [
    "Check supply-demand balance",
    "Identify lowest-cost routes",
    "Allocate greedily to low-cost routes",
    "Calculate remaining capacity and demand",
    "Adjust allocation to satisfy all constraints",
    "Verify no contradictions",
    "Calculate total cost"
  ]
}
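The cost of the final allocation can be verified mechanically by summing units times per-unit cost over each route (costs taken from the problem statement):

```python
# Per-unit shipping costs for the routes used in the final plan
cost = {("A", 1): 2, ("A", 4): 5, ("B", 2): 2,
        ("B", 4): 4, ("C", 3): 2, ("C", 4): 3}

# Final allocation from the reasoning chain: (warehouse, store) -> units
alloc = {("A", 1): 80, ("A", 4): 20, ("B", 2): 90,
         ("B", 4): 60, ("C", 3): 70, ("C", 4): 50}

total = sum(units * cost[route] for route, units in alloc.items())
assert total == 970  # 160 + 100 + 180 + 240 + 140 + 150
```

This kind of deterministic spot-check is a cheap complement to the LLM's own consistency validation.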

Advanced Patterns

Mathematical Proof

flow ProveTheorem(theorem: String) -> Proof {
  reason Analyze {
    given: theorem
    about: "theorem structure"
    ask: "What proof strategy is appropriate?"
    depth: 3
    output: Strategy
  }
  
  reason ConstructProof {
    given: [theorem, Strategy]
    about: "step-by-step proof"
    ask: "Prove the theorem using the chosen strategy"
    depth: 7
    show_work: true
    chain_of_thought: true
    output: ProofSteps
  }
  
  step VerifyProof {
    given: ProofSteps
    ask: "Verify each step follows logically"
    output: Verification
  }
  
  validate Verification against LogicSchema {
    if invalid_step -> refine(max_attempts: 3)
    if confidence < 0.90 -> refine(max_attempts: 2)
  }
}

Recursive Problem Solving

flow SolveRecursive(problem: String, depth: Integer) -> Solution {
  if depth > 5 -> step BaseCase {
    given: problem
    ask: "Solve directly without further decomposition"
    output: Solution
  }
  else -> step RecursiveCase {
    given: problem
    ask: "Break into sub-problems and solve each"
    output: SubSolutions
  }
  
  // Runs on the recursive path, combining SubSolutions from RecursiveCase
  reason Combine {
    given: SubSolutions
    about: "combining sub-solutions"
    ask: "How do sub-solutions combine to solve original problem?"
    depth: depth + 1
    output: Solution
  }
}
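The same decompose-or-solve decision can be sketched outside AXON. A hypothetical Python version with a recursion cap (the decompose/solve/combine callbacks are illustrative placeholders for LLM calls):

```python
def solve_recursive(problem, decompose, solve, combine, depth=0, max_depth=5):
    """Decompose until the depth cap, then solve leaves directly and combine."""
    if depth > max_depth:
        return solve(problem)          # base case: solve directly
    subs = decompose(problem)
    if not subs:                       # nothing left to split
        return solve(problem)
    return combine([solve_recursive(p, decompose, solve, combine,
                                    depth + 1, max_depth) for p in subs])
```

For example, with decompose splitting nested lists, solve as identity, and combine as sum, `solve_recursive([1, [2, 3], 4], ...)` totals the nested structure.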

Debate-Style Reasoning

flow DebateReasoning(question: String) -> Conclusion {
  reason ArgumentFor {
    given: question
    about: "arguments supporting positive answer"
    ask: "What are the strongest arguments for?"
    depth: 4
    show_work: true
    output: ProArguments
  }
  
  reason ArgumentAgainst {
    given: question
    about: "arguments supporting negative answer"
    ask: "What are the strongest arguments against?"
    depth: 4
    show_work: true
    output: ConArguments
  }
  
  reason Synthesis {
    given: [ProArguments, ConArguments]
    about: "weighing competing arguments"
    ask: "Which arguments are stronger and why?"
    depth: 5
    chain_of_thought: true
    output: Conclusion
  }
}

Causal Reasoning

flow IdentifyCauses(effect: String) -> CausalChain {
  reason ImmediateCauses {
    given: effect
    about: "direct causes"
    ask: "What directly caused this effect?"
    depth: 3
    output: ImmediateCauses
  }
  
  reason RootCauses {
    given: ImmediateCauses
    about: "underlying root causes"
    ask: "What are the root causes behind the immediate causes?"
    depth: 5
    chain_of_thought: true
    output: RootCauses
  }
  
  step BuildCausalChain {
    given: [ImmediateCauses, RootCauses]
    ask: "Build complete causal chain from root to effect"
    output: CausalChain
  }
}

Best Practices

1. Use Appropriate Reasoning Depth

// Simple inference: depth 1-2
reason Quick {
  depth: 2
}

// Complex reasoning: depth 5-7
reason Deep {
  depth: 6
  show_work: true
}

2. Always Show Work for Complex Reasoning

reason Complex {
  show_work: true          // Critical for debugging
  chain_of_thought: true   // Explicit reasoning trace
}

3. Validate Logical Consistency

step CheckLogic {
  ask: "Identify contradictions"
}

validate CheckLogic against LogicSchema {
  if contradictions_found -> refine(max_attempts: 2)
}

4. Use Session Memory for Multi-Turn Reasoning

context ReasoningMode {
  memory: session
}

remember(ReasoningChain) -> ReasoningHistory

5. Apply High Confidence Thresholds

persona LogicalReasoner {
  confidence_threshold: 0.85  // High for sound reasoning
}

anchor LogicalConsistency {
  confidence_floor: 0.80
}

6. Use Moderate Temperature

context ReasoningMode {
  temperature: 0.4  // Balance consistency and creativity
}
