Overview
When you set reasoning=True on an agent, Agno automatically detects whether the model has native reasoning support. If it does not, Agno falls back to Chain-of-Thought reasoning.
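A minimal sketch of enabling the flag (the model id and prompt are illustrative; running it requires the agno package and an API key):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# With reasoning=True, Agno uses the model's native reasoning when
# available and otherwise falls back to Chain-of-Thought.
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
)
agent.print_response(
    "Which is larger: 9.11 or 9.9?",
    stream=True,
    show_full_reasoning=True,  # also print the reasoning steps
)
```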
How It Works
Chain-of-Thought reasoning follows this process: a dedicated reasoning agent works through the problem one step at a time, recording each step as it goes, until it signals a final answer or reaches the step limit. The main agent then uses the accumulated reasoning to produce its response.
Reasoning Step Actions
Each step specifies the next action: continue reasoning, validate the current result, or finalize the answer.
Configuration
Step Limits
Control how many reasoning steps are allowed:
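For example (parameter values are illustrative; defaults may differ by Agno version):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    reasoning_min_steps=2,   # require at least two steps of analysis
    reasoning_max_steps=10,  # cap runaway chains at ten steps
)
```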
Custom Reasoning Agent
Provide your own reasoning agent for more control:
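A sketch of wiring in a custom reasoning agent (the instructions string is an arbitrary example):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# The reasoning agent runs the step-by-step analysis; the outer
# agent produces the final answer from its output.
reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions="Reason like a careful auditor: state assumptions, check units.",
)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    reasoning_agent=reasoning_agent,
)
```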
Separate Reasoning Model
Use a different model for reasoning:
Example: Math Problem
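A sketch combining a separate reasoning model with a worked math question (model ids are assumptions; requires an API key to run):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),       # writes the final answer
    reasoning=True,
    reasoning_model=OpenAIChat(id="gpt-4o"),  # does the step-by-step reasoning
)
agent.print_response(
    "A train travels 120 km in 90 minutes. What is its average speed in km/h?",
    stream=True,
    show_full_reasoning=True,
)
```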
Reasoning with Tools
Chain-of-Thought can use tools during reasoning:
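For example, giving the reasoning loop a calculator (this assumes Agno's bundled CalculatorTools; requires an API key to run):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.calculator import CalculatorTools

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[CalculatorTools()],  # reasoning steps can call these tools
    reasoning=True,
)
agent.print_response("What is 15% of 2,847, rounded to the nearest integer?")
```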
Accessing Reasoning Steps
Get reasoning steps programmatically:
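A sketch of reading the steps after a run. The exact attribute that carries the steps has varied across Agno versions; `extra_data.reasoning_steps` is an assumption to verify against your version:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(model=OpenAIChat(id="gpt-4o"), reasoning=True)
response = agent.run("Which is larger: 2**10 or 10**3?")

# Assumption: steps are attached to the run response's extra_data.
for step in response.extra_data.reasoning_steps:
    print(step.title, "->", step.next_action)
```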
Streaming Reasoning Steps
Stream reasoning in real time:
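For example (prompt is illustrative; requires an API key to run):

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(model=OpenAIChat(id="gpt-4o"), reasoning=True)
agent.print_response(
    "Outline a 3-step plan for load-testing a cache.",
    stream=True,
    show_full_reasoning=True,        # include the reasoning steps
    stream_intermediate_steps=True,  # emit each step as it is produced
)
```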
Reasoning Schema
The reasoning output follows this structure:
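The exact model lives inside Agno, but its shape can be sketched with plain dataclasses (field names follow Agno's ReasoningStep; treat this as an approximation, not the library's definition):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class NextAction(str, Enum):
    CONTINUE = "continue"          # keep reasoning
    VALIDATE = "validate"          # double-check the current result
    FINAL_ANSWER = "final_answer"  # stop and answer

@dataclass
class ReasoningStep:
    title: str                     # short name for the step
    reasoning: str                 # the chain-of-thought text
    action: Optional[str] = None   # what the agent did in this step
    result: Optional[str] = None   # outcome of the action
    next_action: NextAction = NextAction.CONTINUE
    confidence: float = 0.0        # self-assessed confidence, 0.0-1.0

step = ReasoningStep(
    title="Convert minutes to hours",
    reasoning="90 minutes is 1.5 hours, so speed = 120 / 1.5.",
    result="80 km/h",
    next_action=NextAction.VALIDATE,
    confidence=0.9,
)
print(step.next_action.value)  # → validate
```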
Best Practices
Set Appropriate Limits
Use reasoning_min_steps to ensure thorough analysis and reasoning_max_steps to prevent runaway reasoning
Validate Results
Include validation steps to catch errors in reasoning
Use Tools
Provide tools for calculation, search, and validation during reasoning
Custom Instructions
Customize the reasoning agent’s instructions for domain-specific reasoning
Comparison: Native vs Chain-of-Thought
| Feature | Native Reasoning | Chain-of-Thought |
|---|---|---|
| Model Support | Specific models only | Any model |
| Performance | Optimized by provider | Custom control |
| Cost | Provider pricing | Separate reasoning calls |
| Customization | Limited | Full control |
| Tool Usage | Varies by model | Fully supported |
Next Steps
Reasoning Overview
Learn about native reasoning models
Evaluations
Measure reasoning quality and accuracy
Learning
Combine reasoning with memory
Guardrails
Add safety checks to reasoning