Reasoning mode enables agents to work through problems step-by-step, showing their thinking process before arriving at a final answer. This is particularly useful for complex reasoning tasks, math problems, and multi-step planning.

Overview

When reasoning is enabled, the agent:
  1. Breaks down the problem into steps
  2. Works through each step explicitly
  3. Shows its reasoning process
  4. Arrives at a final answer

Enabling Reasoning

from agno import Agent

agent = Agent(
    model="gpt-4o",
    reasoning=True,
    reasoning_min_steps=1,
    reasoning_max_steps=10
)

response = agent.run("If a train travels 60 mph for 2.5 hours, how far does it go?")
print(response.reasoning_content)  # Shows step-by-step thinking
print(response.content)  # Final answer
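As a quick sanity check, the expected answer to the prompt above is plain arithmetic (standalone Python, no agent required):

```python
# Distance = speed * time, matching the train question above
speed_mph = 60
hours = 2.5
distance_miles = speed_mph * hours
print(distance_miles)  # 150.0
```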

Parameters

reasoning (bool, default: False)
Enable step-by-step reasoning mode.

reasoning_model (Model, default: None)
Separate model to use for the reasoning steps. If not provided, the main model is used.

reasoning_agent (Agent, default: None)
Custom agent to use for reasoning instead of a model.

reasoning_min_steps (int, default: 1)
Minimum number of reasoning steps to perform.

reasoning_max_steps (int, default: 10)
Maximum number of reasoning steps allowed.
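Putting the parameters together: a hedged sketch of a configuration that delegates the intermediate steps to a separate model. The model IDs here are illustrative placeholders, not a recommendation from this page.

```python
from agno import Agent

# Illustrative only: a separate (e.g. cheaper) model handles the
# intermediate reasoning steps, while the main model writes the answer.
agent = Agent(
    model="gpt-4o",                 # main model for the final answer
    reasoning=True,
    reasoning_model="gpt-4o-mini",  # assumed ID; any supported model works here
    reasoning_min_steps=2,
    reasoning_max_steps=15,
)
```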

Reasoning Content

The response includes two types of content:
  • reasoning_content: The step-by-step thinking process
  • content: The final answer
response = agent.run("Complex problem")

print("Reasoning:")
print(response.reasoning_content)

print("\nFinal Answer:")
print(response.content)

Example Usage

from agno import Agent

agent = Agent(
    model="gpt-4o",
    reasoning=True
)

response = agent.run(
    "A store has 150 apples. They sell 60% on Monday and 25% of the "
    "remainder on Tuesday. How many apples are left?"
)

print(response.reasoning_content)
# Step 1: Calculate Monday sales
# 60% of 150 = 0.6 × 150 = 90 apples
# Remaining: 150 - 90 = 60 apples
#
# Step 2: Calculate Tuesday sales
# 25% of 60 = 0.25 × 60 = 15 apples
# Remaining: 60 - 15 = 45 apples

print(response.content)
# There are 45 apples left.
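The trace above can be double-checked with plain integer arithmetic, independent of the agent:

```python
# Verify the apple problem from the reasoning trace above
apples = 150
sold_monday = apples * 60 // 100      # 90 apples sold on Monday
remaining = apples - sold_monday      # 60 apples left
sold_tuesday = remaining * 25 // 100  # 15 apples sold on Tuesday
left = remaining - sold_tuesday       # 45 apples left
print(left)  # 45
```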

Reasoning Agent

For more control, use a custom reasoning agent:
from agno import Agent

# Create specialized reasoning agent
reasoning_agent = Agent(
    model="gpt-4o",
    instructions=[
        "Break down problems step by step",
        "Show your work for each step",
        "Verify your logic before proceeding"
    ]
)

# Main agent uses reasoning agent
agent = Agent(
    model="gpt-4o",
    reasoning=True,
    reasoning_agent=reasoning_agent
)

Use Cases

Mathematics

agent = Agent(
    model="gpt-4o",
    reasoning=True,
    tools=[calculator]  # assumes a calculator tool is defined/imported elsewhere
)

response = agent.run(
    "Solve the quadratic equation: 2x^2 + 5x - 3 = 0"
)
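For reference, the quadratic in the prompt has exact closed-form roots via the quadratic formula, which the reasoning trace should reproduce (standalone Python):

```python
import math

# Roots of 2x^2 + 5x - 3 = 0 via the quadratic formula
a, b, c = 2, 5, -3
disc = b * b - 4 * a * c               # 25 + 24 = 49
r1 = (-b + math.sqrt(disc)) / (2 * a)  # (-5 + 7) / 4 = 0.5
r2 = (-b - math.sqrt(disc)) / (2 * a)  # (-5 - 7) / 4 = -3.0
print(r1, r2)  # 0.5 -3.0
```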

Code Planning

agent = Agent(
    model="gpt-4o",
    reasoning=True,
    instructions=["Plan before implementing"]
)

response = agent.run(
    "Design and implement a binary search tree with insert and delete operations"
)

Strategic Planning

agent = Agent(
    model="gpt-4o",
    reasoning=True,
    reasoning_max_steps=20
)

response = agent.run(
    "Create a go-to-market strategy for a new SaaS product targeting small businesses"
)

Problem Debugging

agent = Agent(
    model="gpt-4o",
    reasoning=True,
    tools=[run_code, read_logs]  # assumes these tools are defined/imported elsewhere
)

response = agent.run(
    "Debug why the API is returning 500 errors intermittently"
)

Best Practices

  1. Complex problems: Use reasoning for multi-step or complex problems
  2. Step limits: Set appropriate min/max steps for your use case
  3. Show reasoning: Display reasoning to users for transparency
  4. Combine with tools: Use tools alongside reasoning for best results
  5. Specialized models: Consider o1/o3 models for advanced reasoning
  6. Verification: The agent can verify its own reasoning at each step
  7. Cost awareness: Reasoning uses more tokens; monitor costs accordingly

Models with Native Reasoning

Some models have built-in reasoning capabilities:
  • OpenAI o1: Optimized for complex reasoning
  • OpenAI o3: Advanced reasoning model
  • Anthropic Claude: Extended thinking mode (Claude 3.7 Sonnet and later)
from agno.models.openai import OpenAIReasoning

agent = Agent(
    model=OpenAIReasoning(id="o1-preview"),
    # Reasoning is implicit with these models
)

Limitations

  • Increases token usage and latency
  • May not help for simple questions
  • Quality depends on model capabilities
  • Not all models support reasoning equally well
