The Agent class is the foundation for building AI agents in Agno. It handles interactions with language models, manages tools, maintains conversation history, and provides features like memory, knowledge integration, and reasoning.

Constructor

from agno import Agent

agent = Agent(
    model="gpt-4o",
    name="my_agent",
    instructions=["You are a helpful assistant"],
    tools=[...],
    db=db
)

Core Parameters

model
Model | str
default:"None"
The language model to use for this agent. Can be a Model instance or model identifier string.
name
str
default:"None"
A descriptive name for the agent.
id
str
default:"None"
Unique identifier for the agent. Auto-generated if not provided.
description
str
default:"None"
A description of the agent added to the system message.
instructions
str | List[str] | Callable
default:"None"
Instructions that guide the agent’s behavior. Can be a string, list of strings, or callable that returns instructions.
user_id
str
default:"None"
Default user ID for this agent.

Session & State

session_id
str
default:"None"
Default session ID (auto-generated if not provided).
session_state
Dict[str, Any]
default:"None"
Session state stored in the database to persist across runs.
add_session_state_to_context
bool
default:"False"
If True, adds the session_state to the agent’s context.
enable_agentic_state
bool
default:"False"
If True, gives the agent tools to update session_state dynamically.
overwrite_db_session_state
bool
default:"False"
If True, overwrites stored session_state with the provided session_state.
cache_session
bool
default:"False"
If True, caches the agent session in memory for faster access.
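
The session parameters above combine as follows; a minimal sketch, where the `session_state` keys are illustrative:

```python
from agno import Agent

# Seed persistent per-session state and let the agent update it.
agent = Agent(
    model="gpt-4o",
    session_state={"shopping_list": []},  # illustrative initial state
    add_session_state_to_context=True,    # expose the state to the model
    enable_agentic_state=True,            # give the agent tools to update it
)
```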

Database

db
BaseDb | AsyncBaseDb
default:"None"
Database to use for storing agent sessions, memory, and history.

Memory

memory_manager
MemoryManager
default:"None"
Memory manager for handling user memories.
enable_agentic_memory
bool
default:"False"
If True, gives the agent tools to manage memories of the user.
update_memory_on_run
bool
default:"False"
If True, the agent creates/updates user memories at the end of runs.
add_memories_to_context
bool
default:"None"
If True, adds a reference to user memories in the context.
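
A minimal sketch of the memory parameters above, assuming a `db` instance is already configured (memories are persisted to the database):

```python
from agno import Agent

# User memories require a database to persist across runs.
agent = Agent(
    model="gpt-4o",
    db=db,                         # assumes a configured BaseDb instance
    update_memory_on_run=True,     # create/update memories after each run
    add_memories_to_context=True,  # surface stored memories to the model
)
```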

History

add_history_to_context
bool
default:"False"
Adds messages from chat history to the context sent to the model.
num_history_runs
int
default:"3"
Number of historical runs to include in the messages.
num_history_messages
int
default:"None"
Number of historical messages to include in the context.
max_tool_calls_from_history
int
default:"None"
Maximum number of tool calls to include from history (None = no limit).
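
A sketch combining the history parameters above; a `db` is assumed so past runs can be loaded:

```python
from agno import Agent

agent = Agent(
    model="gpt-4o",
    db=db,                        # history is read from the session store
    add_history_to_context=True,  # include prior messages in the context
    num_history_runs=5,           # last five runs instead of the default 3
)
```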

Knowledge & RAG

knowledge
KnowledgeProtocol | Callable
default:"None"
Knowledge base for the agent to retrieve information from.
knowledge_filters
Dict[str, Any] | List[FilterExpr]
default:"None"
Filters to apply when searching the knowledge base.
add_knowledge_to_context
bool
default:"False"
If True, adds knowledge references to the user prompt.
search_knowledge
bool
default:"True"
Adds a tool that allows the agent to search the knowledge base.
knowledge_retriever
Callable
default:"None"
Custom retrieval function. If provided, used instead of default search.
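
A sketch of agentic RAG using the parameters above; `knowledge_base` stands in for any KnowledgeProtocol implementation, and the filter key is illustrative:

```python
from agno import Agent

agent = Agent(
    model="gpt-4o",
    knowledge=knowledge_base,              # any KnowledgeProtocol implementation
    search_knowledge=True,                 # the model decides when to search
    knowledge_filters={"source": "docs"},  # illustrative filter
)
```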

Tools

tools
List[Toolkit | Callable | Function | Dict] | Callable
default:"None"
Tools available to the agent. Can be a list or callable factory that returns tools.
tool_call_limit
int
default:"None"
Maximum number of tool calls allowed.
tool_choice
str | Dict[str, Any]
default:"None"
Controls which tool is called: “none”, “auto”, or a specific tool specification.
tool_hooks
List[Callable]
default:"None"
Functions called around tool calls as middleware.
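
Because `tools` accepts plain callables, a typed function with a docstring is enough; a minimal sketch:

```python
from agno import Agent

def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"It is always sunny in {city}."

agent = Agent(
    model="gpt-4o",
    tools=[get_weather],
    tool_call_limit=3,  # stop after three tool calls
)
```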

Hooks

pre_hooks
List[Callable | BaseGuardrail | BaseEval]
default:"None"
Functions called after session load, before processing starts.
post_hooks
List[Callable | BaseGuardrail | BaseEval]
default:"None"
Functions called after output generation, before response return.
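
A sketch of a pre-hook used as logging middleware; the hook signature shown is an assumption, not a documented interface:

```python
from agno import Agent

def log_input(*args, **kwargs):
    # Hypothetical hook: the exact signature and payload are assumptions.
    print("pre-hook fired:", args, kwargs)

agent = Agent(
    model="gpt-4o",
    pre_hooks=[log_input],  # runs after session load, before processing
)
```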

Reasoning

reasoning
bool
default:"False"
Enables step-by-step reasoning mode.
reasoning_model
Model
default:"None"
Separate model to use for reasoning steps.
reasoning_min_steps
int
default:"1"
Minimum number of reasoning steps.
reasoning_max_steps
int
default:"10"
Maximum number of reasoning steps.
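
A sketch enabling reasoning with the parameters above:

```python
from agno import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,         # reason step-by-step before answering
    reasoning_max_steps=5,  # cap the reasoning loop
)
```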

Response Configuration

output_schema
Type[BaseModel] | Dict[str, Any]
default:"None"
Pydantic model or JSON schema for structured output.
structured_outputs
bool
default:"None"
Use model-enforced structured outputs if supported.
markdown
bool
default:"False"
If True, instructs the agent to format output using markdown.
stream
bool
default:"None"
Stream the response from the agent.
stream_events
bool
default:"None"
Stream intermediate steps from the agent.
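
A sketch of structured output with a Pydantic model; the `CityInfo` schema is illustrative, and the assumption is that the parsed object comes back on `response.content` as in the `run()` example:

```python
from pydantic import BaseModel
from agno import Agent

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent(
    model="gpt-4o",
    output_schema=CityInfo,   # parse the response into CityInfo
    structured_outputs=True,  # use model-enforced structured output
)
```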

Methods

run()

Run the agent with a message.
response = agent.run("What's the weather like?")
print(response.content)
Parameters:
  • input (str | List | Dict | Message | BaseModel | List[Message]): The input message(s)
  • stream (bool): Whether to stream the response
  • session_id (str): Optional session ID
  • user_id (str): Optional user ID
  • session_state (Dict[str, Any]): Optional session state
  • images (Sequence[Image]): Optional images to include
  • audio (Sequence[Audio]): Optional audio files
  • videos (Sequence[Video]): Optional videos
  • files (Sequence[File]): Optional files
Returns: RunOutput or Iterator of RunOutputEvent if streaming
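
When stream=True, run() yields RunOutputEvent items instead of a single RunOutput; a sketch:

```python
# Stream the run and handle events as they arrive.
for event in agent.run("Summarize the report", stream=True):
    print(event)  # each item is a RunOutputEvent
```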

arun()

Async version of run().
response = await agent.arun("What's the weather like?")

print_response()

Run the agent and print the response to the console.
agent.print_response("Tell me a joke")

cli_app()

Start an interactive CLI chat with the agent.
agent.cli_app()

get_session()

Retrieve a session from the database.
session = agent.get_session(session_id="abc123")

save()

Save the agent configuration to the database.
version = agent.save(db=db, stage="published")

load()

Load an agent from the database.
agent = Agent.load(id="my_agent", db=db)

Example Usage

from agno import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    name="Assistant",
    instructions="You are a helpful assistant.",
)

response = agent.run("What is 2+2?")
print(response.content)
