This guide will take you from zero to a running agentic system with memory, tools, and a production API in under 5 minutes.

What You’ll Build

A stateful, tool-using agent that:
  • Connects to the Agno documentation via MCP
  • Remembers conversation context across sessions
  • Streams responses in real time
  • Runs as a production API
  • Integrates with the AgentOS UI
All in ~20 lines of code.

Prerequisites

Before you begin, make sure you have:
  • Python 3.12 or later
  • uv installed (the run command below uses uvx)
  • An Anthropic API key

Step 1: Create Your Agent

Create a new file called agno_assist.py:
agno_assist.py
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.os import AgentOS
from agno.tools.mcp import MCPTools

agno_assist = Agent(
    name="Agno Assist",
    model=Claude(id="claude-sonnet-4-6"),
    db=SqliteDb(db_file="agno.db"),
    tools=[MCPTools(url="https://docs.agno.com/mcp")],
    add_history_to_context=True,
    num_history_runs=3,
    markdown=True,
)

agent_os = AgentOS(agents=[agno_assist], tracing=True)
app = agent_os.get_app()
This example uses SQLite for simplicity. For production, use PostgreSQL. See the Installation guide for database setup.

Step 2: Set Your API Key

Export your Anthropic API key:
export ANTHROPIC_API_KEY="your-api-key-here"
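If you want to confirm from Python that the key is actually visible to the process (a common source of "missing API key" errors), a quick standard-library check is enough. This helper is illustrative and not part of Agno:

```python
import os

def anthropic_key_present() -> bool:
    """Return True when ANTHROPIC_API_KEY is set and non-empty."""
    return bool(os.environ.get("ANTHROPIC_API_KEY"))

if __name__ == "__main__":
    print("API key set:", anthropic_key_present())
```

Run this in the same terminal session where you exported the key; a new terminal will not inherit the variable.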

Step 3: Run Your Agent

Start the AgentOS server using uvx:
uvx --python 3.12 \
  --with "agno[os]" \
  --with anthropic \
  --with mcp \
  fastapi dev agno_assist.py
Your agent is now running at http://localhost:8000.
  • uvx runs the command with the specified dependencies without requiring a virtual environment
  • --with "agno[os]" installs Agno with AgentOS dependencies (FastAPI, uvicorn, SQLAlchemy)
  • --with anthropic installs the Anthropic SDK
  • --with mcp installs MCP (Model Context Protocol) support
  • fastapi dev starts a development server with auto-reload

Step 4: Connect to AgentOS UI

Now let’s connect your agent to the AgentOS web interface:
1. Open AgentOS: Visit os.agno.com and sign in.
2. Add your endpoint: Click “Add new OS” in the top navigation.
3. Configure the connection:
  • Select “Local” to connect to a local AgentOS
  • Enter your endpoint URL: http://localhost:8000
  • Name it “Local AgentOS”
  • Click “Connect”
4. Start chatting: Open Chat, select your agent, and ask:
What is Agno?
The agent retrieves context from the Agno MCP server and responds with grounded answers.

What You Just Built

In ~20 lines, you created:

  • Stateful Agent - Remembers the last 3 conversation turns across sessions
  • Tool Integration - Can query the Agno documentation via MCP
  • Production API - REST endpoints at http://localhost:8000
  • Real-time Streaming - Streams reasoning and responses as they’re generated
  • Per-user Isolation - Each user gets their own session and memory
  • Native Tracing - Full observability of every agent interaction

Try It Programmatically

You can also call your agent directly in Python:
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.anthropic import Claude
from agno.tools.mcp import MCPTools

agent = Agent(
    name="Agno Assist",
    model=Claude(id="claude-sonnet-4-6"),
    db=SqliteDb(db_file="agno.db"),
    tools=[MCPTools(url="https://docs.agno.com/mcp")],
    add_history_to_context=True,
    num_history_runs=3,
)

# Print response to console
agent.print_response("What is Agno?", stream=True)

# Or get the response as a string
response = agent.run("What are the core concepts in Agno?")
print(response.content)

Understanding the Code

Let’s break down each component:

Agent Configuration

agno_assist = Agent(
    name="Agno Assist",
    model=Claude(id="claude-sonnet-4-6"),
    db=SqliteDb(db_file="agno.db"),
    tools=[MCPTools(url="https://docs.agno.com/mcp")],
    add_history_to_context=True,
    num_history_runs=3,
    markdown=True,
)
  • name - Identifies the agent in logs and UI
  • model - The LLM to use (Claude Sonnet 4.6 in this case)
  • db - Where to store conversation history and state
  • tools - Functions the agent can call (MCP integration here)
  • add_history_to_context=True - Include previous turns in context
  • num_history_runs=3 - Remember last 3 conversation turns
  • markdown=True - Format responses as markdown
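The history window can be pictured with a toy sketch. This is only an illustration of what num_history_runs=3 means (a rolling window of recent exchanges added back to context), not Agno's actual implementation:

```python
from collections import deque

class HistoryWindow:
    """Toy model of a rolling conversation window: only the last
    `max_runs` user/assistant exchanges are fed back into context."""

    def __init__(self, max_runs: int = 3):
        self.runs = deque(maxlen=max_runs)  # old runs fall off the front

    def add_run(self, user_msg: str, assistant_msg: str) -> None:
        self.runs.append((user_msg, assistant_msg))

    def context(self) -> list[tuple[str, str]]:
        return list(self.runs)

window = HistoryWindow(max_runs=3)
for i in range(5):
    window.add_run(f"question {i}", f"answer {i}")

# Only the last 3 exchanges (runs 2, 3, 4) remain in context.
print(window.context())
```

With add_history_to_context=True, something like this windowed history is prepended to each new request, which is what lets the agent answer follow-up questions.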

AgentOS Setup

agent_os = AgentOS(agents=[agno_assist], tracing=True)
app = agent_os.get_app()
  • agents - List of agents to expose via API
  • tracing=True - Enable OpenTelemetry tracing
  • get_app() - Returns a FastAPI application

Next Steps

Now that you have a working agent, explore more advanced features:

  • Core Concepts - Learn about Agents, Teams, Workflows, and more
  • Add Memory - Give your agent long-term memory that persists
  • Add Knowledge - Connect your agent to vector databases and RAG
  • Production Setup - Deploy to production with PostgreSQL and monitoring

Common Issues

If port 8000 is already taken, specify a different port:
uvx --python 3.12 \
  --with "agno[os]" \
  --with anthropic \
  --with mcp \
  fastapi dev agno_assist.py --port 8080
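To check whether a port is free before picking one, a small standard-library helper works on any platform. This is illustrative and not part of Agno:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; success means nothing else is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print("Port 8000 free:", port_is_free(8000))
```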
If you see authentication errors, make sure you’ve exported your API key in the same terminal session:
export ANTHROPIC_API_KEY="your-api-key"
echo $ANTHROPIC_API_KEY  # Verify it's set
Alternatively, create a .env file:
.env
ANTHROPIC_API_KEY=your-api-key-here
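If your runner does not load the .env file automatically, you can load it yourself before starting the server. A minimal standard-library loader, assuming simple KEY=value lines with no quoting or interpolation (the python-dotenv package is the usual full-featured alternative):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Read simple KEY=value lines into os.environ (existing vars win)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```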
If the AgentOS UI can’t connect to your endpoint, make sure:
  1. Your server is running at http://localhost:8000
  2. You’re using the correct URL in the AgentOS connection dialog
  3. There are no firewall or network restrictions

You Can Use This Exact Same Architecture for Production

This isn’t just a demo. This is the same architecture used in production multi-agent systems:
  • Swap SQLite for PostgreSQL
  • Add more agents and teams
  • Deploy with Docker or Kubernetes
  • Add authentication and authorization
  • Scale horizontally
The cookbook has examples of all these patterns.
