AgentOS is a production-ready FastAPI server that exposes your agents, teams, and workflows as REST APIs. It provides built-in endpoints for running AI components, managing sessions, and handling streaming responses.

Quick Start

from agno import Agent
from agno.api.os import AgentOS

app = AgentOS(
    agents=[Agent(name="assistant", model="gpt-4o")]
)

app.serve()

Constructor

Parameters:
  • agents (List[Agent], default: []): List of agents to expose via the API.
  • teams (List[Team], default: []): List of teams to expose via the API.
  • workflows (List[Workflow], default: []): List of workflows to expose via the API.
  • title (str, default: "AgentOS"): API title shown in documentation.
  • description (str, default: None): API description shown in documentation.
  • version (str, default: "1.0.0"): API version.
  • host (str, default: "0.0.0.0"): Host to bind the server to.
  • port (int, default: 7777): Port to run the server on.
  • reload (bool, default: False): Enable auto-reload for development.
  • log_level (str, default: "info"): Logging level: "debug", "info", "warning", or "error".

Methods

serve()

Start the API server.
app.serve()
Parameters:
  • host (str): Override default host
  • port (int): Override default port
  • reload (bool): Enable auto-reload

add_agent()

Add an agent to the API after initialization.
app.add_agent(Agent(name="helper"))

add_team()

Add a team to the API.
app.add_team(team)

add_workflow()

Add a workflow to the API.
app.add_workflow(workflow)

API Endpoints

When you run AgentOS, the following endpoints are automatically created:

Agent Endpoints

POST /agents/{agent_id}/run
Run an agent.
curl -X POST http://localhost:7777/agents/assistant/run \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!"}'

POST /agents/{agent_id}/stream
Run an agent with streaming.

GET /agents/{agent_id}/sessions
List sessions for an agent.

GET /agents/{agent_id}/sessions/{session_id}
Get a specific session.
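The run endpoint accepts a JSON body, as the curl example above shows. A minimal Python sketch of building that same request with the standard library, assuming the `{"message": ...}` body shape from the curl example (`build_run_request` is a hypothetical helper, not part of AgentOS):

```python
import json
from urllib.request import Request

def build_run_request(base_url: str, agent_id: str, message: str) -> Request:
    # Hypothetical helper: constructs the POST request for /agents/{agent_id}/run.
    url = f"{base_url}/agents/{agent_id}/run"
    body = json.dumps({"message": message}).encode("utf-8")
    return Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("http://localhost:7777", "assistant", "Hello!")
# urllib.request.urlopen(req) would send it once the server is running
```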

Team Endpoints

POST /teams/{team_id}/run
Run a team.

POST /teams/{team_id}/stream
Run a team with streaming.

Workflow Endpoints

POST /workflows/{workflow_id}/run
Run a workflow.

POST /workflows/{workflow_id}/stream
Run a workflow with streaming.
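The stream endpoints return the response incrementally rather than as one JSON document. The exact wire format is not specified on this page; as a sketch, if the stream is newline-delimited JSON with a `delta` field per chunk (both assumptions), a client could reassemble the text like this:

```python
import json

def parse_ndjson_chunks(raw: str) -> list:
    # Sketch: split a newline-delimited JSON stream body into events.
    # The NDJSON framing and the "delta" field are assumptions; check
    # the actual response format of your AgentOS version.
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if line:
            events.append(json.loads(line))
    return events

sample = '{"delta": "Hel"}\n{"delta": "lo!"}\n'
chunks = parse_ndjson_chunks(sample)
text = "".join(c["delta"] for c in chunks)
```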

Example Usage

from agno import Agent
from agno.api.os import AgentOS
from agno.models.openai import OpenAIChat

assistant = Agent(
    name="assistant",
    model=OpenAIChat(id="gpt-4o"),
    instructions="You are a helpful assistant."
)

app = AgentOS(
    agents=[assistant],
    title="My AI API",
    version="1.0.0"
)

app.serve()

Deployment

Docker

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "app.py"]
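With that Dockerfile in place, the image can be built and run as follows. The image tag is arbitrary, the port mapping matches the default AgentOS port, and passing OPENAI_API_KEY is an assumption based on the OpenAIChat model used in the example above:

```shell
# Build the image (tag name is arbitrary)
docker build -t agentos-api .

# Run it, mapping the default AgentOS port and forwarding the API key
docker run -p 7777:7777 -e OPENAI_API_KEY="$OPENAI_API_KEY" agentos-api
```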

Production Settings

from agno.api.os import AgentOS

app = AgentOS(
    agents=[...],
    host="0.0.0.0",
    port=8000,
    reload=False,
    log_level="info"
)

# Run with Gunicorn
# gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
