BasicAgent is a thin, opinionated wrapper around the core Agent class. It auto-generates a system prompt from a name and description, converts plain Python functions into tools, and exposes a minimal API surface so you can focus on your application logic rather than framework plumbing.
## When to Use BasicAgent
**Use BasicAgent when:**
- You want a working agent in under 5 minutes
- Your tools are plain Python functions
- You don’t need custom approval workflows
- You’re prototyping or learning the framework
**Upgrade to `Agent` when:**
- You need full control over the system prompt
- You need custom approval callbacks per tool
- You’re building a multi-tenant production service
- You need MCP server integration
## Constructor

- **Name**: Human-readable name for the agent. Used in the auto-generated system prompt; the LLM will present itself with this name.
- **Description**: One-sentence description of what this agent does. Injected into the system prompt to define the agent's purpose and scope.
- **Provider**: LLM provider shorthand. Accepted values: `"ollama"`, `"openai"`, `"groq"`, `"gemini"`, `"azure"`. This is the most important parameter; a wrong value causes initialization failure.
- **Model**: Provider-specific model name. Omit to use the provider's default. Always specify in production.
- **API key**: Required for cloud providers (`openai`, `groq`, `gemini`, `azure`). Not needed for `ollama`.
- **Tools**: List of tools. Accepts Python callables or `BaseTool` instances. Callables are auto-converted to tool schemas using type hints and docstrings. Omit for a plain chat agent with no tools.
- **System prompt**: Custom system prompt. When provided, overrides the auto-generated prompt entirely. Use when you need a specific persona or strict formatting rules.
- **Memory**: Enable session memory. When `True`, the underlying `Agent` is initialized with `memory=True`, enabling persistent fact storage via `AgentrySimpleMem`.
- **Debug logging**: Print verbose execution logs. Use in development only.
- **Max tool iterations**: Maximum tool-call iterations per `chat()` call. Lower than the base `Agent` default (40) to keep BasicAgent responses snappy.
- **Skills**: Skill names or `Skill` objects to load at startup.
- **Workspace root**: Root directory for filesystem-bound tools.
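Putting the parameters above together, a typical construction looks like the following sketch. The import path and keyword argument names are inferred from the descriptions above and may differ in your install.

```python
# NOTE: import path and keyword names are assumptions; check your install.
from agentry import BasicAgent

def get_time(timezone: str = "UTC") -> str:
    """Return the current time in the given timezone."""
    ...

agent = BasicAgent(
    name="TimeBot",
    description="Answers questions about the current time.",
    provider="ollama",   # local provider, so no API key is needed
    tools=[get_time],    # plain callables are auto-converted to tool schemas
    memory=True,         # persistent fact storage via AgentrySimpleMem
)
```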
## chat()

- **Message**: User message. Pass `str` for plain text or a list of content blocks for multimodal input.
- **Session ID**: Conversation thread identifier. The same ID preserves context across turns. Use per-user IDs in multi-user apps.
- **Stream**: Enable token streaming. Requires an `on_token` callback registered via `set_callbacks()`.
- **Summary**: Append an LLM-generated execution summary to the response.
- **Extra keyword arguments**: Provider-specific overrides forwarded to the underlying `Agent.chat()` (e.g., `temperature`, `max_tokens`).
- **Returns**: `str`, the final assistant message.
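Since `chat_sync()` is documented as the synchronous wrapper, `chat()` is assumed to be a coroutine. A hedged usage sketch, reusing a previously constructed `agent`:

```python
import asyncio

async def main():
    # The same session_id preserves context across turns.
    reply = await agent.chat(
        "What did I ask you last time?",
        session_id="user-42",
    )
    print(reply)  # str: the final assistant message

asyncio.run(main())
```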
## chat_sync()
Synchronous wrapper for environments without an active event loop.
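For example, in a plain script or REPL where no event loop is running (a sketch, assuming the same signature as `chat()`):

```python
reply = agent.chat_sync("Summarize today's tasks", session_id="user-42")
print(reply)
```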
## The `@tool` Decorator

The `@tool` decorator provides a clean way to mark functions as tools and set their description:
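The framework's own decorator is not shown here, so the sketch below is an illustrative re-implementation of the idea: it marks the function and attaches a description that schema generation can later read. The attribute names (`is_tool`, `tool_description`) are assumptions, not the framework's actual internals.

```python
import functools

def tool(description: str):
    """Illustrative @tool sketch: tag a function with tool metadata."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.is_tool = True              # marker checked at registration
        wrapper.tool_description = description
        return wrapper
    return decorator

@tool(description="Add two integers.")
def add(a: int, b: int) -> int:
    return a + b
```

The decorated function stays directly callable; the metadata only rides along for the agent to discover.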
## `create_agent()` Factory Function
A one-liner shorthand for creating a BasicAgent:
`create_agent()` is identical to calling `BasicAgent(...)` directly; use whichever reads more naturally in your codebase.
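A sketch of the one-liner form, assuming `create_agent()` accepts the same keyword arguments as the `BasicAgent` constructor:

```python
agent = create_agent(
    name="Helper",
    description="General-purpose assistant.",
    provider="openai",
    api_key="sk-...",   # required for cloud providers
)
```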
## Additional Methods
### `add_tool(tool)` / `add_tools(tools)`
Add tools after construction without recreating the agent:
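For instance (the tool names here are illustrative):

```python
agent.add_tool(get_time)                  # a single callable or BaseTool
agent.add_tools([search_docs, get_news])  # several at once
```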
### `set_callbacks(...)`
Register streaming and lifecycle callbacks:
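A sketch pairing the `on_token` callback (named in the `chat()` docs above) with streaming; any other callback names would be framework-specific assumptions:

```python
def on_token(token: str) -> None:
    print(token, end="", flush=True)  # emit tokens as they arrive

agent.set_callbacks(on_token=on_token)

# inside an async context:
reply = await agent.chat("Tell me a story", stream=True)
```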
### `clear_history(session_id)`
Erase conversation history for a session:
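For example:

```python
agent.clear_history("user-42")  # wipes the stored turns for this session
```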
### `get_session(session_id)`
Access the raw session object:
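For example, to inspect a session directly (the session object's attributes are framework-specific):

```python
session = agent.get_session("user-42")
```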
### `load_skill()` / `load_skills()`
## Properties
| Property | Type | Description |
|---|---|---|
| `agent.tools` | `list[str]` | Names of all registered tools |
| `agent.system_prompt` | `str` | Current system prompt |
| `agent.loaded_skills` | `list[str]` | Names of all loaded skills |
| `agent.telemetry` | `dict` | Token/latency summary (requires `telemetry=True`) |
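Reading the properties above (the printed shapes are illustrative):

```python
print(agent.tools)          # e.g. ["get_time"]
print(agent.system_prompt)  # auto-generated unless system_prompt was passed
print(agent.telemetry)      # token/latency summary; requires telemetry=True
```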
## Examples
- Minimal Q&A
- Multi-turn session
- Streaming
- Agent with tools
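As a taste of the first item, a minimal Q&A sketch (provider choice and wording are illustrative):

```python
agent = BasicAgent(
    name="QA",
    description="Answers general knowledge questions.",
    provider="ollama",
)
print(agent.chat_sync("What is the capital of France?"))
```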
## Tool Schema Inference

BasicAgent uses `register_tool_from_function()` (inherited from `Agent`) to build tool schemas from function signatures. Type hints map to JSON Schema types:
| Python type | JSON Schema type |
|---|---|
| `str` | `"string"` |
| `int` | `"integer"` |
| `float` | `"number"` |
| `bool` | `"boolean"` |
| `list` | `"array"` |
| `dict` | `"object"` |

Parameters without default values are marked required. Parameters with defaults are optional.
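The mapping can be illustrated with a small standalone sketch. This re-implements the inference described above using `inspect`; it is not the framework's actual `register_tool_from_function()`.

```python
import inspect

# Type-hint -> JSON Schema mapping from the table above.
TYPE_MAP = {str: "string", int: "integer", float: "number",
            bool: "boolean", list: "array", dict: "object"}

def infer_schema(fn):
    """Build a JSON-Schema-like parameters dict from a function signature."""
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        # Fall back to "string" for unannotated or unmapped parameters.
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "metric") -> dict:
    """Return current weather for a city."""
    ...

schema = infer_schema(get_weather)
# city has no default, so it is required; units is optional
```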