BasicAgent is a thin, opinionated wrapper around the core Agent class. It auto-generates a system prompt from a name and description, converts plain Python functions into tools, and exposes a minimal API surface so you can focus on your application logic rather than framework plumbing.
from logicore.agents.agent_basic import BasicAgent

When to Use BasicAgent

Use BasicAgent when...

  • You want a working agent in under 5 minutes
  • Your tools are plain Python functions
  • You don’t need custom approval workflows
  • You’re prototyping or learning the framework

Upgrade to Agent when...

  • You need full control over the system prompt
  • You need custom approval callbacks per tool
  • You’re building a multi-tenant production service
  • You need MCP server integration

Constructor

BasicAgent(
    name: str = "Assistant",
    description: str = "A helpful AI assistant",
    provider: str = "ollama",
    model: str = None,
    api_key: str = None,
    tools: list = None,
    system_prompt: str = None,
    memory_enabled: bool = True,
    debug: bool = False,
    telemetry: bool = False,
    max_iterations: int = 20,
    skills: list = None,
    workspace_root: str = None,
    **kwargs,
)
name
str
default:"Assistant"
Human-readable name for the agent. Used in the auto-generated system prompt — the LLM will present itself with this name.
description
str
default:"A helpful AI assistant"
One-sentence description of what this agent does. Injected into the system prompt to define the agent’s purpose and scope.
provider
str
default:"ollama"
LLM provider shorthand. Accepted values: "ollama", "openai", "groq", "gemini", "azure". Set this first — an unrecognized value causes initialization to fail.
model
str
default:None
Provider-specific model name. Omit to use the provider’s default. Always specify an explicit model in production.
api_key
str
default:None
API key for the provider. Required for cloud providers (openai, groq, gemini, azure). Not needed for ollama.
tools
list
default:None
List of tools. Accepts Python callables or BaseTool instances. Callables are auto-converted to tool schemas using type hints and docstrings. Omit for a plain chat agent with no tools.
system_prompt
str
default:None
Custom system prompt. When provided, overrides the auto-generated prompt entirely. Use when you need a specific persona or strict formatting rules.
memory_enabled
bool
default:True
Enable session memory. When True, the underlying Agent is initialized with memory=True, enabling persistent fact storage via AgentrySimpleMem.
debug
bool
default:False
Print verbose execution logs. Use in development only.
telemetry
bool
default:False
Collect token-usage and latency metrics, exposed via the agent.telemetry property.
max_iterations
int
default:20
Maximum tool-call iterations per chat() call. Lower than the base Agent default (40) to keep BasicAgent responses snappy.
skills
list
default:None
Skill names or Skill objects to load at startup.
workspace_root
str
default:None
Root directory for filesystem-bound tools.

chat()

await agent.chat(
    message: str | list,
    session_id: str = "default",
    stream: bool = False,
    generate_walkthrough: bool = False,
    **kwargs,
) -> str
message
str or list
required
User message. Pass str for plain text or a list of content blocks for multimodal input.
session_id
str
default:"default"
Conversation thread identifier. Same ID preserves context across turns. Use per-user IDs in multi-user apps.
stream
bool
default:False
Enable token streaming. Requires an on_token callback registered via set_callbacks().
generate_walkthrough
bool
default:False
Append an LLM-generated execution summary to the response.
**kwargs
Any
Provider-specific overrides forwarded to the underlying Agent.chat() (e.g., temperature, max_tokens).
Returns: str — the final assistant message.

chat_sync()

Synchronous wrapper for environments without an active event loop.
response: str = agent.chat_sync(
    message: str,
    session_id: str = "default",
    generate_walkthrough: bool = False,
) -> str
# Use in scripts and notebooks without asyncio.run()
response = agent.chat_sync("What is the boiling point of water?")
print(response)
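Under the hood, a wrapper like this typically just drives the coroutine to completion with asyncio.run(). The sketch below illustrates the pattern with a stand-in class — it is not logicore's actual implementation:

```python
import asyncio

# Minimal illustration of a chat_sync-style wrapper; EchoAgent is a
# stand-in, not logicore's BasicAgent.
class EchoAgent:
    async def chat(self, message: str) -> str:
        return f"echo: {message}"

    def chat_sync(self, message: str) -> str:
        # asyncio.run() raises if an event loop is already running,
        # which is why sync wrappers are meant for scripts, not async code.
        return asyncio.run(self.chat(message))

agent = EchoAgent()
print(agent.chat_sync("hello"))  # echo: hello
```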

The @tool Decorator

The tool decorator provides a clean way to mark functions as tools and set their description:
from logicore.agents.agent_basic import tool, BasicAgent

@tool("Calculate a math expression safely")
def calculator(expression: str) -> str:
    # Evaluate arithmetic by walking the AST — safer than eval(), and
    # unlike ast.literal_eval() it supports operators such as * and /.
    import ast, operator
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv,
           ast.USub: operator.neg}
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.operand))
        raise ValueError("Unsupported expression")
    return str(_eval(ast.parse(expression, mode="eval").body))

@tool("Convert temperature between Celsius and Fahrenheit")
def convert_temp(value: float, from_unit: str, to_unit: str) -> str:
    if from_unit == "C" and to_unit == "F":
        return f"{value * 9/5 + 32:.1f}°F"
    elif from_unit == "F" and to_unit == "C":
        return f"{(value - 32) * 5/9:.1f}°C"
    return "Unsupported conversion"

agent = BasicAgent(
    name="MathBot",
    description="A math and unit conversion assistant.",
    tools=[calculator, convert_temp],
    provider="ollama",
)
The string passed to @tool() becomes the function’s __doc__, which is used as the tool description in the schema. Write it as a clear imperative sentence — the LLM reads this to decide when to call the tool.
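A decorator with this docstring-setting behavior can be written in a few lines. The sketch below shows one plausible shape; the marker attribute is hypothetical and not logicore's actual internals:

```python
# Hypothetical reimplementation of a @tool-style decorator, for illustration.
def tool(description: str):
    def wrap(fn):
        fn.__doc__ = description  # becomes the tool description in the schema
        fn._is_tool = True        # hypothetical marker attribute
        return fn
    return wrap

@tool("Add two integers")
def add(a: int, b: int) -> int:
    return a + b

print(add.__doc__)  # Add two integers
```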

create_agent() Factory Function

A one-liner shorthand for creating a BasicAgent:
from logicore.agents.agent_basic import create_agent

agent = create_agent(
    name: str = "Assistant",
    description: str = "A helpful AI assistant",
    tools: list = None,
    provider: str = "ollama",
    model: str = None,
    api_key: str = None,
    **kwargs,
) -> BasicAgent
agent = create_agent(
    name="WeatherBot",
    description="Answers weather questions for any city.",
    tools=[get_current_weather, get_forecast],
    provider="openai",
    model="gpt-4o-mini",
    api_key="sk-...",
)
response = await agent.chat("What is the weather in Tokyo?")
create_agent() is identical to calling BasicAgent(...) directly — use whichever reads more naturally in your codebase.

Additional Methods

add_tool(tool) / add_tools(tools)

Add tools after construction without recreating the agent:
def lookup_order(order_id: str) -> dict:
    """Look up an order by ID."""
    return {"order_id": order_id, "status": "shipped"}

agent.add_tool(lookup_order)
agent.add_tools([cancel_order, refund_order])

set_callbacks(...)

Register streaming and lifecycle callbacks:
agent.set_callbacks(
    on_token=lambda token: print(token, end="", flush=True),
    on_tool_start=lambda sid, name, args: print(f"Calling {name}..."),
    on_tool_end=lambda sid, name, result: print(f"{name} done."),
    on_final_message=lambda sid, content: save_response(sid, content),
)
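The on_token callback is invoked once per streamed chunk. The toy producer below shows the shape of that contract — it is not logicore's streaming implementation:

```python
# Toy token producer illustrating the on_token callback contract.
def fake_stream(text: str, on_token) -> None:
    for token in text.split(" "):
        on_token(token + " ")

chunks: list[str] = []
fake_stream("streamed response text", chunks.append)
print("".join(chunks).strip())  # streamed response text
```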

clear_history(session_id)

Erase conversation history for a session:
agent.clear_history("user-42")

get_session(session_id)

Access the raw session object:
session = agent.get_session("user-42")
print(f"{len(session.messages)} messages in session")

load_skill() / load_skills()

agent.load_skills(["web_research"])
print(agent.loaded_skills)  # ["web_research"]

Properties

Property               Type        Description
agent.tools            list[str]   Names of all registered tools
agent.system_prompt    str         Current system prompt
agent.loaded_skills    list[str]   Names of all loaded skills
agent.telemetry        dict        Token / latency summary (requires telemetry=True)

Examples

from logicore.agents.agent_basic import BasicAgent
import asyncio

async def main():
    agent = BasicAgent(provider="ollama", model="qwen2:7b")
    response = await agent.chat("What is machine learning?")
    print(response)

asyncio.run(main())
This is the minimum viable setup. No tools, no config — just a provider and a question.

Tool Schema Inference

BasicAgent uses register_tool_from_function() (inherited from Agent) to build tool schemas from function signatures. Type hints map to JSON Schema types:
Python type   JSON Schema type
str           "string"
int           "integer"
float         "number"
bool          "boolean"
list          "array"
dict          "object"
Parameters without a default value are marked required. Parameters with defaults are optional.
def search_products(
    query: str,          # required — no default
    limit: int = 10,     # optional — has default
    category: str = "",  # optional — has default
) -> list:
    """Search the product catalog."""
    ...
Always annotate your tool function parameters with type hints. Without them, all parameters default to "string" type in the schema, which may cause incorrect tool calls.
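To make the mapping concrete, here is a hedged sketch of how signature-based inference can work. The function and output dict shape are illustrative only — this is not logicore's actual register_tool_from_function():

```python
import inspect

# Illustrative type-hint-to-JSON-Schema mapping; not logicore's internals.
TYPE_MAP = {str: "string", int: "integer", float: "number",
            bool: "boolean", list: "array", dict: "object"}

def infer_schema(fn):
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        # Unannotated parameters fall back to "string", as noted above
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => required
    return {"name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {"type": "object",
                           "properties": props,
                           "required": required}}

def search_products(query: str, limit: int = 10, category: str = "") -> list:
    """Search the product catalog."""

schema = infer_schema(search_products)
print(schema["parameters"]["required"])  # ['query']
```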
