Agent is the core class in the Logicore framework. Every other agent type (SmartAgent, CopilotAgent, MCPAgent) inherits from it. Use Agent directly when you want complete control: bring your own Python functions as tools, define approval policies, manage multiple conversation sessions, and stream tokens to your UI.
Constructor
Provider & model parameters
- **Provider.** The LLM provider. Pass a string shorthand ("ollama", "openai", "gemini", "groq", "azure") or a pre-constructed LLMProvider instance. This is the single most important parameter: it determines which backend receives every request.
- **Model.** Provider-specific model name. When omitted, each provider falls back to a default:
  - ollama → "gpt-oss:20b-cloud"
  - openai → "gpt-4"
  - groq → "llama-3.3-70b-versatile"
  - gemini → "gemini-pro"
- **API key.** Required for cloud providers (openai, groq, gemini, azure); omit for local providers like Ollama.
- **Endpoint.** Custom endpoint URL. Required for azure (the Azure OpenAI endpoint); can also override the default base URL for self-hosted models.
Behavior parameters
- **System message.** Custom system prompt. When omitted, an appropriate prompt is auto-generated from role. Use this to define the agent’s persona, constraints, and output format.
- **Role.** Role hint used to select a built-in system prompt when system_message is not provided. Common values: "general", "copilot". Only used when system_message is None.
- **Max iterations.** Maximum number of LLM-tool loop iterations per chat() call. Protects against infinite tool loops; chat() returns "Max iterations reached." when the limit is hit.
- **Verbose.** Print verbose logs to stdout (iteration count, tool calls, streaming status, memory events). Use in development only.
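A minimal constructor sketch for the behavior parameters above. The keyword names max_iterations and verbose (and the import path) are assumptions; only the behavior is documented on this page, so check your Logicore version for the exact names:

```python
from logicore import Agent  # import path assumed

agent = Agent(
    provider="ollama",  # provider kwarg name assumed; local backend, no API key
    system_message="You are a terse code reviewer. Reply in bullet points.",
    max_iterations=5,   # assumed kwarg name: caps the LLM-tool loop per chat()
    verbose=True,       # assumed kwarg name: development-only stdout logging
)
```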
Tools & skills parameters
- **Tools.** Initial tool set. Accepts:
  - A list of Python callables, auto-registered with schemas inferred from type hints and docstrings.
  - A list of raw JSON schema dicts, added as-is.
  - True, which loads all built-in Logicore tools (filesystem, web, bash, etc.).
- **Skills.** Skill names (strings) or Skill objects to load at initialization. Skills bundle tool schemas, executors, and system-prompt instructions into a reusable package.
- **Workspace root.** Filesystem root used by file and bash tools. Constrains tool execution to this directory; important for security when exposing filesystem tools.
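A setup sketch for tools and the workspace root. The kwarg names tools and workspace are assumptions; the page documents the behavior (callable auto-registration, filesystem confinement) but not the exact names:

```python
from logicore import Agent  # import path assumed

def word_count(text: str) -> int:
    """Count whitespace-separated words in text."""
    return len(text.split())

agent = Agent(
    provider="ollama",      # provider kwarg name assumed
    tools=[word_count],     # assumed kwarg name: schema inferred from hints + docstring
    workspace="./sandbox",  # assumed kwarg name: file/bash tools confined here
)
```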
Memory & observability parameters
- **Memory.** Enable persistent memory via AgentrySimpleMem. When True, the agent indexes messages and allows on-demand RAG retrieval via the memory tool. Memory is scoped by role and session_id.
- **Context summarization.** Summarize older messages when the context window grows long. Reduces token cost on extended conversations; uses the same provider to generate the summary.
- **Telemetry.** Track per-session token usage, tool call counts, latency, and provider info. Access via the agent.telemetry property.
- **Capabilities override.** Manual override for model capability detection (supports_tools, supports_vision). When None, capabilities are detected automatically on the first chat() call; pass a dict or ModelCapabilities object to skip detection.
chat()
The primary entry point. Runs the full agent loop and returns the final assistant message.
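A minimal call sketch (the import path and provider kwarg name are assumptions):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

# chat() runs the full loop (LLM -> tools -> LLM) and returns only the
# final assistant message as a str.
reply = agent.chat("What is a vector clock?", session_id="user-42")
print(reply)
```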
- **Message.** The user’s message. Pass a str for text, or a list of content blocks for multimodal input (e.g., text + image URLs).
- **session_id.** Identifies the conversation thread. Use a unique value per user or per logical thread to keep histories isolated. The session is created automatically on first use.
- **callbacks.** Per-call callback overrides, merged with any callbacks set via set_callbacks(). Supported keys:
  - "on_token": called for each streamed token
  - "on_tool_start": called before each tool execution
  - "on_tool_end": called after each tool execution
  - "on_tool_approval": approval gate for tool execution
  - "on_final_message": called when the final answer is ready
- **stream.** Enable token streaming. Requires "on_token" in callbacks (or streaming_funct) to receive tokens progressively.
- **streaming_funct.** Shorthand that sets callbacks["on_token"] and enables streaming in one argument; equivalent to passing stream=True, callbacks={"on_token": fn}.
- **Execution summary.** Append an LLM-generated execution summary to the response. Useful for debugging, demos, or audit records.
- **Provider kwargs.** Forwarded to the provider (e.g., temperature=0.2, max_tokens=800).

Returns str: the final assistant message after all tool iterations complete. Intermediate tool calls are invisible to the caller unless surfaced via callbacks; the return value is always the final synthesized answer.
Tool Management Methods
register_tool_from_function(func)
Convert any Python callable into a tool and register it. Schema is inferred automatically from type hints and Google-style or Sphinx-style docstrings.
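For example (a hypothetical converter with a Google-style docstring; the Agent construction details are assumptions):

```python
from logicore import Agent  # import path assumed

def inches_to_cm(inches: float) -> float:
    """Convert inches to centimeters.

    Args:
        inches: Length in inches.

    Returns:
        Length in centimeters.
    """
    return inches * 2.54

agent = Agent(provider="ollama")
agent.register_tool_from_function(inches_to_cm)  # schema inferred automatically
```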
add_custom_tool(schema, executor)
Register a tool directly from a raw JSON schema and an executor callable. Use when you need full control over the schema structure.
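A sketch of a schema/executor pair. The OpenAI function-calling schema shape is an assumption about what Logicore expects, and the round_price tool itself is hypothetical:

```python
# Schema in OpenAI function-calling style; whether Logicore expects exactly
# this shape is an assumption, so check the framework's schema conventions.
round_price_schema = {
    "type": "function",
    "function": {
        "name": "round_price",
        "description": "Round a price to two decimal places.",
        "parameters": {
            "type": "object",
            "properties": {"amount": {"type": "number"}},
            "required": ["amount"],
        },
    },
}

def round_price(amount: float) -> str:
    """Executor: receives the validated args, returns a string for the LLM."""
    return f"{amount:.2f}"

# agent.add_custom_tool(round_price_schema, round_price)
```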
load_skill(skill) / load_skills(skills)
Add pre-built skill packages. A Skill bundles tool schemas, executors, and system-prompt instructions.
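For example (the skill names here are purely illustrative; use whatever skills your installation ships):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")
agent.load_skill("web_research")                 # by name (illustrative)
agent.load_skills(["summarize", "code_review"])  # several at once (illustrative)
```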
Session Management Methods
get_session(session_id)
Return the AgentSession for a given ID, creating it if it does not exist.
clear_session(session_id)
Erase the message history for a session while keeping the system message.
Multi-session example
Use session_id to handle multiple users from a single agent instance:
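A sketch, assuming the Agent import path and provider kwarg name:

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

# Each session_id keeps an isolated history; sessions are created on first use.
agent.chat("My name is Ada.", session_id="user-ada")
agent.chat("My name is Bob.", session_id="user-bob")

# Ada's thread still knows her name; Bob's thread never saw it.
print(agent.chat("What is my name?", session_id="user-ada"))
```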
Approval Workflow
set_auto_approve_all(enabled)
Bypass all approval checks. Every tool call runs without a callback.
set_callbacks(**kwargs)
Register persistent callbacks that apply to every chat() call on this agent instance. Per-call callbacks dicts are merged on top of these at runtime.
| Key | Signature | When called |
|---|---|---|
| on_token | (token: str) -> None | Each streaming token |
| on_tool_start | (session_id, tool_name, args) -> None | Before tool execution |
| on_tool_end | (session_id, tool_name, result) -> None | After tool execution |
| on_tool_approval | async (session_id, tool_name, args) -> bool \| dict | Approval gate: return True/False or a modified args dict |
| on_final_message | (session_id, content) -> None | When the final answer is ready |
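A sketch of persistent callbacks (construction details are assumptions; the callback signatures follow the table above):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

def on_tool_start(session_id, tool_name, args):
    print(f"[{session_id}] starting {tool_name} with {args}")

def on_tool_end(session_id, tool_name, result):
    print(f"[{session_id}] {tool_name} -> {result!r}")

# These apply to every chat() call; per-call callbacks dicts merge on top.
agent.set_callbacks(on_tool_start=on_tool_start, on_tool_end=on_tool_end)
```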
The on_tool_approval callback can return a modified args dict instead of a boolean. When a dict is returned, the agent treats it as approval and uses the modified arguments for execution, which is useful for sanitizing inputs.
Usage Examples
Basic tool use
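A sketch; the tool, the tools kwarg name, and the import path are assumptions:

```python
from logicore import Agent  # import path assumed

def get_time_utc() -> str:
    """Return the current UTC time in ISO 8601 format."""
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

agent = Agent(provider="ollama", tools=[get_time_utc])  # tools kwarg assumed
print(agent.chat("What time is it in UTC?", session_id="demo"))
```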
Streaming with real-time output
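A sketch using the documented stream and callbacks parameters (construction details are assumptions):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

def on_token(token: str) -> None:
    print(token, end="", flush=True)  # render each token as it arrives

reply = agent.chat(
    "Explain CRDTs in two sentences.",
    session_id="demo",
    stream=True,
    callbacks={"on_token": on_token},
)
```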
Approval callback (production pattern)
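A sketch of an approval gate. The tool names are illustrative; the True/False/modified-dict return contract is the one documented above:

```python
import asyncio

SAFE_TOOLS = {"web_search", "read_file"}  # illustrative tool names

async def on_tool_approval(session_id: str, tool_name: str, args: dict):
    """Approval gate: return True/False, or a modified args dict (counts as approval)."""
    if tool_name in SAFE_TOOLS:
        return True
    if tool_name == "write_file":
        # Sanitize: force writes onto a relative path before approving.
        return {**args, "path": args["path"].lstrip("/")}
    return False  # deny everything else

# agent.chat(msg, session_id="prod-1", callbacks={"on_tool_approval": on_tool_approval})
```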
Multi-session management
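A lifecycle sketch using the documented get_session / clear_session methods (construction details are assumptions):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

agent.chat("Remember: my build target is arm64.", session_id="user-1")

session = agent.get_session("user-1")  # returns the AgentSession, creating it if needed
agent.clear_session("user-1")          # wipe the history, keep the system message
```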
Persistent memory across sessions
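A sketch; the memory kwarg name is an assumption (only the behavior is documented):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama", memory=True)  # memory kwarg name assumed

agent.chat("My deploy region is eu-west-1.", session_id="ops")

# The agent can later call the built-in memory tool to recall this fact;
# memory context is not injected into every prompt automatically.
print(agent.chat("Which region do I deploy to?", session_id="ops"))
```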
Memory context is not injected automatically at the start of every chat (to prevent context pollution). Instead, the agent can call the built-in memory tool on demand when it needs past facts.
Walkthrough / audit output
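A sketch; the walkthrough kwarg name is an assumption taken from this section’s title:

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama")

reply = agent.chat(
    "Audit the three most recent log files.",
    session_id="audit",
    walkthrough=True,  # assumed kwarg: appends an LLM-generated execution summary
)
print(reply)  # final answer followed by the execution summary
```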
Properties
| Property | Type | Description |
|---|---|---|
| agent.system_prompt | str | Currently active system prompt |
| agent.telemetry | dict | Token usage, latency, tool call counts (requires telemetry=True) |
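A sketch using the documented telemetry=True flag and agent.telemetry property (other construction details are assumptions):

```python
from logicore import Agent  # import path assumed

agent = Agent(provider="ollama", telemetry=True)
agent.chat("ping", session_id="u1")

# Per-session token usage, latency, tool call counts, provider info.
print(agent.telemetry)
```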
Execution Summary Methods
Supported Providers
| String shorthand | Provider class | Notes |
|---|---|---|
"ollama" | OllamaProvider | Local inference; no API key needed |
"openai" | OpenAIProvider | Requires api_key |
"groq" | GroqProvider | Requires api_key; fast inference |
"gemini" | GeminiProvider | Requires api_key |
"azure" | AzureProvider | Requires api_key and endpoint |