What Is an Agent?

An Agent is an autonomous AI worker that understands natural language tasks, decides which tools to use, maintains context across multi-turn conversations, and executes actions safely with configurable approval workflows. All agents share a common loop:
1. Receive: Accept user input, plain text or multimodal (text + images).
2. Reason: Query the LLM with the assembled context (session history, system prompt, and tool schemas).
3. Decide: The LLM either responds directly or requests one or more tool calls.
4. Approve & Execute: Approval callbacks gate execution. Approved tools run; results are appended to history.
5. Synthesize: The LLM continues with the tool result in context until it produces a final answer.
6. Return: chat() returns the final assistant message as a str.
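The loop above can be sketched in plain Python. This is a simplified illustration, not logicore's implementation: fake_llm, TOOLS, and the message shapes are invented stand-ins for a real model and tool registry.

```python
# Minimal sketch of the agent loop with a stubbed LLM.
# fake_llm and TOOLS are illustrative stand-ins, not part of any library.

def fake_llm(history):
    """Stub model: requests a tool once, then answers from the tool result."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "get_time", "args": {}}
    return {"answer": "It is 12:00."}

TOOLS = {"get_time": lambda: "12:00"}

def chat(user_input, approve=lambda name, args: True):
    history = [{"role": "user", "content": user_input}]   # 1. Receive
    while True:
        decision = fake_llm(history)                      # 2. Reason
        if "answer" in decision:                          # 3. Decide: direct response
            return decision["answer"]                     # 6. Return
        name, args = decision["tool"], decision["args"]
        if approve(name, args):                           # 4. Approve & Execute
            history.append({"role": "tool", "content": TOOLS[name](**args)})
        else:
            history.append({"role": "tool", "content": f"{name} denied"})
        # 5. Synthesize: loop back to the LLM with the tool result in context

print(chat("What time is it?"))  # It is 12:00.
```

Note that a denial does not abort the loop; the denial message goes into history and the model continues, which matches the pipeline described later on this page.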

Agent Types at a Glance

The matrix compares five agent types (BasicAgent, Agent, SmartAgent, CopilotAgent, and MCPAgent) across the following features; the per-agent summaries below describe which features each type supports:

  • Multi-turn chat
  • Custom tools
  • Auto-schema from functions
  • Built-in tools (web, bash, notes…)
  • Web search
  • Bash execution
  • Cron scheduling
  • Persistent memory
  • Approval workflows
  • Streaming
  • Skills
  • Project mode
  • Coding convenience methods
  • MCP servers
  • Custom approval callbacks
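One feature in the matrix, auto-schema from functions, means deriving a tool schema from a plain Python function's signature. A minimal sketch of the idea (not logicore's actual generator) using the standard inspect module:

```python
import inspect
from typing import get_type_hints

def schema_from_function(fn):
    """Derive a JSON-style tool schema from a function signature (sketch)."""
    hints = get_type_hints(fn)
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        params[name] = {
            "type": hints.get(name, str).__name__,
            # Parameters without a default are treated as required.
            "required": p.default is inspect.Parameter.empty,
        }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": params,
    }

def web_search(query: str, max_results: int = 5) -> str:
    """Search the web and return a summary."""
    ...

# schema_from_function(web_search) yields name "web_search",
# with "query" required (str) and "max_results" optional (int).
```

A generator like this lets you pass ordinary functions as tools without hand-writing JSON schemas; type hints and docstrings carry the metadata.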

When to Use Each Agent

BasicAgent

Best for: Learning, prototypes, simple Q&A bots.
Minimum viable configuration: just pick a provider and call chat(). No tools, no approval callbacks, no memory required.
Setup time: ~2 minutes

Agent

Best for: Production applications with custom tools.
Full-featured base class: bring your own Python functions as tools, wire up approval callbacks, enable persistent memory, and stream tokens to the UI.
Setup time: ~5 minutes

SmartAgent

Best for: Developer assistants, project-scoped work, iterative tasks.
Ships with web search, bash, notes, datetime, cron, and memory tools out of the box. Supports solo and project modes with automatic learning capture.
Setup time: ~5 minutes

CopilotAgent

Best for: Coding assistants, file operations, code review and generation.
Pre-loaded with filesystem and execution tools, a coding-focused system prompt, and convenience methods: explain_code(), review_file(), write_code(), fix_bug().
Setup time: ~3 minutes

MCPAgent

Best for: Workflows that require many external tools via MCP servers.
Connects to MCP servers for external tool integrations. Supports deferred tool loading to handle hundreds of tools without hitting context limits.
Setup time: ~10 minutes

Quick Code Examples

from logicore.agents.agent_basic import BasicAgent
import asyncio

agent = BasicAgent(provider="ollama", model="qwen2:7b")
response = asyncio.run(agent.chat("What is machine learning?"))
print(response)

Execution Pipeline

When you call agent.chat(), here is what happens internally:
User Query
    ↓
Context Assembly        ← session history + persistent memory + system prompt + tool schemas
    ↓
LLM Inference           ← provider-normalized request (OpenAI / Ollama / Gemini / Azure / Groq)
    ↓
Tool Decision?
  ├─ No  ──► Final Answer  ──► return str
  └─ Yes ──► Tool Approval Callback
                ├─ Denied ──► denial message appended to history, loop continues
                └─ Approved ──► Execute Tool ──► result appended ──► loop back to LLM
The loop runs up to max_iterations times (default 40) before returning "Max iterations reached.".
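The iteration cap can be sketched as a simple bounded loop. This is illustrative only; ask_llm is an invented stand-in for the reason/decide step:

```python
def run_loop(ask_llm, max_iterations=40):
    """Bounded agent loop: give up after max_iterations tool rounds (sketch)."""
    for _ in range(max_iterations):
        decision = ask_llm()
        if decision.get("answer") is not None:
            return decision["answer"]       # final answer: exit early
        # ...otherwise execute the requested tool and loop back to the LLM...
    return "Max iterations reached."

# A model that never produces a final answer exhausts the budget:
print(run_loop(lambda: {"tool": "spin"}))  # Max iterations reached.
```

The cap is a safety net against models that keep requesting tools forever; well-behaved runs exit through the early return long before hitting it.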

Approval & Safety Model

By default, every tool that is not on the built-in safe list requires explicit approval. You have two options:
# Option A: auto-approve everything (dev/demo only)
agent.set_auto_approve_all(True)

# Option B: selective approval callback (recommended for production)
async def my_approval(session_id: str, tool_name: str, args: dict) -> bool:
    if tool_name in {"delete_file", "execute_command"}:
        return False
    return True

agent.set_callbacks(on_tool_approval=my_approval)
Never use set_auto_approve_all(True) in production. Always gate destructive tools behind an approval callback.
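A stricter pattern than the denylist above is deny-by-default: approve only tools on a known read-only allowlist. The tool names here are illustrative, and the callback signature mirrors the one shown above:

```python
import asyncio

# Illustrative tool names: adjust to the tools your agent actually exposes.
READ_ONLY = {"web_search", "get_datetime", "read_file"}
DESTRUCTIVE = {"delete_file", "execute_command"}

async def policy_approval(session_id: str, tool_name: str, args: dict) -> bool:
    """Deny-by-default policy: only known read-only tools pass (sketch)."""
    if tool_name in DESTRUCTIVE:
        return False
    return tool_name in READ_ONLY  # unknown tools are denied too

print(asyncio.run(policy_approval("s1", "delete_file", {"path": "/tmp/x"})))  # False
print(asyncio.run(policy_approval("s1", "web_search", {"query": "GIL"})))     # True
```

Deny-by-default means a newly added tool is blocked until you consciously allowlist it, which fails safe when the tool set grows.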

Streaming

All agent types that support streaming accept an on_token callback or a streaming_funct shorthand:
def print_token(token: str):
    print(token, end="", flush=True)

response = await agent.chat(
    "Explain Python's GIL in detail",
    stream=True,
    callbacks={"on_token": print_token},
)
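Conceptually, streaming just invokes the callback once per token as chunks arrive, then returns the assembled message. A stubbed illustration (fake_stream is invented; it stands in for a streaming LLM response):

```python
def fake_stream():
    """Stub token source standing in for a streaming LLM response."""
    yield from ["The ", "GIL ", "serializes ", "bytecode ", "execution."]

def stream_chat(on_token):
    parts = []
    for token in fake_stream():
        on_token(token)          # fire the callback for each token as it arrives
        parts.append(token)
    return "".join(parts)        # the final message is also returned whole

collected = []
print(stream_chat(collected.append))  # The GIL serializes bytecode execution.
```

This is why a streaming call can both update a UI incrementally and still hand back the complete string at the end.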

Memory

Agents support two memory layers:
Layer               Scope                  How to enable
Session memory      Current conversation   Automatic — history is kept per session_id
Persistent memory   Across sessions        Pass memory=True to the constructor
# Persistent memory example
agent = Agent(llm="ollama", memory=True)
await agent.chat("The deployment timeout is 30 seconds.")

# Later session — agent retrieves the fact via RAG memory tool
agent2 = Agent(llm="ollama", memory=True)
await agent2.chat("What is the deployment timeout?")
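The distinction between the two layers can be sketched with plain dictionaries. This illustrates the scoping only, not logicore's actual storage (which the docs describe as a RAG memory tool):

```python
PERSISTENT = {}   # shared across sessions: stands in for the persistent store

class SessionMemory:
    """Per-session history plus access to the shared persistent layer (sketch)."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.history = []                 # session layer: dies with the session

    def remember(self, key, value):
        PERSISTENT[key] = value           # persistent layer: survives sessions

    def recall(self, key):
        return PERSISTENT.get(key)

s1 = SessionMemory("a")
s1.remember("deployment_timeout", "30 seconds")

s2 = SessionMemory("b")                   # a brand-new session...
print(s2.recall("deployment_timeout"))    # 30 seconds  ...still sees the fact
```

Session history starts empty for every new session_id, while anything written to the persistent layer is retrievable from any later session.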

Next Steps

  • BasicAgent — minimal setup, @tool decorator, factory function
  • Agent — constructor params, all methods, multi-session patterns
  • SmartAgent — modes, built-in tools, project workflows
  • MCPAgent — MCP servers, deferred tool loading, dynamic tool discovery