SmartAgent extends Agent with a curated set of built-in tools and two operating modes: solo for broad, open-ended tasks and project for context-driven, codebase-specific work. It is the recommended choice when you want practical capabilities (web search, bash, notes, scheduling) without manually wiring each tool.
from logicore.agents.agent_smart import SmartAgent, SmartAgentMode

Modes

Solo Mode

General-purpose chat and reasoning. The agent is not bound to any project context and can explore broadly.
  • Ideal for discovery, brainstorming, and ad-hoc technical queries.
  • Web search, bash, notes, datetime, cron, and memory tools are all available.
  • No project context is injected — responses are scoped only by the conversation history.
agent = SmartAgent(llm="ollama", mode="solo")
response = await agent.chat("What are the latest Python packaging best practices?")

SmartAgentMode Constants

class SmartAgentMode:
    SOLO    = "solo"     # General chat, greater reasoning focus
    PROJECT = "project"  # Project-centered with context awareness
Pass these constants — or their string equivalents — to the mode constructor parameter and to set_mode().

Built-in Tools

SmartAgent loads a curated toolkit at initialization. The base Agent's default tools are intentionally not loaded — the toolkit stays lean and focused:
Tool             Category    Description
web_search       Web         Search the web and return results
image_search     Web         Search for images with inline results
datetime         Utility     Get current date and time
notes            Utility     Create and retrieve persistent notes
memory           Memory      Store and retrieve facts via RAG
bash             Execution   Run shell commands
add_cron_job     Scheduling  Schedule recurring tasks
list_cron_jobs   Scheduling  List all scheduled jobs
remove_cron_job  Scheduling  Cancel a scheduled job
get_crons        Scheduling  Get cron job details
You can extend the built-in toolkit with register_tool_from_function() or add_custom_tool() inherited from Agent. Custom tools are merged with the built-in set.
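As a sketch, a plain function can be exposed as an extra tool. The word_count helper here is hypothetical, and the assumption is that register_tool_from_function() infers the tool schema from the signature and docstring (see the Agent reference for the exact behavior):

```python
from logicore.agents.agent_smart import SmartAgent

def word_count(text: str) -> int:
    """Count the words in a text snippet."""
    return len(text.split())

agent = SmartAgent(llm="ollama", mode="solo")
agent.register_tool_from_function(word_count)  # merged with the built-in toolkit
```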

Constructor

SmartAgent(
    llm: str | LLMProvider = "ollama",
    model: str = None,
    api_key: str = None,
    mode: str = "solo",
    project_id: str = None,
    debug: bool = False,
    telemetry: bool = False,
    memory: bool = False,
    max_iterations: int = 40,
    capabilities: Any = None,
    skills: list = None,
    workspace_root: str = None,
)
llm (str or LLMProvider, default: "ollama")
LLM provider. Pass a string shorthand ("ollama", "openai", "gemini", "groq", "azure") or an LLMProvider instance. Determines backend routing for all model calls.

model (str, default: None)
Provider-specific model name. Always specify explicitly in production for consistent behavior.

api_key (str, default: None)
API key for cloud providers. Not required for ollama.

mode (str, default: "solo")
Operating mode: "solo" or "project". Controls the system prompt template and whether project context is injected. Can be changed at runtime via set_mode(), switch_to_project(), or switch_to_solo().

project_id (str, default: None)
ID of an existing project to bind at initialization. When set alongside mode="project", the agent immediately loads that project's context. Create projects with create_project() before using this.

debug (bool, default: False)
Verbose logging — prints tool names, mode switches, and learning capture events.

memory (bool, default: False)
Enable persistent memory indexing via AgentrySimpleMem. Independent of the built-in memory tool, which is always available for explicit RAG lookups.

max_iterations (int, default: 40)
Maximum tool-call iterations per chat() call.

skills (list, default: None)
Additional skill names or Skill objects to load at startup.

workspace_root (str, default: None)
Root directory for filesystem and bash tools. Set this when the agent should only operate within a specific project directory.
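Putting several of these together, a minimal sketch (the model name, API key, and workspace path are placeholders, and the "api-core" project is assumed to exist):

```python
from logicore.agents.agent_smart import SmartAgent

agent = SmartAgent(
    llm="openai",
    model="gpt-4o-mini",                      # placeholder; use your provider's model name
    api_key="sk-...",                         # not required for ollama
    mode="project",
    project_id="api-core",                    # must already exist (see create_project())
    memory=True,
    workspace_root="/srv/projects/api-core",  # confine bash/filesystem tools here
)
```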

chat()

await agent.chat(
    user_input: str | list,
    session_id: str = "default",
    stream: bool = False,
    generate_walkthrough: bool = False,
    **kwargs,
) -> str
Enhanced version of Agent.chat(). In project mode, it prepends the project’s stored context to the session before calling the LLM. After a response, it scans for significant learnings and auto-stores them.
user_input (str or list, required)
User message. Accepts plain text or a multimodal content list.

session_id (str, default: "default")
Conversation thread. Same ID preserves history across turns.

stream (bool, default: False)
Enable token streaming with on_token callback.

generate_walkthrough (bool, default: False)
Append an LLM-generated execution summary.
Returns: str — final assistant message.
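For example, streaming a response token by token. This is a sketch: set_callbacks() is inherited from Agent, and the on_token keyword used here is an assumption based on the stream parameter description; check the Agent reference for the exact callback names.

```python
agent.set_callbacks(on_token=lambda token: print(token, end="", flush=True))

response = await agent.chat(
    "Summarize our auth approach",
    session_id="planning",
    stream=True,
)
```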

Project Management Methods

create_project()

agent.create_project(
    project_id: str,
    title: str,
    goal: str = "",
    environment: dict[str, str] = None,
    key_files: list[str] = None,
) -> ProjectContext
Create a new project and persist it to project memory. Does not automatically switch to the project — call switch_to_project() after.
agent.create_project(
    project_id="api-core",
    title="API Core",
    goal="Build stable authentication APIs with FastAPI",
    environment={"FRAMEWORK": "fastapi", "PYTHON": "3.12"},
    key_files=["src/", "tests/", "pyproject.toml"],
)

switch_to_project(project_id)

Load a project and switch to project mode. Rebuilds the system prompt with the project context.
project = agent.switch_to_project("api-core")
if project:
    print(f"Switched to: {project.title}")
Returns the ProjectContext if found, None if the project ID does not exist.

switch_to_solo()

Switch back to solo mode and clear the active project context.
agent.switch_to_solo()

set_mode(mode, project_id=None)

Low-level mode switch. Updates the system prompt and all active sessions.
agent.set_mode(SmartAgentMode.PROJECT, project_id="api-core")
agent.set_mode(SmartAgentMode.SOLO)

list_projects()

Return all projects stored in project memory.
for project in agent.list_projects():
    print(f"{project.project_id}: {project.title}")

Memory Methods

remember(memory_type, title, content, tags)

Store a memory entry directly without going through the LLM:
await agent.remember(
    memory_type="learning",
    title="Auth middleware pattern",
    content="Use JWT verification middleware before route handlers.",
    tags=["auth", "fastapi"],
)

recall(query, limit=5)

Search stored memories:
entries = await agent.recall("authentication middleware", limit=3)
for entry in entries:
    print(entry.title, entry.content)

Reasoning Helper

reason(problem, session_id)

Explicitly request step-by-step chain-of-thought reasoning for a complex problem:
conclusion = await agent.reason(
    "Should we use JWT or session cookies for our API auth layer?"
)
print(conclusion)

Status

info: dict = agent.status()
Returns a dict with:
Key              Type        Description
mode             str         Current mode ("solo" or "project")
project_id       str | None  Active project ID
project_title    str | None  Active project title
model            str         Provider model name
tools_loaded     int         Number of registered tools
sessions_active  int         Number of live sessions
memory_entries   int         Memory entries in active project

Automatic Learning Capture

In project mode, SmartAgent scans each response for significant learning indicators and automatically stores qualifying snippets in project memory:
Learning indicators: "the solution is", "best practice", "key insight",
                     "remember to", "the pattern is", "recommendation", ...
Casual responses (greetings, short confirmations) are filtered out. Only one learning is captured per response to avoid noise.
Auto-captured learnings are tagged ["auto-captured"] and stored under the active project_id. Use recall() or the memory tool to retrieve them in future sessions.

Examples

from logicore.agents.agent_smart import SmartAgent
import asyncio

async def main():
    agent = SmartAgent(llm="ollama", mode="solo")
    response = await agent.chat("Find the latest Python async best practices")
    print(response)

asyncio.run(main())
The agent will use web_search to find current information and synthesize it into an answer.

Comparison: Solo vs. Project Mode

Aspect            Solo                        Project
System prompt     Generic SmartAgent prompt   Project goal + environment + key files injected
Learning capture  Off                         On — significant learnings auto-stored
Project context   None                        Loaded from ProjectMemory on each chat()
recall() scope    Global                      Scoped to project_id
Best for          Exploration, brainstorming  Sustained delivery work
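A typical session moves between the two modes. A sketch, assuming a local Ollama backend and an existing "api-core" project:

```python
agent = SmartAgent(llm="ollama", mode="solo")

# Explore broadly first, then pin the work to a project.
await agent.chat("Compare JWT and session-cookie auth for a FastAPI service")

agent.switch_to_project("api-core")   # project context now injected, learning capture on
await agent.chat("Apply the JWT approach to our auth module")

agent.switch_to_solo()                # back to global scope
```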

Extends Agent

SmartAgent inherits all methods from Agent:
  • register_tool_from_function(), add_custom_tool(), load_skill()
  • get_session(), clear_session()
  • set_callbacks(), set_auto_approve_all()
  • get_execution_summary(), print_execution_summary()
See the Agent reference for full documentation of inherited methods.
