This guide walks through the core building blocks: creating an agent, registering a tool, streaming output, controlling tool approval, and holding a multi-turn conversation.
1. Install Logicore

pip install logicore
For local models, also install the Ollama extra and pull a model:
pip install "logicore[ollama]"
ollama run qwen3.5:0.8b
See the Installation guide for all provider options and environment variable setup.
2. Create your first agent

An Agent takes a provider string or a provider instance and responds to chat() calls:
from logicore.agents.agent import Agent
import asyncio

async def main():
    # Pass a provider name as a string: "ollama", "openai", "gemini", "groq", "azure"
    agent = Agent(llm="ollama")

    response = await agent.chat("What is an AI agent?")
    print(response)

asyncio.run(main())
Save the script as agent.py and run it:
python agent.py
The Agent constructor accepts several optional parameters:
| Parameter | Default | Description |
| --- | --- | --- |
| `llm` | `"ollama"` | Provider string or `LLMProvider` instance |
| `model` | `None` | Model name override |
| `role` | `"general"` | Shapes the system prompt persona |
| `system_message` | `None` | Custom system prompt (overrides `role`) |
| `tools` | `[]` | List of Python callables to register |
| `max_iterations` | `40` | Max tool-call loop iterations |
| `debug` | `False` | Verbose logging |
| `memory` | `False` | Enable persistent memory |
| `context_compression` | `False` | Summarize old messages when context grows long |
3. Add a tool

Tools are plain Python functions. Logicore reads the function signature, type hints, and docstring to build the JSON schema that gets sent to the LLM — no decorators or schema files needed.
def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """
    Fetches the current weather for a specific location.

    Args:
        location (str): The city or zip code to look up.
        unit (str): The temperature unit. Options: 'fahrenheit', 'celsius'.

    Returns:
        dict: Weather information with temperature and conditions.
    """
    if "seattle" in location.lower():
        return {"temperature": 72, "conditions": "sunny", "unit": unit}
    return {"temperature": 65, "conditions": "cloudy", "unit": unit}
Always include **kwargs in tool signatures. Local models sometimes generate extra parameters that aren’t in the schema; **kwargs absorbs them gracefully instead of raising a TypeError.
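Because tools are plain Python functions, you can call them directly to verify their behavior before handing them to an agent:

```python
def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """Fetches the current weather for a specific location."""
    if "seattle" in location.lower():
        return {"temperature": 72, "conditions": "sunny", "unit": unit}
    return {"temperature": 65, "conditions": "cloudy", "unit": unit}

# Call it like any function -- no agent required.
print(check_weather("Seattle, WA"))
# -> {'temperature': 72, 'conditions': 'sunny', 'unit': 'fahrenheit'}

print(check_weather("Boston", unit="celsius"))
# -> {'temperature': 65, 'conditions': 'cloudy', 'unit': 'celsius'}

# Extra keyword arguments are absorbed by **kwargs instead of raising TypeError.
print(check_weather("Seattle", model_hint="ignored"))
```

If the function behaves correctly in isolation, any surprises later are in the agent's tool-calling loop, not the tool itself.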
Register the function by passing it to the tools list:
from logicore.agents.agent import Agent
import asyncio

def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """Fetches the current weather for a specific location."""
    if "seattle" in location.lower():
        return {"temperature": 72, "conditions": "sunny", "unit": unit}
    return {"temperature": 65, "conditions": "cloudy", "unit": unit}

async def main():
    agent = Agent(
        llm="ollama",
        tools=[check_weather]
    )

    response = await agent.chat("What's the weather in Seattle?")
    print(response)

asyncio.run(main())
The agent now:
  1. Receives the user question
  2. Decides whether to call check_weather
  3. Executes the function and captures the return value
  4. Synthesizes a final natural-language answer

How schema auto-generation works

Logicore converts your function into a JSON schema automatically:
def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """Fetches the current weather for a specific location."""
    ...
The conversion rules are:
  • Type hints → JSON Schema types (str → "string", etc.)
  • Docstring Args: block → per-parameter description fields
  • Default values → parameters are excluded from required
  • **kwargs → absorbs any extra parameters the LLM hallucinates
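Logicore's conversion code isn't shown here, but the rules above can be approximated with the standard inspect module. The sketch below is illustrative only (the function name and schema shape are assumptions, not Logicore's internals):

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema type names.
_TYPE_MAP = {str: "string", int: "integer", float: "number",
             bool: "boolean", dict: "object", list: "array"}

def build_schema(fn) -> dict:
    """Sketch of signature -> JSON Schema conversion (not Logicore's actual code)."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        if param.kind is inspect.Parameter.VAR_KEYWORD:
            continue  # **kwargs never appears in the schema
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required parameter
    doc = (fn.__doc__ or "").strip()
    return {
        "name": fn.__name__,
        "description": doc.splitlines()[0] if doc else "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """Fetches the current weather for a specific location."""
    ...

print(build_schema(check_weather))
```

Note how `unit` drops out of `required` because it has a default, and `**kwargs` is skipped entirely.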
4. Enable streaming

Pass an on_token callback and stream=True to receive tokens as they arrive instead of waiting for the full response:
async def main():
    agent = Agent(llm="ollama", tools=[check_weather])

    def on_token(token):
        print(token, end="", flush=True)

    response = await agent.chat(
        "What's the weather in Seattle?",
        callbacks={"on_token": on_token},
        stream=True
    )
    print("\nFinal:", response)

asyncio.run(main())
The on_token callback fires for every token in the streaming response. response holds the complete assembled text when the call returns.
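The callback contract is just "a callable that accepts one token", so you can do more than print. A self-contained sketch (no Logicore import needed) of a callback object that both echoes and buffers tokens:

```python
class TokenBuffer:
    """Collects streamed tokens while optionally echoing them live."""

    def __init__(self):
        self.tokens = []

    def on_token(self, token: str) -> None:
        self.tokens.append(token)
        print(token, end="", flush=True)  # live echo, like the example above

    @property
    def text(self) -> str:
        return "".join(self.tokens)

# Simulate a streamed response to show the callback contract.
buf = TokenBuffer()
for tok in ["The ", "weather ", "is ", "sunny."]:
    buf.on_token(tok)

print()
print(buf.text)  # -> The weather is sunny.
```

Passing `callbacks={"on_token": buf.on_token}` would give you both the live stream and the buffered transcript.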
5. Control tool approval

By default, tools require approval before execution. For development and safe read-only tools, enable auto-approval:
agent = Agent(llm="ollama", tools=[check_weather])
agent.set_auto_approve_all(True)  # All tools execute without a confirmation prompt

response = await agent.chat("What's the weather in Seattle?")
For finer control, provide a custom approval callback. The callback receives the session ID, tool name, and call arguments, and returns True to allow or False to deny:
async def approve_tool(session_id, tool_name, args):
    if tool_name == "delete_file":
        return False  # Deny destructive operations
    return True  # Auto-approve everything else

agent.set_callbacks(on_tool_approval=approve_tool)
Never auto-approve tools that write to disk, execute shell commands, or make network requests in production. Use a custom approval callback to gate those operations.
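Because approval callbacks are ordinary async functions, a policy can be unit-tested in isolation before it gates anything real. A denylist-based sketch (the tool names here are hypothetical):

```python
import asyncio

# Hypothetical tool names used only for illustration.
DESTRUCTIVE_TOOLS = {"delete_file", "run_shell", "http_post"}

async def approve_tool(session_id, tool_name, args) -> bool:
    """Deny anything on the denylist; allow everything else."""
    if tool_name in DESTRUCTIVE_TOOLS:
        return False
    return True

async def main():
    # Read-only tool: approved.
    assert await approve_tool("s1", "check_weather", {"location": "Seattle"})
    # Destructive tool: denied regardless of arguments.
    assert not await approve_tool("s1", "delete_file", {"path": "/tmp/x"})

asyncio.run(main())
```

The same pattern extends to argument-level checks (for example, denying a path outside a sandbox directory) since the callback receives the full `args` dict.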
6. Hold a multi-turn conversation

Agents accumulate conversation history across chat() calls automatically. Each turn sees the full prior context:
agent = Agent(llm="ollama", tools=[check_weather])

# Turn 1
response1 = await agent.chat("What's the weather in Seattle?")

# Turn 2 — agent remembers the Seattle result
response2 = await agent.chat("How about New York?")

# Turn 3 — agent compares both results from history
response3 = await agent.chat("Which city is warmer?")
print(response3)
Each turn is appended to the session’s message list. No extra configuration is needed for basic multi-turn use.
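Conceptually, multi-turn memory is just an append-only message list that gets sent to the model on every call. The bookkeeping can be sketched as follows (the message shape is illustrative, not Logicore's exact internal format):

```python
# One session's history starts with the system prompt.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def record_turn(history, user_text, assistant_text):
    """Append one user/assistant exchange, as an agent does after each chat() call."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

record_turn(history, "What's the weather in Seattle?", "72 and sunny.")
record_turn(history, "How about New York?", "65 and cloudy.")

# A third turn would be answered against both prior exchanges.
print(len(history))  # -> 5 (1 system message + 2 turns x 2 messages)
```

This is also why `context_compression` exists: without it, the list (and the token bill) grows with every turn.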

Complete working example

The following puts all the pieces together: a named role, a tool, streaming, and auto-approval:
from logicore.agents.agent import Agent
import asyncio

def check_weather(location: str, unit: str = "fahrenheit", **kwargs) -> dict:
    """Fetches the current weather for a specific location."""
    if "seattle" in location.lower():
        return {"temperature": 72, "conditions": "sunny", "unit": unit}
    return {"temperature": 65, "conditions": "cloudy", "unit": unit}

async def main():
    agent = Agent(
        llm="ollama",
        tools=[check_weather],
        role="Weather Assistant",
        system_message="Use the check_weather tool to answer weather questions."
    )
    agent.set_auto_approve_all(True)

    def on_token(token):
        print(token, end="", flush=True)

    response = await agent.chat(
        "What's the weather in Seattle?",
        callbacks={"on_token": on_token},
        stream=True
    )

    print("\n\nFinal response:", response)

asyncio.run(main())

Next steps

  • Concepts: Understand agents, skills, sessions, and memory in depth.
  • Skills: Load pre-built capability packs for web research, code review, and more.
  • API reference: Complete reference for all classes and methods.

Troubleshooting

If the agent never calls your tool:
  • Confirm the tool is in the tools list: tools=[check_weather]
  • Make sure the function has a docstring — Logicore uses it as the tool description sent to the LLM
  • Enable debug logging with Agent(..., debug=True) to see the full tool schema and LLM requests
Local models (Ollama) occasionally generate parameter names that aren’t in the schema. Add **kwargs to your tool function to absorb them:
def my_tool(param: str, **kwargs) -> dict:
    ...
  • Check that your tool function doesn’t block the event loop. Use async def and await for I/O-bound work.
  • Set max_iterations to a lower value to prevent the agent from looping indefinitely: Agent(..., max_iterations=5)
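The event-loop point above is worth seeing concretely: an async def tool yields control while it waits, so independent slow tools can run concurrently. A self-contained sketch (the service names and sleep stand in for real network calls):

```python
import asyncio

async def fetch_status(service: str, **kwargs) -> dict:
    """Checks a service's health without blocking the event loop."""
    await asyncio.sleep(0.01)  # stands in for an awaited HTTP request
    return {"service": service, "status": "ok"}

async def main():
    # Two slow tools run concurrently instead of back to back.
    results = await asyncio.gather(fetch_status("auth"), fetch_status("billing"))
    print(results)

asyncio.run(main())
```

A synchronous `time.sleep()` in the same place would freeze the whole agent, including streaming output, for the duration of the call.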
| Provider | Fix |
| --- | --- |
| Ollama | Run `ollama serve` to start the local server |
| OpenAI | Set the `OPENAI_API_KEY` environment variable |
| Azure | Set `AZURE_ENDPOINT` and `AZURE_API_KEY` (or `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY`) |
| Gemini | Set `GEMINI_API_KEY` |
| Anthropic | Set `ANTHROPIC_API_KEY` |
| Groq | Set `GROQ_API_KEY` |
