Install Logicore
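The install command itself is not shown here; assuming the package is published under the project name (an assumption, not confirmed by this guide), installation would be:

```shell
# Assumed package name; adjust if the published name differs.
pip install logicore
```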
Create your first agent
Run it
The Agent takes a provider string or a provider instance and responds to chat() calls.

The Agent constructor accepts several optional parameters:

| Parameter | Default | Description |
|---|---|---|
| llm | "ollama" | Provider string or LLMProvider instance |
| model | None | Model name override |
| role | "general" | Shapes the system prompt persona |
| system_message | None | Custom system prompt (overrides role) |
| tools | [] | List of Python callables to register |
| max_iterations | 40 | Max tool-call loop iterations |
| debug | False | Verbose logging |
| memory | False | Enable persistent memory |
| context_compression | False | Summarize old messages when context grows long |
Add a tool
Tools are plain Python functions. Logicore reads the function signature, type hints, and docstring to build the JSON schema that gets sent to the LLM; no decorators or schema files are needed.

Register the function by passing it to the tools list, e.g. tools=[check_weather]. The agent now:

- Receives the user question
- Decides whether to call check_weather
- Executes the function and captures the return value
- Synthesizes a final natural-language answer
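A minimal sketch of a tool like check_weather, following the conventions above (type hints plus an Args: docstring block). The body is a placeholder; a real tool would call a weather API:

```python
def check_weather(city: str, units: str = "celsius") -> str:
    """Return a short weather summary for a city.

    Args:
        city: Name of the city to look up.
        units: Temperature units, e.g. "celsius" or "fahrenheit".
    """
    # Placeholder result; a real tool would query a weather API here.
    return f"Sunny, 22 {units} in {city}"
```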
How schema auto-generation works
Logicore converts your function into a JSON schema automatically:

- Type hints → JSON Schema types (str → "string", etc.)
- Docstring Args: block → per-parameter description fields
- Default values → parameters with defaults are excluded from required
- **kwargs → absorbs any extra parameters the LLM hallucinates
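The conversion rules above can be approximated in a few lines of stdlib Python. This is an illustrative stand-in, not Logicore's actual implementation, and it skips parsing the Args: block into per-parameter descriptions:

```python
import inspect
import typing

# Python type hint -> JSON Schema type name
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_schema(fn):
    """Build a JSON-schema-like dict from a plain Python function."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        if param.kind is inspect.Parameter.VAR_KEYWORD:
            continue  # **kwargs is absorbed at call time, never advertised
        props[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # params with defaults stay out of required
    doc = inspect.getdoc(fn) or ""
    return {
        "type": "object",
        "description": doc.splitlines()[0] if doc else "",
        "properties": props,
        "required": required,
    }

def check_weather(city: str, units: str = "celsius", **kwargs) -> str:
    """Return a short weather summary for a city."""
    return f"Sunny in {city}"

schema = build_schema(check_weather)
```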
Enable streaming
Pass an on_token callback and stream=True to receive tokens as they arrive instead of waiting for the full response. The on_token callback fires for every token in the streaming response; response holds the complete assembled text when the call returns.

Control tool approval
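The on_token contract described under "Enable streaming" can be simulated without the library. This toy stand-in (not the real Agent API) shows the relationship between the per-token callback and the assembled return value:

```python
def stream_chat(tokens, on_token):
    """Simulate a streaming call: invoke on_token once per token,
    then return the fully assembled response text."""
    pieces = []
    for tok in tokens:
        on_token(tok)        # fires for every token as it arrives
        pieces.append(tok)
    return "".join(pieces)   # what `response` holds after the call

received = []
response = stream_chat(["Hel", "lo", ", ", "world"], received.append)
```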
By default, tools require approval before execution. For development and safe read-only tools, enable auto-approval. For finer control, provide a custom approval callback. The callback receives the session ID, tool name, and call arguments, and returns True to allow or False to deny.

Complete working example
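A callback matching the approval signature described above (session ID, tool name, call arguments → bool) might look like this; the tool names in the allow-list are illustrative, not part of Logicore:

```python
# Illustrative allow-list of tools considered safe to run unattended.
READ_ONLY_TOOLS = {"check_weather", "search_docs"}

def approve_tool(session_id: str, tool_name: str, arguments: dict) -> bool:
    """Approval callback: allow known read-only tools, deny everything else."""
    return tool_name in READ_ONLY_TOOLS
```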
The following puts all the pieces together: a named role, a tool, streaming, and auto-approval.

Next steps
Concepts
Understand agents, skills, sessions, and memory in depth.
Skills
Load pre-built capability packs for web research, code review, and more.
API reference
Complete reference for all classes and methods.
Troubleshooting
Agent not using the tool
- Confirm the tool is in the tools list: tools=[check_weather]
- Make sure the function has a docstring; Logicore uses it as the tool description sent to the LLM
- Enable debug logging with Agent(..., debug=True) to see the full tool schema and LLM requests
TypeError from hallucinated parameters
Local models (Ollama) occasionally generate parameter names that aren’t in the schema. Add **kwargs to your tool function to absorb them.

Tool execution hangs
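The **kwargs fix for hallucinated parameters looks like this (the tool body is a placeholder):

```python
def check_weather(city: str, **kwargs) -> str:
    """Weather tool with **kwargs so extra, hallucinated arguments
    are silently ignored instead of raising TypeError."""
    return f"Sunny in {city}"

# An extra argument the LLM invented no longer crashes the call:
result = check_weather(city="Oslo", country="Norway")
```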
- Check that your tool function doesn’t block the event loop. Use async def and await for I/O-bound work.
- Set max_iterations to a lower value to prevent the agent from looping indefinitely: Agent(..., max_iterations=5)
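The async tool shape suggested above is sketched below, with a sleep standing in for real I/O; that Logicore accepts async tools registered this way is inferred from the tip, not verified:

```python
import asyncio

async def fetch_page(url: str) -> str:
    """I/O-bound tool written with async def / await so it
    doesn't block the event loop while waiting on the network."""
    await asyncio.sleep(0.01)  # stands in for a real network request
    return f"fetched {url}"

result = asyncio.run(fetch_page("https://example.com"))
```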
Provider connection errors
| Provider | Fix |
|---|---|
| Ollama | Run ollama serve to start the local server |
| OpenAI | Set the OPENAI_API_KEY environment variable |
| Azure | Set AZURE_ENDPOINT and AZURE_API_KEY (or AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY) |
| Gemini | Set GEMINI_API_KEY |
| Anthropic | Set ANTHROPIC_API_KEY |
| Groq | Set GROQ_API_KEY |