What Python version does Logicore require?
How do I install Logicore?
How do I install built-in document and office tools?
Install the `tools` extra, which pulls in pypdf, python-docx, python-pptx, openpyxl, and playwright.
How do I verify the installation?
Do I need a virtual environment?
It is recommended. Create one with `venv` or any compatible tool.
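A standard setup with the stdlib `venv` module looks like this:

```shell
# Create and activate a virtual environment before installing
python -m venv .venv
source .venv/bin/activate    # on Windows: .venv\Scripts\activate
```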
Do I need a specific model provider to use Logicore?
- Ollama — fully local, no API key required
- Gemini — Google’s cloud models
- Groq — fast cloud inference
- Azure OpenAI / AI Foundry / AI Inference — enterprise Azure hosting
- Anthropic — Claude models via Azure AI Foundry or direct
Where do I set my API keys?
Put them in a `.env` file; Logicore loads it with python-dotenv, which is included as a core dependency.
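For example, a `.env` file at the project root (the variable names below are illustrative; use the key names your chosen provider expects):

```
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
AZURE_OPENAI_API_KEY=your-azure-key
```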
How do I use a local Ollama model?
How do I use Azure AI Foundry or Azure AI Inference endpoints?
Pass the `model_type` argument to `AzureProvider`. If `model_type` is omitted, Logicore auto-detects it from the endpoint and deployment name.
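As a sketch of the shape this takes (only `AzureProvider` and `model_type` appear in this FAQ; every other argument name below is an assumption, so check the API reference before copying):

```python
# Hypothetical argument names — only model_type is confirmed by this FAQ.
provider = AzureProvider(
    endpoint="https://my-resource.services.ai.azure.com",
    deployment="gpt-4o",
    model_type="azure_openai",  # omit to let Logicore auto-detect from the endpoint
)
```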
Can I switch providers without rewriting my agent logic?
How do I register a custom tool?
Pass your function via the `tools` parameter. Have it accept `**kwargs`: this absorbs any hallucinated parameters, which improves reliability with local models.
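A minimal sketch of such a tool (the weather function and the `Agent` class name are illustrative; the FAQ itself only confirms the `tools` parameter and `agent.add_tool(fn)`):

```python
def get_weather(city: str, **kwargs) -> str:
    """Return the current weather for a city."""
    # **kwargs silently absorbs any extra arguments the model hallucinates,
    # so a call with a stray "units" parameter still succeeds.
    return f"Sunny in {city}"

# Registration (class name assumed):
#   agent = Agent(tools=[get_weather])
# or, after construction:
#   agent.add_tool(get_weather)

print(get_weather("Oslo", units="metric"))  # → Sunny in Oslo
```

The docstring matters: it is what the LLM sees as the tool's description.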
How do I enable built-in tools?
Pass `tools=True` to use the full default registry.
How do tools get approved before execution?
Why is my agent not calling the tool I registered?
- Missing docstring — The tool description is extracted from the docstring. Without one, the LLM has no signal for when to call it.
- Tool not registered — Confirm `tools=[your_function]` is in the constructor or that `agent.add_tool(fn)` was called.
- Debug mode off — Enable `debug=True` to see what the LLM receives and returns.
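The first two failure modes can be caught before registration with a small guard (an illustrative helper, not part of Logicore):

```python
def ensure_registrable(fn):
    """Raise early if a tool function would be invisible to the LLM."""
    if not callable(fn):
        raise TypeError(f"{fn!r} is not callable")
    if not (fn.__doc__ and fn.__doc__.strip()):
        raise ValueError(f"{fn.__name__} has no docstring; the LLM gets no description")
    return fn

def search_notes(query: str, **kwargs) -> list:
    """Search the user's notes for a query string."""
    return []

ensure_registrable(search_notes)  # passes; an undocumented function would raise
```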
How do I prevent infinite tool-call loops?
Set `max_iterations` in the constructor.
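The idea behind the cap, sketched outside Logicore (this shows the pattern, not Logicore's internals):

```python
def run_agent_loop(step, max_iterations=5):
    """Call `step` until it returns a final answer or the budget runs out."""
    for i in range(max_iterations):
        result = step(i)
        if result is not None:  # a final answer instead of another tool call
            return result
    return "stopped: max_iterations reached"

# A step that keeps requesting tools forever is cut off after 5 iterations:
print(run_agent_loop(lambda i: None))  # → stopped: max_iterations reached
```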
Does Logicore support persistent memory across sessions?
Yes. Pass `memory=True` and the agent stores and retrieves facts using a vector-backed store (LanceDB via AgentrySimpleMem).
What is the difference between short-term and long-term memory?
| Type | Scope | Storage |
|---|---|---|
| Short-term | Active session conversation history | In-memory list of messages |
| Long-term | Facts extracted across sessions | LanceDB vector table (persistent on disk) |
Select the behavior with `memory_type`: `"default"`, `"short_term"`, or `"long_term"`.
How does Logicore decide what to store in long-term memory?
AgentrySimpleMem applies a filtering and scoring pass before writing to the vector store. It skips:
- Small talk and vague acknowledgements
- Transient reminder chatter
- Low-signal content
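A toy version of such a pre-write filter (illustrative only; these heuristics are assumptions, not AgentrySimpleMem's actual scoring):

```python
SMALL_TALK = {"ok", "thanks", "thank you", "sure", "sounds good", "got it"}

def worth_storing(text: str, min_words: int = 4) -> bool:
    """Return True if a message looks informative enough to persist."""
    cleaned = text.strip().lower().rstrip("!.")
    if cleaned in SMALL_TALK:        # small talk and acknowledgements
        return False
    if len(cleaned.split()) < min_words:  # low-signal content
        return False
    return True

print(worth_storing("Thanks!"))                         # → False
print(worth_storing("My deploy target is eu-west-1."))  # → True
```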
Does Logicore support streaming?
Yes. Pass `stream=True` and an `on_token` callback. Streaming also forwards `<think>` reasoning tokens from models that support them (such as qwen3.5 and DeepSeek variants), letting your UI display the model’s reasoning in real time.
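One way a UI could route those reasoning tokens, independent of Logicore's streaming API (an illustrative sketch):

```python
def split_stream(tokens):
    """Split a token stream into (reasoning, answer) text using <think> markers."""
    reasoning, answer = [], []
    in_think = False
    for tok in tokens:
        if tok == "<think>":
            in_think = True
        elif tok == "</think>":
            in_think = False
        else:
            (reasoning if in_think else answer).append(tok)
    return "".join(reasoning), "".join(answer)

thoughts, reply = split_stream(["<think>", "2+2", "=4", "</think>", "The answer ", "is 4."])
print(thoughts)  # → 2+2=4
print(reply)     # → The answer is 4.
```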
How do I run the documentation site locally?
The site is served at http://localhost:3000 by default.