Logicore requires Python 3.10 or later. It is tested against Python 3.10, 3.11, 3.12, and 3.13.
python --version   # must be >= 3.10
Install the core package with pip:
pip install logicore
For a specific provider, install the matching extra:
pip install "logicore[gemini]"      # Google Gemini
pip install "logicore[groq]"        # Groq
pip install "logicore[ollama]"      # Local Ollama models
pip install "logicore[azure]"       # Azure OpenAI / AI Foundry
pip install "logicore[anthropic]"   # Anthropic Claude
pip install "logicore[all]"         # Every provider + all optional deps
Built-in tools that handle PDFs, Word, PowerPoint, and Excel files live behind the tools extra:
pip install "logicore[tools]"
This installs pypdf, python-docx, python-pptx, openpyxl, and playwright.
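To confirm the extra installed correctly, you can probe for the libraries without importing them. Note that some import names differ from the package names (python-docx imports as docx, python-pptx as pptx); check_tools_extra below is a hypothetical helper, not part of Logicore:

```python
import importlib.util

def check_tools_extra(mods=("pypdf", "docx", "pptx", "openpyxl")):
    # Map each import name to whether it is importable in this environment.
    return {m: importlib.util.find_spec(m) is not None for m in mods}

print(check_tools_extra())
```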
Create a short script and run it:
import asyncio
from logicore.providers.ollama_provider import OllamaProvider
from logicore.agents.agent import Agent

async def main():
    provider = OllamaProvider(model_name="qwen3.5:0.8b")
    agent = Agent(llm=provider, role="Greeter")
    response = await agent.chat("Say hello!")
    print(response)

asyncio.run(main())
If you see a greeting printed, the installation is working correctly.
Using a virtual environment is strongly recommended. Use venv or any compatible tool:
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install logicore
Logicore is not tied to any single provider; it supports multiple providers through a unified agent API. You can use:
  • Ollama — fully local, no API key required
  • Gemini — Google’s cloud models
  • Groq — fast cloud inference
  • Azure OpenAI / AI Foundry / AI Inference — enterprise Azure hosting
  • Anthropic — Claude models via Azure AI Foundry or direct
Your agent code does not change when you swap providers:
agent = Agent(llm="ollama")   # local
agent = Agent(llm="gemini")   # cloud
agent = Agent(llm="groq")     # fast inference
Use environment variables — never hardcode secrets in source code:
# Gemini
export GEMINI_API_KEY="your-key"

# OpenAI / Azure
export OPENAI_API_KEY="your-key"
export AZURE_API_KEY="your-key"
export AZURE_ENDPOINT="https://your-resource.openai.azure.com"

# Groq
export GROQ_API_KEY="your-key"

# Anthropic
export ANTHROPIC_API_KEY="your-key"
You can also load them from a .env file with python-dotenv, which is included as a core dependency.
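Whichever route you choose, it helps to fail fast at startup when a key is missing rather than deep inside a request. A minimal stdlib sketch (require_env is a hypothetical helper, not part of Logicore):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable or fail with a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Set {name} before constructing the provider")
    return value

os.environ["GEMINI_API_KEY"] = "demo-key"  # normally exported in your shell or .env
api_key = require_env("GEMINI_API_KEY")
```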
First ensure Ollama is installed and running, then pull the model you want:
ollama pull qwen3.5:0.8b
Then point the agent at it:
from logicore.agents.agent import Agent

agent = Agent(llm="ollama", model="qwen3.5:0.8b")
No API key is needed for local Ollama usage.
Pass a model_type argument to AzureProvider:
from logicore.providers.azure_provider import AzureProvider

provider = AzureProvider(
    model_name="gpt-4o-mini",
    endpoint="https://your-resource.openai.azure.com",
    api_key="your-key",
    model_type="openai"   # "openai" | "anthropic" | "inference"
)
If model_type is omitted, Logicore auto-detects it from the endpoint and deployment name.
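Detection of this kind typically keys off the endpoint host and the deployment name. The sketch below is purely illustrative of that idea, not Logicore's actual detection logic:

```python
# Illustrative only: guess a model_type from endpoint and deployment strings.
def guess_model_type(endpoint: str, deployment: str) -> str:
    if "claude" in deployment.lower():
        return "anthropic"          # Claude deployments on Azure AI Foundry
    if ".openai.azure.com" in endpoint:
        return "openai"             # classic Azure OpenAI resource host
    return "inference"              # fall back to the Azure AI Inference API
```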
Provider portability is one of Logicore's primary design goals. The provider is injected at construction time; all tool schemas, approval workflows, memory, and streaming callbacks remain unchanged.
# Exact same agent logic, different provider
agent_local = Agent(llm="ollama", tools=[my_tool])
agent_cloud = Agent(llm="gemini", tools=[my_tool])
Pass any Python function with a docstring and type hints to the tools parameter:
def get_stock_price(ticker: str, **kwargs) -> float:
    """Returns the current price for a stock ticker symbol."""
    ...

agent = Agent(llm="ollama", tools=[get_stock_price])
Logicore automatically parses the type hints into a JSON schema and the docstring into a description. The **kwargs absorbs any hallucinated parameters, which improves reliability with local models.
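To see what hint-to-schema conversion looks like, here is a rough stdlib sketch (build_schema is hypothetical; Logicore's real converter handles more types and richer docstrings):

```python
import inspect

def get_stock_price(ticker: str, **kwargs) -> float:
    """Returns the current price for a stock ticker symbol."""
    ...

def build_schema(fn):
    # Translate Python type hints into JSON Schema type names.
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    props = {
        name: {"type": type_map.get(p.annotation, "string")}
        for name, p in inspect.signature(fn).parameters.items()
        if p.kind is not inspect.Parameter.VAR_KEYWORD  # **kwargs is omitted
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": props},
    }
```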
Pass tools=True to use the full default registry:
agent = Agent(llm="ollama", tools=True)
Or load them after construction:
agent.load_default_tools()
Built-in categories include: filesystem, code execution, git, web search, document handling, Office/PDF, media search, and cron scheduling.
By default, each tool call awaits an approval callback. For development or trusted environments you can auto-approve:
agent.set_auto_approve_all(True)
For production, supply a custom callback to allow or deny individual tools:
async def approve_tool(session_id, tool_name, args):
    if tool_name == "delete_file":
        return False   # deny destructive operations
    return True

agent.set_callbacks(on_tool_approval=approve_tool)
Common causes:
  • Missing docstring — The tool description is extracted from the docstring. Without one, the LLM has no signal for when to call it.
  • Tool not registered — Confirm tools=[your_function] is in the constructor or that agent.add_tool(fn) was called.
  • Debug mode off — Enable debug=True to see what the LLM receives and returns.
agent = Agent(llm="ollama", tools=[my_tool], debug=True)
Set max_iterations in the constructor:
agent = Agent(llm="ollama", tools=[...], max_iterations=5)
The default is 20. Once the limit is reached, the agent returns its current best answer.
With memory=True, the agent stores and retrieves facts using a vector-backed store (LanceDB via AgentrySimpleMem):
agent = Agent(llm="ollama", memory=True)

await agent.chat("My name is Alice")
await agent.chat("What is my name?")  # → "Your name is Alice"
Memory entries are atomically extracted from conversation turns and embedded for semantic retrieval.
| Type       | Scope                               | Storage                                  |
|------------|-------------------------------------|------------------------------------------|
| Short-term | Active session conversation history | In-memory list of messages               |
| Long-term  | Facts extracted across sessions     | LanceDB vector table (persistent on disk) |
You can select the type via memory_type:
agent = Agent(llm="ollama", memory=True, memory_type="long_term")
Options are "default", "short_term", and "long_term".
AgentrySimpleMem applies a filtering and scoring pass before writing to the vector store. It skips:
  • Small talk and vague acknowledgements
  • Transient reminder chatter
  • Low-signal content
Only atomic, high-signal facts with a score above the threshold are persisted. This reduces memory contamination and keeps retrieval relevant.
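A filter of this shape can be sketched in a few lines. This is a toy illustration of the idea, not AgentrySimpleMem's actual scoring code:

```python
# Toy sketch of a pre-write memory filter: score a candidate fact,
# persist it only if the score clears a threshold.
LOW_SIGNAL = {"ok", "okay", "thanks", "sure", "hello", "hi"}

def score_fact(text: str) -> float:
    words = text.lower().split()
    if not words or all(w.strip(".,!?") in LOW_SIGNAL for w in words):
        return 0.0  # pure small talk carries no durable information
    # Crude proxy: longer declarative statements tend to be more specific.
    return min(1.0, len(words) / 10)

def should_persist(text: str, threshold: float = 0.3) -> bool:
    return score_fact(text) >= threshold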
To stream tokens, pass stream=True and an on_token callback:
def on_token(token):
    print(token, end="", flush=True)

response = await agent.chat(
    "Explain quantum computing.",
    callbacks={"on_token": on_token},
    stream=True
)
Logicore also extracts hidden <think> reasoning tokens from models that support them (such as qwen3.5 and DeepSeek variants), letting your UI display the model’s reasoning in real time.
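Extracting reasoning amounts to splitting <think>…</think> spans out of the model output. A minimal non-streaming sketch of the same idea (split_think is hypothetical; Logicore does this as tokens arrive):

```python
import re

def split_think(text: str) -> tuple[str, str]:
    """Separate <think>…</think> reasoning from the visible answer."""
    thoughts = "".join(re.findall(r"<think>(.*?)</think>", text, flags=re.S))
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.S).strip()
    return thoughts, answer
```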
The docs site is built with Mintlify. From the repository root:
npm install
npm run dev
The dev server starts at http://localhost:3000 by default.
