
Welcome to Logicore

Logicore is an enterprise-grade Python framework for building intelligent, autonomous AI agents that work seamlessly across any LLM provider — whether local (Ollama), cloud-based (OpenAI, Gemini, Azure, Groq), or hybrid. Write your agent logic once. Deploy it against any provider without changing a single line of code.

Quickstart

Get a working agent running in under 5 minutes

Installation

Install Logicore with your preferred provider

Agents

Explore the Agent, SmartAgent, and BasicAgent classes

Providers

Ollama, OpenAI, Gemini, Groq, Azure, Anthropic — all supported

What Logicore solves

| Challenge | Traditional approach | Logicore solution |
| --- | --- | --- |
| Provider lock-in | Choose OpenAI → rewrite for Gemini → rewrite for Ollama | Write once, swap providers with a single parameter |
| Tool complexity | Manual JSON schema generation, parameter validation, error handling | Auto-generate schemas from Python docstrings and type hints |
| Token management | Manual streaming, no reasoning extraction | Native streaming with hidden `<think>` reasoning token extraction |
| Memory systems | DIY vector DBs, RAG pipelines, session management | Built-in persistent memory with semantic search |
| Scheduling | External cron, Celery, AWS Lambda dependencies | Native agent-aware cron scheduler |
| Approval and safety | Custom approval workflows, tool restriction layers | Declarative approval system with per-tool policies |

Core capabilities

Multi-provider orchestration

Switch between Ollama, OpenAI, Gemini, Groq, Azure, and Anthropic without touching your agent logic.

Zero-config tool integration

Turn any Python function into an LLM-callable tool. Logicore parses type hints and docstrings into JSON schemas automatically.
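The idea can be sketched in plain Python. This is an illustration built only on the standard library's introspection tools, not Logicore's actual internals: `function_to_schema` and the type map are hypothetical names used here for demonstration.

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema type names (illustrative subset).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def function_to_schema(fn) -> dict:
    """Derive a JSON-schema-like tool description from a plain function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters belong in the schema
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": params, "required": list(params)},
    }

def check_weather(location: str) -> str:
    """Checks the current weather for a specific location."""
    return "72°F and sunny."

schema = function_to_schema(check_weather)
# schema["name"] is "check_weather"; location is typed "string"
```

The docstring becomes the tool description the model sees, which is why well-written docstrings matter more than usual in an agent codebase.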

Native streaming + reasoning

Real-time token streaming with extraction of hidden <think> reasoning tokens from local models like DeepSeek and Qwen.
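The reasoning-extraction idea can be shown without any model in the loop. The sketch below splits a response into its hidden `<think>…</think>` segment and the visible answer; it is a standalone regex illustration, not Logicore's streaming implementation.

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate hidden <think>…</think> reasoning from the visible answer."""
    reasoning = "".join(re.findall(r"<think>(.*?)</think>", text, re.DOTALL))
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return reasoning.strip(), answer

raw = "<think>The user asked about Seattle, so call the weather tool.</think>It is 72°F and sunny."
reasoning, answer = split_reasoning(raw)
# answer  -> "It is 72°F and sunny."
# reasoning stays available for logging or debugging, hidden from the user
```

In a streaming setting the same separation has to happen incrementally, token by token, which is the harder part a framework handles for you.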

Persistent memory and RAG

Long-term conversational memory and semantic vector search so agents never lose context across sessions.
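Semantic recall reduces to nearest-neighbor search over embedding vectors. The sketch below uses tiny hand-made 3-dimensional vectors as stand-ins; a real system would use model-generated embeddings and a vector store, and none of these names come from Logicore's API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy memory: each remembered fact maps to a placeholder embedding.
memory = {
    "User lives in Seattle":       [0.9, 0.1, 0.0],
    "User prefers metric units":   [0.1, 0.8, 0.2],
    "User's favorite food is pho": [0.0, 0.2, 0.9],
}

def recall(query_vec, k=1):
    """Return the k stored facts most similar to the query embedding."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, memory[m]), reverse=True)
    return ranked[:k]

top = recall([0.85, 0.15, 0.05])
# top -> ["User lives in Seattle"]
```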

Built-in cron scheduler

Agents can schedule, manage, and execute their own background tasks without external infrastructure.
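The core of an in-process scheduler is just an async loop that sleeps and fires. The sketch below is a minimal illustration of that loop using only `asyncio`, not Logicore's scheduler API.

```python
import asyncio

async def every(seconds: float, task, repeats: int):
    """Run `task` every `seconds`, `repeats` times, collecting results."""
    results = []
    for _ in range(repeats):
        await asyncio.sleep(seconds)
        results.append(task())
    return results

def heartbeat():
    # Stand-in for real agent work (checking a feed, summarizing inbox, etc.)
    return "checked inbox"

results = asyncio.run(every(0.01, heartbeat, repeats=3))
# results -> ["checked inbox", "checked inbox", "checked inbox"]
```

An agent-aware scheduler adds persistence, cron expressions, and the ability for the agent itself to register tasks, but the execution model is this same loop.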

Skills and pre-built capabilities

Load domain-specific skill packs instantly: web research, code review, and custom skills you build yourself.

MCP integration

Connect to Model Context Protocol servers for dynamic external tools without any custom code.

Telemetry and observability

Full execution logs, telemetry, and debugging hooks for production monitoring and audit trails.

Architecture overview

Logicore uses a layered design that keeps your business logic separate from provider-specific plumbing:

    Your Agent + Business Logic
                │
    Agent Orchestration
    (session management, tool execution loop, approval workflows, telemetry)
                │
    Unified Provider Gateway
    (streaming normalization, tool calling interface, error recovery)
                │
    Provider Backends
    Ollama | OpenAI | Gemini | Groq | Azure | Anthropic

Each layer has a single responsibility. Provider backends communicate through a unified interface, so swapping from local to cloud inference is a one-line change.
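The gateway pattern can be sketched with a structural `Protocol`: backends share one interface, so the layers above never touch provider-specific code. Class and function names here are hypothetical illustrations, not Logicore's real internals.

```python
from typing import Protocol

class Provider(Protocol):
    """The one interface every backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

class OllamaBackend:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"   # stand-in for a local inference call

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"   # stand-in for a cloud API call

BACKENDS = {"ollama": OllamaBackend, "openai": OpenAIBackend}

def make_provider(name: str) -> Provider:
    return BACKENDS[name]()   # swapping providers is a one-line change

reply = make_provider("ollama").complete("hello")
# reply -> "[ollama] hello"
```

Because the agent layer only ever sees `Provider`, moving from local to cloud inference changes the lookup key and nothing else.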

Quick example

quickstart.py
import asyncio
from logicore.agents.agent import Agent

def check_weather(location: str, **kwargs) -> str:
    """Checks the current weather for a specific location."""
    if "seattle" in location.lower():
        return "72°F and sunny."
    return "65°F and cloudy."

async def main():
    agent = Agent(
        llm="ollama",
        role="Weather Assistant",
        tools=[check_weather]
    )
    agent.set_auto_approve_all(True)

    response = await agent.chat("What's the weather in Seattle?")
    print(response)

asyncio.run(main())
Switch to a cloud provider by changing one argument:
agent = Agent(llm="openai", tools=[check_weather])   # OpenAI
agent = Agent(llm="gemini", tools=[check_weather])   # Google Gemini
agent = Agent(llm="groq",   tools=[check_weather])   # Groq fast inference

Why Logicore

  • Simpler API: less boilerplate, more intuitive defaults
  • Native real-time streaming: token-level feedback built in, not an afterthought
  • Zero vendor lock-in: write once, run on any provider (unlike LangChain, whose tooling favors OpenAI)
  • Type-safe tools: schemas auto-generated from Python type hints, not hand-written YAML
  • Lightweight: no complex role definitions; an agent is just your logic
  • Multi-provider native: no built-in Azure/OpenAI bias (unlike AutoGen)
  • Better memory: semantic search and a vector store out of the box
  • Open-source and local-first: not tied to OpenAI infrastructure
  • Full control: agents run in your process, not in someone else's cloud
  • Predictable cost: no per-API-call platform billing; bring any provider
  • Custom logic: agents execute your Python directly, not in a remote sandbox

Installation

pip install logicore
Install with provider extras:
pip install "logicore[ollama]"     # Local models
pip install "logicore[gemini]"     # Google Gemini
pip install "logicore[groq]"       # Groq fast inference
pip install "logicore[azure]"      # Azure OpenAI
pip install "logicore[anthropic]"  # Anthropic Claude
pip install "logicore[all]"        # All providers
New to Logicore? Start with the Quickstart to get a working agent in 5 minutes.
