# Welcome to Logicore
Logicore is an enterprise-grade Python framework for building intelligent, autonomous AI agents that work seamlessly across any LLM provider, whether local (Ollama), cloud-based (OpenAI, Gemini, Azure, Groq), or hybrid. Write your agent logic once and deploy it against any provider without changing a single line of code.

- **Quickstart**: Get a working agent running in under 5 minutes
- **Installation**: Install Logicore with your preferred provider
- **Agents**: Explore the `Agent`, `SmartAgent`, and `BasicAgent` classes
- **Providers**: Ollama, OpenAI, Gemini, Groq, and Azure are all supported
## What Logicore solves
| Challenge | Traditional approach | Logicore solution |
|---|---|---|
| Provider lock-in | Choose OpenAI → rewrite for Gemini → rewrite for Ollama | Write once, swap providers with a single parameter |
| Tool complexity | Manual JSON schema generation, parameter validation, error handling | Auto-generate schemas from Python docstrings and type hints |
| Token management | Manual streaming, no reasoning extraction | Native streaming with hidden `<think>` reasoning token extraction |
| Memory systems | DIY vector DBs, RAG pipelines, session management | Built-in persistent memory with semantic search |
| Scheduling | External cron, Celery, AWS Lambda dependencies | Native agent-aware cron scheduler |
| Approval and safety | Custom approval workflows, tool restriction layers | Declarative approval system with per-tool policies |
## Core capabilities
**Multi-provider orchestration**
Switch between Ollama, OpenAI, Gemini, Groq, Azure, and Anthropic without touching your agent logic.

**Zero-config tool integration**
Turn any Python function into an LLM-callable tool. Logicore parses type hints and docstrings into JSON schemas automatically.
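To make this concrete, here is a minimal, self-contained sketch of how type hints and a docstring can be turned into a tool schema. The helper `tool_schema` and the `get_weather` example are hypothetical illustrations, not Logicore's actual implementation:

```python
import inspect
import typing

# Map Python annotations to JSON-schema type names (illustrative subset).
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style JSON schema for a plain Python function."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props = {
        name: {"type": PY_TO_JSON.get(hints.get(name, str), "string")}
        for name in sig.parameters
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            # Parameters without defaults are required.
            "required": [
                n for n, p in sig.parameters.items()
                if p.default is inspect.Parameter.empty
            ],
        },
    }

def get_weather(city: str, units: str = "metric") -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"

schema = tool_schema(get_weather)
# schema["name"] == "get_weather"; only "city" is required.
```

The key design point is that the function itself stays a plain Python function; the schema is derived entirely from introspection.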
**Native streaming + reasoning**
Real-time token streaming with extraction of hidden `<think>` reasoning tokens from local models like DeepSeek and Qwen.

**Persistent memory and RAG**
Long-term conversational memory and semantic vector search so agents never lose context across sessions.
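The `<think>` reasoning-token extraction mentioned under native streaming can be sketched with a small regex-based filter. This is an illustration of the idea only, not Logicore's real streaming API:

```python
import re

# Models like DeepSeek-R1 emit hidden reasoning wrapped in <think>...</think>.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate hidden reasoning from the visible answer."""
    reasoning = "\n".join(m.strip() for m in THINK_RE.findall(text))
    visible = THINK_RE.sub("", text).strip()
    return reasoning, visible

raw = "<think>User wants a greeting.</think>Hello there!"
reasoning, answer = split_reasoning(raw)
# reasoning == "User wants a greeting."; answer == "Hello there!"
```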
**Built-in cron scheduler**
Agents can schedule, manage, and execute their own background tasks without external infrastructure.
**Skills and pre-built capabilities**
Load domain-specific skill packs instantly: web research, code review, and custom skills you build yourself.
**MCP integration**
Connect to Model Context Protocol servers for dynamic external tools without any custom code.

**Telemetry and observability**
Full execution logs, telemetry, and debugging hooks for production monitoring and audit trails.
## Architecture overview
Logicore uses a layered design that keeps your business logic separate from provider-specific plumbing.

## Quick example
`quickstart.py`
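The original `quickstart.py` listing is not reproduced here. As a self-contained sketch of the "write once, swap providers" pattern the quickstart demonstrates, the following uses a stand-in provider; every class and parameter name below is a hypothetical illustration, not Logicore's real API:

```python
from dataclasses import dataclass
from typing import Protocol

class Provider(Protocol):
    """Anything with a complete() method can back an agent."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class EchoProvider:
    """Stand-in for a real backend such as Ollama or OpenAI."""
    label: str

    def complete(self, prompt: str) -> str:
        return f"[{self.label}] {prompt.upper()}"

@dataclass
class Agent:
    provider: Provider  # swap this one field to change backends

    def ask(self, prompt: str) -> str:
        return self.provider.complete(prompt)

# Identical agent logic, two different "providers":
local = Agent(provider=EchoProvider("ollama"))
cloud = Agent(provider=EchoProvider("openai"))
print(local.ask("hello"))  # [ollama] HELLO
print(cloud.ask("hello"))  # [openai] HELLO
```

The agent never imports a provider SDK directly; it only depends on the `Provider` protocol, which is what makes the single-parameter swap possible.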
## Why Logicore

### vs. LangChain
- Simpler API: less boilerplate and a more intuitive surface
- Native streaming: built in, not an afterthought
- Zero vendor lock-in: provider-agnostic by design, where LangChain favors OpenAI
- Type-safe tools: schemas auto-generated from Python type hints, not manual YAML
### vs. AutoGen
- Lightweight: no complex role definitions; an agent is just your logic
- Real-time streaming: full token-level feedback
- Multi-provider native: AutoGen is biased toward Azure/OpenAI
- Better memory: semantic search and a vector DB built in
### vs. OpenAI Assistants API
- Open-source and local: not locked to OpenAI infrastructure
- Full control: agents run in your own process, not in the cloud
- Predictable cost: no per-API-call billing; use any provider
- Custom logic: agents execute Python directly, not in a remote sandbox