How Fenic Works with Agent Frameworks
The Fenic Approach
| Without Fenic | With Fenic |
|---|---|
| Agent summarizes conversation → tokens consumed | Fenic summarizes → agent gets result; less context bloat |
| Agent extracts facts → tokens consumed | Fenic extracts → agent gets structured data |
| Agent searches, filters, aggregates → multiple tool calls | Fenic pre-computes → agent gets precise rows |
| Context ops compete with reasoning | Less context bloat → agents stay focused on reasoning |
Integration Methods
Fenic provides two ways to integrate with agent frameworks:
1. MCP Tools (Recommended)
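At its core, an MCP tool is a name, a description, a JSON-Schema parameter spec, and a handler; the server advertises the specs and dispatches calls. The plain-Python registry below sketches only that anatomy — it is not Fenic's MCP API (see the MCP Server page for the real tooling), and the `search_tickets` tool is illustrative.

```python
# Sketch: the minimal anatomy of an MCP tool -- a spec the client can
# list, plus a handler the server dispatches to. Real servers use an
# MCP SDK (or Fenic's own MCP tooling, per the MCP Server page); this
# registry is illustrative only.

TOOLS: dict[str, dict] = {}

def register_tool(name: str, description: str, params: dict, handler) -> None:
    TOOLS[name] = {"description": description, "params": params, "handler": handler}

def call_tool(name: str, args: dict):
    return TOOLS[name]["handler"](**args)

register_tool(
    "search_tickets",
    "Return up to `limit` ticket summaries for a topic.",
    {"topic": {"type": "string"}, "limit": {"type": "integer"}},
    lambda topic, limit=2: [f"summary of {topic} ticket"][:limit],
)
```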
Expose Fenic context as MCP tools that any framework can call.
2. Direct Python Functions
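For the direct style, a context tool is just a function that runs a pre-built query and returns a small, capped result. In the sketch below an in-memory list stands in for a Fenic table; the query chain in the docstring is an assumed shape of the Fenic API, so verify it against the API reference.

```python
# Sketch: a bounded context tool backed by a pre-built Fenic table.
# FAKE_TABLE stands in for the real table; in production the marked
# query would run against Fenic instead (assumed API, see docstring).

FAKE_TABLE = [
    {"ticket_id": 1, "topic": "billing", "summary": "Refund issued for duplicate charge."},
    {"ticket_id": 2, "topic": "login", "summary": "Password reset link expired."},
    {"ticket_id": 3, "topic": "billing", "summary": "Invoice totals clarified."},
]

def get_ticket_context(topic: str, result_limit: int = 2) -> list[dict]:
    """Return at most `result_limit` pre-summarized rows for a topic.

    Rough Fenic equivalent (assumed API, verify against the docs):
        session.table("tickets").filter(fc.col("topic") == topic)
               .select("ticket_id", "summary").limit(result_limit)
    """
    rows = [r for r in FAKE_TABLE if r["topic"] == topic]
    return [{"ticket_id": r["ticket_id"], "summary": r["summary"]}
            for r in rows[:result_limit]]
```

The agent sees only the capped, pre-summarized rows, so none of the raw ticket text enters its context window.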
Call Fenic directly from your agent code.
Framework-Specific Examples
LangGraph
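A hedged sketch of the wiring: keep the Fenic-backed query in a plain function, then wrap it with LangChain's `@tool` decorator (shown in a comment so the snippet stays dependency-free) so LangGraph agents can call it. The tiny in-memory index stands in for a Fenic semantic search.

```python
# Sketch: a LangGraph-callable tool whose body delegates to Fenic.
# In real code you would decorate the function, e.g.:
#
#   from langchain_core.tools import tool
#
#   @tool
#   def search_docs(query: str) -> str:
#       ...
#
# The body below uses an in-memory index in place of a Fenic semantic
# search over a pre-computed embeddings table (assumed, not verified).

DOC_SNIPPETS = {
    "refunds": "Refunds are processed within 5 business days.",
    "login": "Reset passwords from the account settings page.",
}

def search_docs(query: str) -> str:
    """Return the single most relevant pre-computed snippet, or a fallback."""
    for topic, snippet in DOC_SNIPPETS.items():
        if topic in query.lower():
            return snippet
    return "No matching documentation found."
```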
Use Fenic to build context, then expose it as LangGraph tools.
PydanticAI
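Tool results can come back as structured, validated objects. The sketch below uses a stdlib dataclass where real code would use a `pydantic.BaseModel` returned from a PydanticAI tool; the field names and the stub lookup are illustrative, not Fenic's schema.

```python
from dataclasses import dataclass

# Sketch: typed tool output. With PydanticAI you would declare this as
# a pydantic.BaseModel and return it from an agent tool; a dataclass
# keeps the sketch dependency-free. Field names are illustrative only.

@dataclass
class CustomerFact:
    customer_id: int
    plan: str
    open_tickets: int

def lookup_customer(customer_id: int) -> CustomerFact:
    # Real version: a pre-computed Fenic table keyed by customer_id
    # (assumed API). Stubbed here with a dict.
    stub = {7: ("pro", 2)}
    plan, open_tickets = stub.get(customer_id, ("free", 0))
    return CustomerFact(customer_id=customer_id, plan=plan, open_tickets=open_tickets)
```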
Fenic’s typed DataFrames work naturally with PydanticAI’s type system.
CrewAI
Expose Fenic context as CrewAI tools.
Custom Frameworks
Any framework that calls Python functions can use Fenic.
Real-World Pattern: Memory & Retrieval
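One possible shape: materialize a "memory pack" table offline (with Fenic doing the summarizing and extracting), then give agents a single narrow recall function over it. Everything below is stubbed in plain Python to show the retrieval contract; the table contents and field names are invented.

```python
# Sketch: a curated memory pack the agent queries instead of raw
# history. The pack would be materialized offline by Fenic (summarize
# + extract); here it is a plain list so the retrieval shape is clear.

MEMORY_PACK = [
    {"topic": "preferences", "ts": 3, "note": "Prefers email over phone."},
    {"topic": "preferences", "ts": 1, "note": "Asked for weekly digests."},
    {"topic": "billing", "ts": 2, "note": "On annual plan since January."},
]

def recall(topic: str, k: int = 2) -> list[str]:
    """Return up to k most recent notes for a topic (newest first)."""
    hits = sorted((r for r in MEMORY_PACK if r["topic"] == topic),
                  key=lambda r: r["ts"], reverse=True)
    return [r["note"] for r in hits[:k]]
```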
Build curated memory packs and retrieval systems that agents can query.
Context Operations (Inference Offloaded)
These operations happen outside your agent’s context window, reducing bloat.
Summarization
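A stub of the offload pattern: summaries are pre-computed per conversation, and the agent only ever reads the short result. The one-line "summarizer" here is deterministic for illustration; in Fenic this would be a semantic operation over the transcript column (the exact operator name is an assumption).

```python
# Sketch: offload summarization so the agent receives a short summary
# instead of the full transcript. The "summarizer" is a deterministic
# stub; in Fenic it would be a prompt-driven semantic operation over
# the transcript column (assumed operator, verify against the docs).

def summarize(transcript: list[str], max_turns: int = 2) -> str:
    """Stub: keep only the last few turns as the 'summary'."""
    return " | ".join(transcript[-max_turns:])

def build_context(transcripts: dict[str, list[str]]) -> dict[str, str]:
    """Pre-compute a summary per conversation, outside the agent loop."""
    return {cid: summarize(t) for cid, t in transcripts.items()}
```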
Extraction
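Extraction means schema in, typed rows out. The sketch swaps the LLM step for a trivial rule so it runs anywhere; with Fenic you would pass a Pydantic schema to the semantic extraction operator instead (a stdlib dataclass stands in for the schema here, and the fields are illustrative).

```python
from dataclasses import dataclass
from typing import Optional

# Sketch: schema-driven extraction. In Fenic you would pass a Pydantic
# model to the semantic extract operator and get typed rows back; the
# keyword rule below is a deterministic stand-in for the LLM step.

@dataclass
class OrderFact:
    order_id: Optional[str]
    wants_refund: bool

def extract_order_fact(message: str) -> OrderFact:
    words = message.split()
    order_id = next((w for w in words if w.startswith("#")), None)
    return OrderFact(order_id=order_id, wants_refund="refund" in message.lower())
```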
Classification
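Classification maps free text into a fixed label set, pre-computed per row so the agent never touches the raw text. The keyword rule below is a deterministic stand-in for the semantic classifier; the labels are illustrative.

```python
# Sketch: classification into a fixed label set, pre-computed per row.
# Fenic's semantic classifier takes a column plus the allowed labels
# (assumed signature); the keyword rule is a deterministic stand-in.

LABELS = ("billing", "technical", "other")

def classify(message: str) -> str:
    text = message.lower()
    if any(w in text for w in ("invoice", "charge", "refund")):
        return "billing"
    if any(w in text for w in ("error", "crash", "bug")):
        return "technical"
    return "other"
```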
Memory Patterns
Blocks & Episodes
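A minimal shape for this pattern: a small always-loaded profile block plus an append-only episode log, with only the most recent episodes surfaced inside the block. All names are illustrative; in practice the block and episodes would live in Fenic tables.

```python
# Sketch: a profile block plus an episode timeline. The block is what
# gets loaded into the agent's context; episodes accumulate elsewhere
# and only the most recent few are surfaced in the block.

def append_episode(episodes: list[str], event: str) -> list[str]:
    """Append one event to the episode log (returns a new list)."""
    return episodes + [event]

def profile_block(profile: dict, episodes: list[str], recent: int = 3) -> dict:
    """The context block an agent loads: profile + recent timeline."""
    return {**profile, "timeline": episodes[-recent:]}
```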
Maintain a profile block with a recent timeline.
Decaying Resolution
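The mechanic, sketched self-contained: memories newer than a cutoff stay verbatim, while older ones collapse into one entry per coarse time bucket. The window size and the text-joining merge rule are illustrative choices; Fenic would do the merge with a summarizing semantic operation over each bucket.

```python
from collections import defaultdict

# Sketch: decaying resolution. Memories newer than `cutoff` are kept
# as-is; older ones are merged into one entry per `window`-sized time
# bucket. In Fenic this would be a group-by over a time-bucket column
# followed by a summarizing semantic op; the merge here just joins text.

def compress(memories: list[tuple[int, str]], cutoff: int, window: int) -> list[tuple[int, str]]:
    """memories: (timestamp, text) pairs; returns a compressed timeline."""
    recent = [(ts, txt) for ts, txt in memories if ts >= cutoff]
    buckets: dict[int, list[str]] = defaultdict(list)
    for ts, txt in memories:
        if ts < cutoff:
            buckets[ts // window].append(txt)
    compressed = [(b * window, "; ".join(txts)) for b, txts in sorted(buckets.items())]
    return compressed + sorted(recent)
```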
Compress older memories into coarser time windows.
Best Practices
Design Principles
- Build context once, use everywhere: Create Fenic context tables that multiple agents can query
- Offload inference: Let Fenic handle extraction, embedding, summarization outside agent loops
- Bounded surfaces: Expose precise, capped tool responses to prevent context bloat
- Type safety: Use Pydantic schemas for extraction to ensure agents get structured data
Performance
- Cache expensive semantic operations in tables
- Use `result_limit` to cap tool responses
- Index frequently queried columns
- Pre-compute embeddings rather than computing on-demand
Agent Behavior
- Provide clear tool descriptions to guide agent behavior
- Design tools for specific use cases (not generic database access)
- Use table descriptions to explain data semantics
- Start agents with schema exploration tools before querying
Example: Customer Support Agent
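A compressed, framework-neutral sketch of the flow: route the incoming message, pull a capped slice of pre-computed context for that label, and hand the agent a small structured payload. Every step is a stub standing in for a Fenic table or semantic operation, and the knowledge-base contents are invented.

```python
# Sketch: end-to-end support-agent context assembly, framework-neutral.
# Each step is a stub standing in for a Fenic table or semantic op.

KB = {
    "billing": ["Refunds take 5 business days.", "Invoices are emailed monthly."],
    "technical": ["Clear the cache, then retry.", "Status page lists outages."],
}

def route_message(message: str) -> str:
    # Stand-in for semantic classification of the incoming message.
    return "billing" if "refund" in message.lower() else "technical"

def build_agent_payload(message: str, result_limit: int = 1) -> dict:
    """Small, bounded payload the agent reasons over."""
    label = route_message(message)
    return {
        "label": label,
        "context": KB.get(label, [])[:result_limit],
    }
```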
A complete example combines Fenic with any agent framework.
Framework Comparison
| Framework | Integration Method | Best For |
|---|---|---|
| LangGraph | MCP + LangChain tools | Complex multi-agent workflows |
| PydanticAI | Direct Python + MCP | Type-safe agent development |
| CrewAI | Custom tools + MCP | Multi-agent collaboration |
| AutoGen | Function calling + MCP | Conversational agents |
| Custom | Direct Python calls | Full control over agent logic |
Whichever framework you choose, the benefits are the same:
- Fenic’s inference offloading (less context bloat)
- Pre-computed context (faster agent runs)
- Typed, bounded tools (more reliable behavior)
Next Steps
MCP Server
Learn how to build and serve MCP tools
Semantic Operations
Explore extraction, embedding, and classification
Examples
See complete agent projects using Fenic
LLM Providers
Configure OpenAI, Anthropic, Google, and more
