
What is Graphiti?

Graphiti is a Python framework for building and querying temporally-aware knowledge graphs, specifically designed for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it ideal for developing interactive, context-aware AI applications.

Real-Time Updates

Immediate integration of new data episodes without batch recomputation

Bi-Temporal Data Model

Explicit tracking of event occurrence and ingestion times for accurate point-in-time queries

Hybrid Retrieval

Combines semantic embeddings, keyword (BM25), and graph traversal for low-latency queries

Custom Entity Definitions

Flexible ontology creation with developer-defined entities through Pydantic models

Key Use Cases

Graphiti enables powerful capabilities for AI applications:
  • Agent Memory: Give AI agents persistent, queryable memory of past interactions and learned facts
  • Dynamic Data Integration: Continuously ingest and reconcile user interactions and business data
  • State-Based Reasoning: Facilitate task automation and decision-making for agents
  • Complex Data Querying: Query evolving data with semantic, keyword, and graph-based search methods

How Knowledge Graphs Work

A knowledge graph is a network of interconnected facts. Each fact is a “triplet” represented by:
  • Two entities (nodes) - for example: “Kendra”, “Adidas shoes”
  • A relationship (edge) connecting them - for example: “loves”
What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
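The triplet structure described above can be sketched in code. This is a minimal illustration, not Graphiti's internal representation; the class and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    """One fact in the graph: two entities joined by a relationship."""
    subject: str    # entity node, e.g. "Kendra"
    predicate: str  # relationship edge, e.g. "LOVES"
    obj: str        # entity node, e.g. "Adidas shoes"

fact = Triplet("Kendra", "LOVES", "Adidas shoes")
```

A knowledge graph is then simply a collection of such triplets, with shared entities linking facts together.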

Why Choose Graphiti?

Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges:
Real-Time Updates: Add new data episodes immediately without batch recomputation. Graphiti processes each episode incrementally, extracting entities and relationships in real-time.
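The incremental idea can be sketched as follows. This is an illustrative toy, not Graphiti's pipeline (which uses an LLM for extraction); the point is that each episode's edges merge into the existing graph without reprocessing earlier episodes:

```python
# Sketch of incremental ingestion: each episode contributes edges that are
# merged into the existing graph; prior episodes are never recomputed.
from collections import defaultdict

graph = defaultdict(set)  # node -> set of (relationship, neighbor) pairs

def add_episode(graph, triplets):
    """Merge one episode's extracted triplets into the graph in place."""
    for subject, relationship, obj in triplets:
        graph[subject].add((relationship, obj))

add_episode(graph, [("Kendra", "LOVES", "Adidas shoes")])
add_episode(graph, [("Kendra", "WORKS_AT", "Acme")])  # episode 1 untouched
```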
Bi-Temporal Data Model: Track both when events occurred and when they were ingested into the system. This enables accurate point-in-time queries and contradiction handling through temporal edge invalidation.
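The bi-temporal model and temporal edge invalidation can be sketched as below. The field and function names are illustrative, not Graphiti's API; the key idea is that a contradicted fact is invalidated with a timestamp rather than deleted, so history remains queryable:

```python
# Sketch of bi-temporal edges: valid_at/invalid_at track when the fact was
# true in the world; created_at tracks when the system learned it.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Edge:
    fact: str
    valid_at: datetime                      # when the event became true
    created_at: datetime                    # when it was ingested
    invalid_at: Optional[datetime] = None   # set when a later fact contradicts it

def as_of(edges, t):
    """Point-in-time query: return facts that were valid at time t."""
    return [e for e in edges
            if e.valid_at <= t and (e.invalid_at is None or e.invalid_at > t)]

utc = timezone.utc
e1 = Edge("Kendra loves Adidas shoes",
          valid_at=datetime(2023, 1, 1, tzinfo=utc),
          created_at=datetime(2023, 1, 2, tzinfo=utc))
# A later episode contradicts e1: invalidate it instead of deleting it.
e1.invalid_at = datetime(2024, 6, 1, tzinfo=utc)
e2 = Edge("Kendra loves Puma shoes",
          valid_at=datetime(2024, 6, 1, tzinfo=utc),
          created_at=datetime(2024, 6, 1, tzinfo=utc))
edges = [e1, e2]
```

Querying `as_of` in mid-2023 returns the Adidas fact; querying in mid-2024 returns only the Puma fact, while the invalidated edge remains in the graph for historical queries.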
Hybrid Retrieval: Achieve low-latency queries without relying on LLM summarization. Graphiti combines semantic embeddings, BM25 keyword search, and graph traversal for typically sub-second latency.
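One common way to fuse several result lists is reciprocal rank fusion (RRF), sketched below. This is an illustration of the fusion step, not necessarily Graphiti's exact reranker:

```python
# Sketch of hybrid retrieval fusion: rankings from semantic, keyword (BM25),
# and graph-traversal searches are combined with reciprocal rank fusion.
from collections import defaultdict

def rrf(ranked_lists, k=60):
    """Items ranked highly in any list get a large combined score."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking):
            scores[item] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

semantic_hits = ["fact_a", "fact_b", "fact_c"]
keyword_hits  = ["fact_b", "fact_d"]
graph_hits    = ["fact_b", "fact_a"]
fused = rrf([semantic_hits, keyword_hits, graph_hits])
```

Because no LLM call sits in the query path, this kind of fusion runs in milliseconds, which is what makes sub-second retrieval feasible.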
Custom Entity Definitions: Define your own entity types and relationships using straightforward Pydantic models. Create flexible ontologies tailored to your specific domain.
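A custom entity type is an ordinary Pydantic model. The domain and field names below are illustrative (a hypothetical retail ontology), not part of Graphiti's API:

```python
# Sketch of a custom entity type defined with Pydantic. Graphiti uses such
# models to guide extraction; the fields here are an illustrative example.
from pydantic import BaseModel, Field

class Product(BaseModel):
    """A product entity for a retail-domain ontology."""
    name: str = Field(description="Product name, e.g. 'Adidas shoes'")
    category: str = Field(description="Product category, e.g. 'footwear'")

shoe = Product(name="Adidas shoes", category="footwear")
```

The `description` on each field doubles as guidance for the extraction step, so well-written descriptions tend to improve extraction quality.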
Scalability: Efficiently manage large datasets with parallel processing, suitable for enterprise environments. Works with multiple graph database backends including Neo4j, FalkorDB, Kuzu, and Amazon Neptune.

Graphiti vs. GraphRAG

Graphiti is optimized for dynamic, continuously updating data rather than static document summarization:
| Aspect | GraphRAG | Graphiti |
| --- | --- | --- |
| Primary Use | Static document summarization | Dynamic data management |
| Data Handling | Batch-oriented processing | Continuous, incremental updates |
| Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities |
| Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search |
| Adaptability | Low | High |
| Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking |
| Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation |
| Query Latency | Seconds to tens of seconds | Typically sub-second latency |
| Custom Entity Types | No | Yes, customizable |
| Scalability | Moderate | High, optimized for large datasets |

Architecture

Graphiti uses a pluggable driver architecture, making the core framework backend-agnostic:
  • Graph Databases: Neo4j, FalkorDB, Kuzu, Amazon Neptune
  • LLM Providers: OpenAI, Azure OpenAI, Anthropic, Google Gemini, Groq, Ollama
  • Embeddings: OpenAI, Azure OpenAI, Voyage AI, Google Gemini, local models
  • Search: Hybrid semantic + keyword (BM25) + graph traversal
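The pluggable-driver pattern can be sketched as follows. The interface shown is illustrative, not Graphiti's actual driver API; the point is that the core framework depends only on a small abstraction, and each backend supplies its own implementation:

```python
# Sketch of a pluggable driver architecture: core logic depends on an
# abstract interface; Neo4j, FalkorDB, etc. would each supply a driver.
from abc import ABC, abstractmethod

class GraphDriver(ABC):
    """Minimal backend-agnostic interface (illustrative only)."""
    @abstractmethod
    def execute_query(self, query: str, **params):
        ...

class InMemoryDriver(GraphDriver):
    """Toy backend standing in for a real database driver."""
    def __init__(self):
        self.queries = []
    def execute_query(self, query, **params):
        self.queries.append((query, params))
        return []

driver = InMemoryDriver()
driver.execute_query("MATCH (n) RETURN n LIMIT $n", n=1)
```

Swapping backends then means constructing a different driver, with no change to the code that issues queries.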

Graphiti and Zep

Graphiti powers the core of Zep's context engineering platform for AI agents. Using Graphiti, Zep demonstrates state-of-the-art results in agent memory. Read the paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.

Zep vs Graphiti

| Aspect | Zep | Graphiti |
| --- | --- | --- |
| What they are | Fully managed platform for context engineering and AI memory | Open-source graph framework |
| User & conversation management | Built-in users, threads, and message storage | Build your own |
| Retrieval & performance | Pre-configured, production-ready retrieval with sub-200ms performance at scale | Custom implementation required; performance depends on your setup |
| Developer tools | Dashboard with graph visualization, debug logs, API logs; SDKs for Python, TypeScript, and Go | Build your own tools |
| Enterprise features | SLAs, support, security guarantees | Self-managed |
| Deployment | Fully managed or in your cloud | Self-hosted only |
Choose Zep if you want a turnkey, enterprise-grade platform with security, performance, and support baked in. Choose Graphiti if you want a flexible OSS core and you’re comfortable building/operating the surrounding system.

Next Steps

Quickstart

Get up and running with Graphiti in minutes

Installation

Install Graphiti with your preferred package manager and backend
