Requirements
Before installing Graphiti, ensure you have:
Python 3.10 or higher
A graph database (one of the following):
Neo4j 5.26+
FalkorDB 1.1.2+
Kuzu 0.11.2+
Amazon Neptune Database Cluster or Neptune Analytics Graph
An LLM provider API key (OpenAI recommended)
Graphiti works best with LLM services that support Structured Output (such as OpenAI and Gemini). Using other services may result in incorrect output schemas and ingestion failures, particularly with smaller models.
Basic Installation
Install Graphiti using your preferred package manager:
```bash
pip install graphiti-core
```
The basic installation includes:
Neo4j driver support (default)
OpenAI LLM and embedding support
Core dependencies (Pydantic, tenacity, numpy)
Graph Database Backends
Graphiti supports multiple graph database backends through optional dependencies.
Neo4j
FalkorDB
Kuzu
Amazon Neptune
Neo4j is the default backend and is included in the basic installation.
Setup
Install Neo4j Desktop
Download and install Neo4j Desktop for a user-friendly interface to manage Neo4j instances.
Create a Database
In Neo4j Desktop:
Create a new project
Add a local DBMS
Set a password
Start the DBMS
Configure Connection
```python
from graphiti_core import Graphiti

graphiti = Graphiti(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="your_password",
)
```
Neo4j Desktop provides a browser interface at http://localhost:7474 for visualizing your knowledge graph.
FalkorDB is a lightweight, Redis-compatible graph database that’s easy to set up with Docker.
Installation
```bash
pip install graphiti-core[falkordb]
# or
uv add graphiti-core[falkordb]
```
Setup with Docker
```bash
docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:latest
```
```python
from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver

driver = FalkorDriver(
    host="localhost",
    port=6379,
    username=None,  # Optional
    password=None,  # Optional
    database="my_graph",  # Optional, defaults to "default_db"
)
graphiti = Graphiti(graph_driver=driver)
```
FalkorDB provides a browser interface at http://localhost:3000 when using the Docker image.
Kuzu is an embedded graph database that doesn’t require a separate server.
Installation
```bash
pip install graphiti-core[kuzu]
# or
uv add graphiti-core[kuzu]
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.driver.kuzu_driver import KuzuDriver

driver = KuzuDriver(db="/tmp/graphiti.kuzu")
graphiti = Graphiti(graph_driver=driver)
```
Kuzu is ideal for development and testing since it requires no server setup.
Amazon Neptune is a fully managed graph database service on AWS.
Installation
```bash
pip install graphiti-core[neptune]
# or
uv add graphiti-core[neptune]
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.driver.neptune_driver import NeptuneDriver

driver = NeptuneDriver(
    host="neptune-db://<cluster-endpoint>",  # For Neptune Database
    # OR
    # host="neptune-graph://<graph-identifier>",  # For Neptune Analytics
    aoss_host="<opensearch-serverless-host>",
    port=8182,  # Optional, defaults to 8182
    aoss_port=443,  # Optional, defaults to 443
)
graphiti = Graphiti(graph_driver=driver)
```
Amazon Neptune requires an OpenSearch Serverless collection as the full-text search backend.
LLM Providers
Graphiti supports multiple LLM providers. OpenAI is the default and recommended option.
OpenAI
Azure OpenAI
Anthropic
Google Gemini
Ollama (Local)
Groq
OpenAI is included in the basic installation and is the default LLM provider.
Setup
```bash
export OPENAI_API_KEY=your_openai_api_key
```
```python
from graphiti_core import Graphiti

# Uses OpenAI by default
graphiti = Graphiti(uri, user, password)
```
Graphiti defaults to using gpt-4o-mini for inference and text-embedding-3-small for embeddings.
Use Azure’s OpenAI service for enterprise deployments.
Setup
```python
from openai import AsyncOpenAI

from graphiti_core import Graphiti
from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.embedder.azure_openai import AzureOpenAIEmbedderClient

# Initialize Azure OpenAI client
azure_client = AsyncOpenAI(
    base_url="https://your-resource.openai.azure.com/openai/v1/",
    api_key="your-api-key",
)

# Create clients
llm_client = AzureOpenAILLMClient(
    azure_client=azure_client,
    config=LLMConfig(
        model="gpt-4o-mini",  # Your deployment name
        small_model="gpt-4o-mini",
    ),
)
embedder_client = AzureOpenAIEmbedderClient(
    azure_client=azure_client,
    model="text-embedding-3-small",  # Your deployment name
)

# Initialize Graphiti
graphiti = Graphiti(
    uri, user, password,
    llm_client=llm_client,
    embedder=embedder_client,
)
```
Use Claude models for LLM inference.
Installation
```bash
pip install graphiti-core[anthropic]
# or
uv add graphiti-core[anthropic]
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.anthropic_client import AnthropicClient
from graphiti_core.llm_client.config import LLMConfig

llm_client = AnthropicClient(
    config=LLMConfig(
        api_key="your-anthropic-api-key",
        model="claude-3-5-sonnet-20241022",
    )
)
graphiti = Graphiti(
    uri, user, password,
    llm_client=llm_client,
)
```
Use Google’s Gemini models for LLM inference, embeddings, and reranking.
Installation
```bash
pip install graphiti-core[google-genai]
# or
uv add graphiti-core[google-genai]
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig
from graphiti_core.cross_encoder.gemini_reranker_client import GeminiRerankerClient

api_key = "your-google-api-key"

graphiti = Graphiti(
    uri, user, password,
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash",
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001",
        )
    ),
    cross_encoder=GeminiRerankerClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.5-flash-lite",
        )
    ),
)
```
Run local LLMs for privacy and cost savings.
Installation
```bash
# Install Ollama from https://ollama.ai
# Pull models
ollama pull deepseek-r1:7b
ollama pull nomic-embed-text
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

llm_config = LLMConfig(
    api_key="ollama",  # Placeholder
    model="deepseek-r1:7b",
    small_model="deepseek-r1:7b",
    base_url="http://localhost:11434/v1",
)
llm_client = OpenAIGenericClient(config=llm_config)

graphiti = Graphiti(
    uri, user, password,
    llm_client=llm_client,
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            api_key="ollama",
            embedding_model="nomic-embed-text",
            embedding_dim=768,
            base_url="http://localhost:11434/v1",
        )
    ),
    cross_encoder=OpenAIRerankerClient(
        client=llm_client,
        config=llm_config,
    ),
)
```
Use OpenAIGenericClient (not OpenAIClient) for Ollama and other OpenAI-compatible providers.
Use Groq for fast inference with open-source models.
Installation
```bash
pip install graphiti-core[groq]
# or
uv add graphiti-core[groq]
```
Setup
```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.groq_client import GroqClient
from graphiti_core.llm_client.config import LLMConfig

llm_client = GroqClient(
    config=LLMConfig(
        api_key="your-groq-api-key",
        model="llama-3.3-70b-versatile",
    )
)
graphiti = Graphiti(
    uri, user, password,
    llm_client=llm_client,
)
```
Optional Dependencies
Graphiti offers several optional dependencies for extended functionality:
Voyage AI Embeddings
Use Voyage AI for state-of-the-art embeddings:
```bash
pip install graphiti-core[voyageai]
```
Sentence Transformers
Use local embedding models:
```bash
pip install graphiti-core[sentence-transformers]
```
OpenTelemetry Tracing
Enable distributed tracing for observability:
```bash
pip install graphiti-core[tracing]
```
Install multiple optional dependencies at once:
```bash
pip install graphiti-core[falkordb,anthropic,google-genai,voyageai]
```
Concurrency Control
Graphiti’s ingestion pipelines support high concurrency. Control this with the SEMAPHORE_LIMIT environment variable:
```bash
# Default (low concurrency to avoid rate limits)
export SEMAPHORE_LIMIT=10

# Higher concurrency for better performance (if your LLM provider allows)
export SEMAPHORE_LIMIT=50

# Lower concurrency to avoid 429 rate limit errors
export SEMAPHORE_LIMIT=5
```
By default, SEMAPHORE_LIMIT is set to 10 to help prevent 429 rate limit errors from your LLM provider. Increase this value if your provider supports higher throughput.
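The limit behaves like a semaphore gating how many LLM calls are in flight at once. The sketch below illustrates that pattern with plain asyncio; the names call_llm and run_batch are illustrative, not Graphiti’s internals:

```python
import asyncio
import os

SEMAPHORE_LIMIT = int(os.environ.get("SEMAPHORE_LIMIT", "10"))

async def call_llm(task_id: int, semaphore: asyncio.Semaphore) -> int:
    # At most SEMAPHORE_LIMIT of these coroutines run concurrently.
    async with semaphore:
        await asyncio.sleep(0.01)  # stand-in for an LLM request
        return task_id

async def run_batch(n: int) -> list[int]:
    semaphore = asyncio.Semaphore(SEMAPHORE_LIMIT)
    return list(await asyncio.gather(*(call_llm(i, semaphore) for i in range(n))))

results = asyncio.run(run_batch(25))
```

Raising the limit increases throughput only up to your provider’s rate cap; past that, requests start failing with 429 errors.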
Environment Variables
Common environment variables for Graphiti:
```bash
# LLM Provider
OPENAI_API_KEY=your_openai_api_key

# Neo4j Connection
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# FalkorDB Connection
FALKORDB_HOST=localhost
FALKORDB_PORT=6379
FALKORDB_USERNAME=  # Optional
FALKORDB_PASSWORD=  # Optional

# Amazon Neptune
NEPTUNE_HOST=neptune-db://your-cluster-endpoint
NEPTUNE_PORT=8182
AOSS_HOST=your-opensearch-host
AOSS_PORT=443

# Performance
SEMAPHORE_LIMIT=10

# Telemetry (opt-out)
GRAPHITI_TELEMETRY_ENABLED=false
```
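In application code these variables are typically read with local-development fallbacks; a minimal sketch using only the standard library (the default values mirror the local settings above):

```python
import os

# Fall back to local-development defaults when a variable is unset.
neo4j_uri = os.environ.get("NEO4J_URI", "bolt://localhost:7687")
neo4j_user = os.environ.get("NEO4J_USER", "neo4j")
neo4j_password = os.environ.get("NEO4J_PASSWORD", "password")
semaphore_limit = int(os.environ.get("SEMAPHORE_LIMIT", "10"))
```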
Telemetry
Graphiti collects anonymous usage statistics to improve the framework. This includes:
System information (OS, Python version)
Graphiti version
Provider choices (LLM, database, embedder)
Data NOT collected:
Personal information or identifiers
API keys or credentials
Your data, queries, or graph content
IP addresses or hostnames
Disable Telemetry
To opt out of telemetry:
```bash
export GRAPHITI_TELEMETRY_ENABLED=false
```
Or in your Python code:
```python
import os

os.environ['GRAPHITI_TELEMETRY_ENABLED'] = 'false'
```
Telemetry is automatically disabled during test runs when pytest is detected.
Verify Installation
Verify your installation with this simple script:
```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def verify():
    # Initialize Graphiti (adjust connection parameters)
    graphiti = Graphiti(
        "bolt://localhost:7687",
        "neo4j",
        "password",
    )
    try:
        # Add a test episode
        await graphiti.add_episode(
            name="Test Episode",
            episode_body="Test content",
            source=EpisodeType.text,
            reference_time=datetime.now(timezone.utc),
        )
        print("✓ Installation verified successfully!")
    finally:
        await graphiti.close()

if __name__ == "__main__":
    asyncio.run(verify())
```
Next Steps
Quickstart: Build your first knowledge graph
Core Concepts: Learn about episodes, nodes, and edges
Search Guide: Master hybrid search and reranking
Examples: Explore example projects