The Graphiti REST API service provides HTTP endpoints for interacting with the knowledge graph, built with FastAPI for high performance and automatic API documentation.

Overview

The FastAPI server (graphiti/server) implements the core Graphiti operations as REST endpoints, making it easy to integrate Graphiti into web applications, microservices, and non-Python environments.

Docker image: zepai/graphiti

Available tags:
  • latest - Latest stable release
  • 0.22.1 - Specific version (matches the graphiti-core version)

Supported platforms: linux/amd64, linux/arm64

Quick Start

Using Docker Compose

The fastest way to get started is with Docker Compose:
services:
  graph:
    image: zepai/graphiti:latest
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - NEO4J_URI=bolt://neo4j:${NEO4J_PORT}
      - NEO4J_USER=${NEO4J_USER}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD}
    depends_on:
      - neo4j
  
  neo4j:
    image: neo4j:5.22.0
    ports:
      - "7474:7474"  # HTTP
      - "${NEO4J_PORT}:${NEO4J_PORT}"  # Bolt
    volumes:
      - neo4j_data:/data
    environment:
      - NEO4J_AUTH=${NEO4J_USER}/${NEO4J_PASSWORD}

volumes:
  neo4j_data:
Start the service:
docker compose up
The API will be available at http://localhost:8000.

Environment Variables

Required environment variables:
OPENAI_API_KEY=your_openai_api_key
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password
NEO4J_PORT=7687
Optional variables:
  • SEMAPHORE_LIMIT - Control concurrency (default: 10)
  • NEO4J_URI - Override default bolt://neo4j:7687
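Conceptually, SEMAPHORE_LIMIT caps how many episode-processing calls run at once. Here is a minimal sketch of the idea using asyncio.Semaphore (illustrative only, not the server's actual implementation; the function names and the 0.01-second delay are placeholders):

```python
import asyncio

SEMAPHORE_LIMIT = 3  # the server defaults to 10

async def process_episode(sem: asyncio.Semaphore, episode_id: int) -> int:
    async with sem:  # at most SEMAPHORE_LIMIT tasks run this section concurrently
        await asyncio.sleep(0.01)  # stand-in for an LLM or graph call
        return episode_id

async def main() -> list:
    sem = asyncio.Semaphore(SEMAPHORE_LIMIT)
    # gather preserves submission order even though execution is throttled
    return await asyncio.gather(*(process_episode(sem, i) for i in range(10)))

results = asyncio.run(main())
```

Raising the limit increases throughput at the cost of memory and LLM rate-limit pressure, which is why the troubleshooting section suggests tuning it in both directions.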

API Documentation

Once the service is running, interactive API documentation is available:
  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc
  • OpenAPI JSON: http://localhost:8000/openapi.json
These automatically generated docs include:
  • All available endpoints
  • Request/response schemas
  • Interactive “Try it out” functionality
  • Authentication requirements
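Because the spec at /openapi.json is machine-readable, you can also enumerate the server's endpoints programmatically. A small sketch using only the standard library (the helper names are my own; the URL is the one listed above):

```python
import json
import urllib.request

def endpoint_methods(spec: dict) -> dict:
    """Map each path in an OpenAPI spec to its sorted HTTP methods."""
    return {path: sorted(ops) for path, ops in spec.get("paths", {}).items()}

def list_endpoints(base_url: str = "http://localhost:8000") -> dict:
    # Fetch the auto-generated spec from the running service.
    with urllib.request.urlopen(f"{base_url}/openapi.json") as resp:
        return endpoint_methods(json.load(resp))

# Usage (with the service running):
#   for path, methods in list_endpoints().items():
#       print(path, methods)
```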

Core Endpoints

Episode Management

Add Episode

POST /episodes
Content-Type: application/json

{
  "name": "Customer Conversation",
  "episode_body": "User wants to buy running shoes in size 10",
  "source": "text",
  "source_description": "Customer chat",
  "reference_time": "2024-03-15T10:30:00Z",
  "group_id": "customer-123"
}
Response:
{
  "episode_uuid": "e7d4f8a2-1234-5678-90ab-cdef12345678",
  "status": "processing"
}
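The "processing" status means ingestion is asynchronous: the endpoint returns before the episode is fully added to the graph. One way a client might confirm ingestion is to poll the documented GET /episodes listing until the episode's name appears; this sketch is illustrative (the helper names, 50-item limit, and 1-second interval are my own choices):

```python
import json
import time
import urllib.parse
import urllib.request

BASE = "http://localhost:8000"

def episode_ingested(episodes: list, name: str) -> bool:
    """Pure helper: is an episode with this name present in a listing?"""
    return any(ep.get("name") == name for ep in episodes)

def wait_for_episode(group_id: str, name: str, timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        qs = urllib.parse.urlencode({"group_id": group_id, "limit": 50})
        with urllib.request.urlopen(f"{BASE}/episodes?{qs}", timeout=5) as resp:
            data = json.load(resp)
        if episode_ingested(data.get("episodes", []), name):
            return True
        time.sleep(1.0)  # back off between polls
    return False
```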

Get Episodes

GET /episodes?group_id=customer-123&limit=10
Response:
{
  "episodes": [
    {
      "uuid": "e7d4f8a2-...",
      "name": "Customer Conversation",
      "created_at": "2024-03-15T10:30:00Z",
      "content": "User wants to buy running shoes..."
    }
  ],
  "total": 25
}

Search Operations

Search Facts (Entity Relationships)

POST /search/facts
Content-Type: application/json

{
  "query": "running shoes preferences",
  "group_id": "customer-123",
  "center_node_uuid": "n-abc123",
  "num_results": 10
}
Response:
{
  "results": [
    {
      "uuid": "edge-123",
      "fact": "Customer prefers Nike running shoes",
      "score": 0.92,
      "source_node": {"uuid": "n-customer", "name": "John"},
      "target_node": {"uuid": "n-brand", "name": "Nike"}
    }
  ]
}

Search Nodes

POST /search/nodes
Content-Type: application/json

{
  "query": "customer preferences",
  "group_id": "customer-123",
  "num_results": 5
}
Response:
{
  "results": [
    {
      "uuid": "n-pref-001",
      "name": "Shoe Size Preference",
      "summary": "Customer wears size 10 shoes",
      "score": 0.88,
      "labels": ["Preference"]
    }
  ]
}

Entity Operations

Get Entity Edge

GET /entities/edges/{edge_uuid}
Response:
{
  "uuid": "edge-123",
  "fact": "Customer prefers Nike running shoes",
  "created_at": "2024-03-15T10:30:00Z",
  "source_node_uuid": "n-customer",
  "target_node_uuid": "n-brand",
  "valid_at": "2024-03-15T10:30:00Z",
  "invalid_at": null
}

Delete Entity Edge

DELETE /entities/edges/{edge_uuid}

Graph Maintenance

Clear Graph

POST /graph/clear
Content-Type: application/json

{
  "group_id": "customer-123"
}
Response:
{
  "status": "success",
  "message": "Graph cleared for group_id: customer-123"
}

Health Check

GET /health
Response:
{
  "status": "healthy",
  "database": "connected",
  "version": "0.22.1"
}
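In deployment scripts it can be useful to gate on this endpoint before routing traffic. A minimal sketch using only the standard library (the function names are my own; the gating logic checks the two fields shown in the response above):

```python
import json
import urllib.request

def is_healthy(payload: dict) -> bool:
    """Gate on both service status and database connectivity."""
    return payload.get("status") == "healthy" and payload.get("database") == "connected"

def check_health(base_url: str = "http://localhost:8000") -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return is_healthy(json.load(resp))
    except OSError:  # connection refused, DNS failure, timeout, etc.
        return False
```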

Client Examples

Python Client

import requests
from datetime import datetime, timezone

class GraphitiClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    def add_episode(self, name, content, group_id):
        response = requests.post(
            f"{self.base_url}/episodes",
            json={
                "name": name,
                "episode_body": content,
                "source": "text",
                "source_description": "API client",
                "reference_time": datetime.now(timezone.utc).isoformat(),
                "group_id": group_id
            },
            timeout=30,
        )
        response.raise_for_status()  # surface HTTP errors instead of silently parsing them
        return response.json()

    def search_facts(self, query, group_id, num_results=10):
        response = requests.post(
            f"{self.base_url}/search/facts",
            json={
                "query": query,
                "group_id": group_id,
                "num_results": num_results
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

# Usage
client = GraphitiClient()
episode = client.add_episode(
    name="User Preference",
    content="User loves blue running shoes",
    group_id="user-123"
)
print(f"Episode created: {episode['episode_uuid']}")

results = client.search_facts("running shoes", "user-123")
for result in results['results']:
    print(f"Fact: {result['fact']} (score: {result['score']})")

JavaScript/TypeScript Client

class GraphitiClient {
  constructor(private baseUrl: string = 'http://localhost:8000') {}

  async addEpisode(name: string, content: string, groupId: string) {
    const response = await fetch(`${this.baseUrl}/episodes`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        name,
        episode_body: content,
        source: 'text',
        source_description: 'API client',
        reference_time: new Date().toISOString(),
        group_id: groupId
      })
    });
    // fetch does not reject on HTTP error statuses, so check explicitly
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return response.json();
  }

  async searchFacts(query: string, groupId: string, numResults: number = 10) {
    const response = await fetch(`${this.baseUrl}/search/facts`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query, group_id: groupId, num_results: numResults })
    });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return response.json();
  }
}

// Usage
const client = new GraphitiClient();
const episode = await client.addEpisode(
  'User Preference',
  'User loves blue running shoes',
  'user-123'
);
console.log(`Episode created: ${episode.episode_uuid}`);

const results = await client.searchFacts('running shoes', 'user-123');
results.results.forEach((r: any) => {
  console.log(`Fact: ${r.fact} (score: ${r.score})`);
});

cURL Examples

# Add an episode
curl -X POST http://localhost:8000/episodes \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Customer Chat",
    "episode_body": "Customer wants size 10 Nike shoes",
    "source": "text",
    "group_id": "customer-123",
    "reference_time": "2024-03-15T10:30:00Z"
  }'

# Search facts
curl -X POST http://localhost:8000/search/facts \
  -H "Content-Type: application/json" \
  -d '{
    "query": "shoe preferences",
    "group_id": "customer-123",
    "num_results": 5
  }'

# Health check
curl http://localhost:8000/health

Production Deployment

Using Docker

# Pull the latest image
docker pull zepai/graphiti:latest

# Run with environment file
docker run -d \
  --name graphiti-api \
  -p 8000:8000 \
  --env-file .env \
  zepai/graphiti:latest

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphiti-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: graphiti-api
  template:
    metadata:
      labels:
        app: graphiti-api
    spec:
      containers:
      - name: graphiti
        image: zepai/graphiti:0.22.1
        ports:
        - containerPort: 8000
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: graphiti-secrets
              key: openai-api-key
        - name: NEO4J_URI
          value: "bolt://neo4j-service:7687"
        - name: NEO4J_USER
          value: "neo4j"
        - name: NEO4J_PASSWORD
          valueFrom:
            secretKeyRef:
              name: graphiti-secrets
              key: neo4j-password
        - name: SEMAPHORE_LIMIT
          value: "20"
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "2000m"
---
apiVersion: v1
kind: Service
metadata:
  name: graphiti-api-service
spec:
  selector:
    app: graphiti-api
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer

Environment Configuration

Create a .env file:
OPENAI_API_KEY=sk-...
NEO4J_URI=bolt://neo4j.production.internal:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=secure_password_here
NEO4J_PORT=7687
SEMAPHORE_LIMIT=15

Automated Releases

The FastAPI server container is automatically built and published when a new graphiti-core version is released to PyPI. Release workflow:
  1. Triggers when graphiti-core PyPI release completes
  2. Waits for PyPI package availability
  3. Builds multi-platform Docker image (amd64, arm64)
  4. Tags with version number and latest
  5. Pushes to Docker Hub
Only stable releases are built (pre-releases are skipped).

Monitoring

Health Checks

Implement health checks in your orchestration:
# Docker Compose
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s

Logging

The server uses structured logging. Configure log level:
LOG_LEVEL=INFO  # DEBUG, INFO, WARNING, ERROR

Metrics

FastAPI does not expose metrics out of the box; an integration such as prometheus-fastapi-instrumentator can serve request metrics at /metrics.

Security

API Authentication

For production, implement authentication middleware:
import os
import secrets

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

app = FastAPI()
security = HTTPBearer()

async def verify_token(credentials: HTTPAuthorizationCredentials = Security(security)):
    # Constant-time comparison avoids leaking token contents via timing
    expected = os.getenv("API_TOKEN", "")
    if not secrets.compare_digest(credentials.credentials, expected):
        raise HTTPException(status_code=403, detail="Invalid token")
    return credentials.credentials

# Apply to endpoints
@app.post("/episodes", dependencies=[Depends(verify_token)])
async def add_episode():
    ...

Rate Limiting

Implement rate limiting with slowapi:
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

app = FastAPI()
limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/episodes")
@limiter.limit("10/minute")
async def add_episode(request: Request):
    ...

Troubleshooting

Service Won’t Start

Check logs:
docker logs graphiti-api
Common issues:
  • Missing environment variables
  • Cannot connect to Neo4j
  • Port 8000 already in use

Slow Response Times

Solutions:
  • Increase SEMAPHORE_LIMIT for better concurrency
  • Scale horizontally (multiple instances)
  • Optimize Neo4j indices
  • Use connection pooling

Memory Issues

Solutions:
  • Reduce SEMAPHORE_LIMIT to lower memory usage
  • Increase container memory limits
  • Implement request size limits
