Installation
Install Graphiti with Anthropic support:
Configuration
Environment Variables
.env
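Set your API key in the environment. A minimal sketch (the variable name follows the Anthropic SDK convention; the value is a placeholder):

```
ANTHROPIC_API_KEY=your-anthropic-api-key
```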
Basic Setup
Initialize Graphiti with Anthropic:
Supported Models
Claude 4.5 Models
- claude-haiku-4-5-latest (recommended): Fast, cost-effective, 64K output tokens
- claude-sonnet-4-5-latest: Balanced performance, 64K output tokens
- claude-sonnet-4-5-20250929: Specific version of Sonnet 4.5
Claude 3.7 Models
- claude-3-7-sonnet-latest: Advanced reasoning, 64K output tokens
- claude-3-7-sonnet-20250219: Specific version
Claude 3.5 Models
- claude-3-5-haiku-latest: Fast, 8K output tokens
- claude-3-5-sonnet-latest: Balanced, 8K output tokens
- claude-3-5-haiku-20241022: Specific version
- claude-3-5-sonnet-20241022: Specific version
Legacy Models
- claude-3-opus-latest: Highest capability, 4K output tokens
- claude-3-sonnet-20240229: Previous generation
- claude-3-haiku-20240307: Previous generation
Model Selection
Graphiti uses two models:
- Primary model: For complex entity extraction and relationship detection
- Small model: For simpler classification tasks
Configuration Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | From env | Anthropic API key |
| model | str | "claude-haiku-4-5-latest" | Primary LLM model |
| small_model | str | Same as model | Model for simpler tasks |
| temperature | float | 0.7 | Sampling temperature (0-1) |
| max_tokens | int | Model-specific | Maximum tokens to generate |
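As an illustration, these parameters can be modeled as a config object. The dataclass below is a self-contained stand-in for demonstration, not Graphiti's actual config class:

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnthropicLLMConfig:
    # Illustrative stand-in mirroring the parameter table above;
    # not Graphiti's real LLMConfig.
    api_key: Optional[str] = None
    model: str = "claude-haiku-4-5-latest"
    small_model: Optional[str] = None   # "Same as model" when unset
    temperature: float = 0.7
    max_tokens: Optional[int] = None    # model-specific when unset

    def __post_init__(self):
        if self.api_key is None:
            # "From env" default: read the standard Anthropic variable.
            self.api_key = os.environ.get("ANTHROPIC_API_KEY")
        if self.small_model is None:
            self.small_model = self.model

cfg = AnthropicLLMConfig()
print(cfg.small_model)  # defaults to the primary model
```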
Maximum Output Tokens
Anthropic models have different max output token limits:
| Model Family | Max Output Tokens |
|---|---|
| Claude 4.5 | 65,536 (64K) |
| Claude 3.7 | 65,536 (64K) |
| Claude 3.5 | 8,192 (8K) |
| Claude 3 | 4,096 (4K) |
| Claude 2 | 4,096 (4K) |
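For programmatic use, the table above can be encoded as a small lookup. The helper name and matching strategy are illustrative, not part of Graphiti's API:

```python
# Max output token limits by model family (values from the table above).
# Handles names like "claude-haiku-4-5-latest" and "claude-3-5-sonnet-20241022".
_FAMILY_LIMITS = [
    ("4-5", 65_536),   # Claude 4.5
    ("3-7", 65_536),   # Claude 3.7
    ("3-5", 8_192),    # Claude 3.5
    ("-3-", 4_096),    # Claude 3 (opus/sonnet/haiku)
    ("-2", 4_096),     # Claude 2
]

def max_output_tokens(model: str) -> int:
    # More specific markers are checked first, so "claude-3-5-haiku-latest"
    # matches "3-5" before the generic "-3-" Claude 3 marker.
    for marker, limit in _FAMILY_LIMITS:
        if marker in model:
            return limit
    raise ValueError(f"unrecognized Claude model: {model}")

print(max_output_tokens("claude-3-5-haiku-latest"))   # 8192
print(max_output_tokens("claude-haiku-4-5-latest"))   # 65536
```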
Structured Output
Anthropic doesn’t have native structured output like OpenAI. Graphiti uses a tool-based approach to ensure valid JSON responses:
- More reliable structured outputs
- Automatic retry on validation errors
- Graceful fallback handling
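The tool-based pattern works by forcing the model to "call" a tool whose input schema is the desired output structure, so the response arrives as structured arguments rather than free text. A self-contained sketch of the validation side (the schema and helper are illustrative, not Graphiti's internal code):

```python
import json

# Illustrative tool definition: the input schema is the output structure
# we want the model to produce.
ENTITY_TOOL = {
    "name": "extract_entities",
    "input_schema": {
        "type": "object",
        "properties": {"entities": {"type": "array"}},
        "required": ["entities"],
    },
}

def parse_tool_response(raw: str, required=("entities",)) -> dict:
    """Validate tool arguments; raise so the caller can retry with schema hints."""
    data = json.loads(raw)  # raises on malformed JSON, also triggering a retry
    missing = [k for k in required if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

print(parse_tool_response('{"entities": ["Alice", "Bob"]}'))
```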
Complete Example
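Putting it together, a sketch of end-to-end usage. Import paths, constructor arguments, and the add_episode parameters are assumptions based on graphiti-core's client API and may differ in your installed version; the Neo4j connection details are placeholders:

```python
import asyncio
from datetime import datetime, timezone

# Assumed import paths from graphiti-core; verify against your version.
from graphiti_core import Graphiti
from graphiti_core.llm_client.anthropic_client import AnthropicClient
from graphiti_core.llm_client.config import LLMConfig

async def main():
    llm_client = AnthropicClient(
        config=LLMConfig(
            model="claude-haiku-4-5-latest",
            temperature=0.7,
        )
    )
    graphiti = Graphiti(
        "bolt://localhost:7687", "neo4j", "password",  # placeholder credentials
        llm_client=llm_client,
    )
    try:
        await graphiti.add_episode(
            name="conversation-1",
            episode_body="Alice told Bob the launch moved to Friday.",
            source_description="chat transcript",
            reference_time=datetime.now(timezone.utc),
        )
    finally:
        await graphiti.close()

if __name__ == "__main__":
    asyncio.run(main())
```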
Error Handling
Graphiti automatically handles:
- Rate Limit Errors: Exponential backoff and retry
- Content Policy Violations: Converted to RefusalError (no retry)
- API Errors: Automatic retry with error context
- Validation Errors: Retry with schema hints
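The retry policy above can be sketched in plain Python. The exception classes here are stand-ins (RefusalError is named on this page; the import path is not), and the helper is illustrative, not Graphiti's internal code:

```python
import time

class RefusalError(Exception):
    """Stand-in for Graphiti's refusal error (content policy, never retried)."""

class RateLimitError(Exception):
    """Stand-in for a provider rate-limit error (retried with backoff)."""

def call_with_retries(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    # Exponential backoff on rate limits; refusals propagate immediately.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RefusalError:
            raise                              # no retry on policy violations
        except RateLimitError:
            if attempt == max_retries:
                raise
            sleep(base_delay * 2 ** attempt)   # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "ok"

print(call_with_retries(flaky, sleep=lambda s: None))  # ok, after 2 retries
```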
Rate Limiting
Adjust concurrency to avoid rate limits:
.env
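Lowering Graphiti's concurrency reduces the number of parallel LLM calls. A sketch, assuming Graphiti's SEMAPHORE_LIMIT environment variable controls this (the value is illustrative):

```
SEMAPHORE_LIMIT=5
```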
When to Use Anthropic
Choose Anthropic if you:
- Need extended context windows (200K tokens for Claude 3)
- Want strong reasoning and analysis capabilities
- Prefer Claude’s conversational style
- Need specific safety and content filtering
Consider OpenAI instead if you:
- Need native structured output support
- Want the latest GPT-5 reasoning models
- Prefer function calling over tool use
- Need faster response times
Cost Optimization
- Use Haiku Models: Claude Haiku is cost-effective for most tasks
- Batch Operations: Process multiple items together
- Token Limits: Set appropriate max_tokens for your use case
- Model Selection: Use cheaper models for simpler tasks
Best Practices
- Start with Haiku 4.5: Best cost/performance ratio
- Use Sonnet for Complex Tasks: When you need deeper reasoning
- Monitor Token Usage: Track costs via Anthropic dashboard
- Set Appropriate Limits: Configure max_tokens based on task complexity
- Handle Refusals: Catch RefusalError for content policy violations