Langfuse provides advanced LLM observability, analytics, and performance monitoring for PentAGI’s AI agents. Track every interaction, analyze token usage, debug issues, and optimize model performance in real time.

Overview

Langfuse is a comprehensive observability platform designed specifically for LLM applications. It captures and analyzes:
  • Agent Interactions: Complete traces of AI agent conversations and decision-making
  • Token Analytics: Detailed token usage and cost tracking across all models
  • Performance Metrics: Response times, latency, and throughput analysis
  • Model Comparison: Side-by-side comparison of different LLM providers
  • Error Tracking: Comprehensive error logging and debugging information

Architecture

The Langfuse stack consists of several components:
  • Langfuse Web: Frontend UI for visualization and analysis (port 4000)
  • Langfuse Worker: Background processing for analytics and ingestion
  • PostgreSQL: Primary database for metadata and traces
  • ClickHouse: High-performance analytics database for metrics
  • Redis: Caching and rate limiting
  • MinIO: S3-compatible storage for event logs and media

Setup

1. Configure Environment Variables

Edit your .env file with Langfuse settings:
.env
# Enable Langfuse integration
LANGFUSE_BASE_URL=http://langfuse-web:3000
LANGFUSE_PROJECT_ID=cm47619l0000872mcd2dlbqwb
LANGFUSE_PUBLIC_KEY=pk-lf-5946031c-ae6c-4451-98d2-9882a59e1707
LANGFUSE_SECRET_KEY=sk-lf-d9035680-89dd-4950-8688-7870720bf359

# Database credentials
LANGFUSE_POSTGRES_USER=postgres
LANGFUSE_POSTGRES_PASSWORD=postgres
LANGFUSE_POSTGRES_DB=langfuse

# Security settings (CHANGE THESE!)
LANGFUSE_SALT=myglobalsalt
LANGFUSE_ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000
LANGFUSE_NEXTAUTH_SECRET=mysecret

# Admin credentials
LANGFUSE_INIT_USER_EMAIL=[email protected]
LANGFUSE_INIT_USER_NAME=admin
LANGFUSE_INIT_USER_PASSWORD=P3nTagIsD0d

# ClickHouse settings
LANGFUSE_CLICKHOUSE_USER=clickhouse
LANGFUSE_CLICKHOUSE_PASSWORD=clickhouse

# Redis settings
LANGFUSE_REDIS_AUTH=myredissecret

# S3/MinIO settings
LANGFUSE_S3_ACCESS_KEY_ID=minio
LANGFUSE_S3_SECRET_ACCESS_KEY=miniosecret
Security: Change all default passwords and keys before deploying to production. Generate a secure encryption key with: openssl rand -hex 32
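The security note above can be scripted. A minimal sketch using `openssl` to generate fresh values for the three secrets (variable names match the `.env` keys shown above; paste the output into your `.env`):

```shell
# Generate fresh secrets for the Langfuse .env (run once, then copy the output)
LANGFUSE_SALT=$(openssl rand -hex 16)            # 16 random bytes, hex-encoded
LANGFUSE_ENCRYPTION_KEY=$(openssl rand -hex 32)  # must be exactly 32 bytes (64 hex chars)
LANGFUSE_NEXTAUTH_SECRET=$(openssl rand -hex 32)

echo "LANGFUSE_SALT=$LANGFUSE_SALT"
echo "LANGFUSE_ENCRYPTION_KEY=$LANGFUSE_ENCRYPTION_KEY"
echo "LANGFUSE_NEXTAUTH_SECRET=$LANGFUSE_NEXTAUTH_SECRET"
```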
2. Download Docker Compose File

Download the Langfuse Docker Compose configuration:
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-langfuse.yml
3. Start Langfuse Stack

Launch Langfuse alongside PentAGI:
docker compose -f docker-compose.yml -f docker-compose-langfuse.yml up -d
Verify services are running:
docker compose -f docker-compose.yml -f docker-compose-langfuse.yml ps langfuse-web langfuse-worker
4. Access Langfuse UI

Open your browser and navigate to:
http://localhost:4000
Login with credentials from your .env file:
  • Email: LANGFUSE_INIT_USER_EMAIL
  • Password: LANGFUSE_INIT_USER_PASSWORD

Configuration

Environment Variables

Key configuration options for Langfuse:
Variable | Description | Default
LANGFUSE_LISTEN_PORT | Web UI port | 4000
LANGFUSE_NEXTAUTH_URL | Public URL for authentication | http://localhost:4000
LANGFUSE_SALT | Salt for hashing | myglobalsalt
LANGFUSE_ENCRYPTION_KEY | Encryption key (32 bytes hex) | Required
LANGFUSE_TELEMETRY_ENABLED | Enable usage telemetry | false
LANGFUSE_READ_FROM_CLICKHOUSE_ONLY | Use ClickHouse for reads | true
LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES | Enable beta features | true
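For example, to expose the UI on a different host port, the port and authentication URL variables from the table must be changed together, since NextAuth redirects back to LANGFUSE_NEXTAUTH_URL. An illustrative `.env` fragment (hostname is a placeholder):
.env
# Serve the UI on port 4100 behind an internal hostname
LANGFUSE_LISTEN_PORT=4100
LANGFUSE_NEXTAUTH_URL=http://observability.internal:4100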

OpenTelemetry Integration

To integrate Langfuse with the observability stack, enable OTLP export:
.env
LANGFUSE_OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4318
LANGFUSE_OTEL_SERVICE_NAME=langfuse
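On the collector side, a matching OTLP/HTTP receiver must listen on that port. A minimal OpenTelemetry Collector configuration sketch (the `otelcol` service name comes from the endpoint above; the filename is illustrative):
otelcol-config.yml
receivers:
  otlp:
    protocols:
      http:
        # Accept OTLP over HTTP on the standard port used in the endpoint above
        endpoint: 0.0.0.0:4318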

Storage Configuration

Langfuse uses MinIO for S3-compatible storage:
.env
# S3 bucket configuration
LANGFUSE_S3_BUCKET=langfuse
LANGFUSE_S3_REGION=auto
LANGFUSE_S3_ENDPOINT=http://langfuse-minio:9000
LANGFUSE_S3_FORCE_PATH_STYLE=true

# Batch export settings
LANGFUSE_S3_BATCH_EXPORT_ENABLED=true
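The same variables can instead point Langfuse at an external S3 bucket rather than the bundled MinIO. An illustrative fragment (endpoint and region are placeholders for your provider's values):
.env
# Example: external S3 instead of the bundled MinIO
LANGFUSE_S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
LANGFUSE_S3_REGION=us-east-1
LANGFUSE_S3_FORCE_PATH_STYLE=false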

Usage

Viewing Traces

  1. Navigate to the Traces tab in Langfuse UI
  2. Browse recent AI agent interactions
  3. Click on a trace to see detailed conversation flow
  4. Examine tool calls, model responses, and token usage

Analyzing Performance

  1. Go to the Analytics dashboard
  2. Review token consumption by model and agent type
  3. Identify slow requests and optimize accordingly
  4. Track costs across different LLM providers

Debugging Issues

  1. Use Filters to isolate problematic traces
  2. Search by error messages or status codes
  3. Review full request/response payloads
  4. Export traces for offline analysis

Features

LLM Tracing

Automatic capture of:
  • User prompts and system messages
  • Model completions and reasoning
  • Function/tool calls and results
  • Token counts and costs
  • Latency and performance metrics

Analytics Dashboard

Real-time insights including:
  • Token usage trends over time
  • Cost analysis by model and agent
  • Request volume and throughput
  • Error rates and failure patterns
  • Model performance comparison

Prompt Management

Version control for prompts:
  • Store and track prompt templates
  • Compare different prompt versions
  • A/B test prompt variations
  • Roll back to previous versions

Dataset Evaluation

Test and validate LLM outputs:
  • Create evaluation datasets
  • Run batch evaluations
  • Track model accuracy over time
  • Compare results across models

Services

Langfuse Web

The main web interface running on port 4000:
docker-compose-langfuse.yml
langfuse-web:
  image: langfuse/langfuse:3
  ports:
    - "127.0.0.1:4000:3000"
  environment:
    DATABASE_URL: postgresql://postgres:postgres@langfuse-postgres:5432/langfuse
    CLICKHOUSE_URL: http://langfuse-clickhouse:8123

Langfuse Worker

Background processing service:
docker-compose-langfuse.yml
langfuse-worker:
  image: langfuse/langfuse-worker:3
  depends_on:
    - langfuse-postgres
    - langfuse-clickhouse
    - langfuse-redis
    - langfuse-minio

PostgreSQL Database

Stores traces and metadata:
docker-compose-langfuse.yml
langfuse-postgres:
  image: postgres:16
  volumes:
    - langfuse-postgres-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: langfuse

ClickHouse Database

High-performance analytics storage:
docker-compose-langfuse.yml
langfuse-clickhouse:
  image: clickhouse/clickhouse-server:24
  volumes:
    - langfuse-clickhouse-data:/var/lib/clickhouse

Troubleshooting

Connection Issues

If PentAGI cannot connect to Langfuse:
# Check Langfuse services are running
docker compose ps | grep langfuse

# Verify network connectivity
docker exec pentagi ping langfuse-web

# Check Langfuse logs
docker compose logs langfuse-web

Performance Issues

If Langfuse UI is slow:
  1. Check ClickHouse is running properly:
    docker compose logs langfuse-clickhouse
    
  2. Verify database migrations completed:
    docker compose logs langfuse-worker | grep migration
    
  3. Increase resource limits in docker-compose-langfuse.yml
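Step 3 above can be sketched as a Compose fragment. Assuming the ClickHouse service is named langfuse-clickhouse (as the logs command above suggests), a `deploy.resources` block raises its limits; the values here are illustrative and should be sized to your host:
docker-compose-langfuse.yml
langfuse-clickhouse:
  deploy:
    resources:
      limits:
        cpus: "2.0"
        memory: 4G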

Data Not Appearing

If traces are not showing up:
  1. Verify PentAGI configuration:
    docker exec pentagi env | grep LANGFUSE
    
  2. Check worker is processing events:
    docker compose logs -f langfuse-worker
    
  3. Ensure API keys match between PentAGI and Langfuse

Best Practices

Security

  • Change all default passwords immediately
  • Use strong encryption keys (32 bytes minimum)
  • Enable TLS/SSL for production deployments
  • Restrict database access to internal networks
  • Regularly rotate API keys and secrets

Performance

  • Configure Redis for optimal caching
  • Tune ClickHouse batch write settings
  • Use connection pooling for high throughput
  • Monitor disk usage for PostgreSQL and ClickHouse
  • Archive old traces to object storage

Monitoring

  • Set up alerts for high error rates
  • Track token usage against budget limits
  • Monitor database performance metrics
  • Review cost trends regularly
  • Audit access logs periodically
