
Overview

PentAGI uses Docker Compose for streamlined deployment, combining a core stack with several optional add-on stacks:
  • Main Stack (docker-compose.yml) - Core PentAGI services
  • Langfuse Stack (docker-compose-langfuse.yml) - LLM observability and analytics
  • Graphiti Stack (docker-compose-graphiti.yml) - Knowledge graph integration
  • Observability Stack (docker-compose-observability.yml) - Monitoring and metrics

System Requirements

  • CPU - minimum 2 vCPU cores
  • Memory - minimum 4GB RAM
  • Storage - 20GB free disk space
  • Network - internet access to pull container images
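A quick way to sanity-check a host against these minimums (a hedged sketch; the memory check assumes a Linux /proc, with fallbacks so it degrades gracefully elsewhere):

```shell
# Sketch: compare this host against the documented minimums (2 vCPU, 4GB RAM, 20GB disk).
# Assumes Linux coreutils; falls back to sysctl/zero on other platforms.
cores=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo 2>/dev/null || echo 0)
disk_gb=$(df -Pk . | awk 'NR==2 {printf "%d", $4/1024/1024}')
echo "cores=${cores} mem=${mem_gb}GB free_disk=${disk_gb}GB"
[ "$cores" -ge 2 ]    && echo "CPU: ok"     || echo "CPU: below minimum (2 vCPU)"
[ "$mem_gb" -ge 4 ]   && echo "Memory: ok"  || echo "Memory: below minimum (4GB)"
[ "$disk_gb" -ge 20 ] && echo "Storage: ok" || echo "Storage: below minimum (20GB)"
```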

Quick Start

1. Create working directory

mkdir pentagi && cd pentagi

2. Download environment file

curl -o .env https://raw.githubusercontent.com/vxcontrol/pentagi/master/.env.example

3. Download provider examples

curl -o example.custom.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/custom-openai.provider.yml
curl -o example.ollama.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/ollama-llama318b.provider.yml
4. Configure LLM providers

Edit the .env file and set at least one LLM provider:
# Required: At least one LLM provider
OPEN_AI_KEY=your_openai_key
# OR
ANTHROPIC_API_KEY=your_anthropic_key
# OR
GEMINI_API_KEY=your_gemini_key
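For scripted setups, a small idempotent helper can set or update a key without duplicating it. This is a hypothetical sketch (not part of PentAGI), demonstrated against a scratch file so a real .env stays untouched:

```shell
# Hypothetical helper: set or update KEY=VALUE in an env file idempotently.
set_env() {
  key=$1; val=$2; file=${3:-.env}
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    # Key exists: replace its value in place (keeps a .bak copy).
    sed -i.bak "s|^${key}=.*|${key}=${val}|" "$file"
  else
    # Key missing: append it.
    printf '%s=%s\n' "$key" "$val" >> "$file"
  fi
}

# Demo against a scratch file; pass ".env" (or omit the argument) for real runs.
set_env OPEN_AI_KEY sk-first demo.env
set_env OPEN_AI_KEY sk-second demo.env   # updates, does not duplicate
cat demo.env                             # → OPEN_AI_KEY=sk-second
```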
5. Download and start the main stack

curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose.yml
docker compose up -d
6. Access PentAGI

Visit https://localhost:8443
Default credentials: [email protected] / admin

If you encounter errors about pentagi-network, observability-network, or langfuse-network, start the main stack (docker compose up -d) first so these networks are created before bringing up the optional stacks.

Main Stack Services

The main docker-compose.yml includes:

Core Services

| Service | Image | Port | Description |
|---|---|---|---|
| pentagi | vxcontrol/pentagi:latest | 8443 | Main application server |
| pgvector | vxcontrol/pgvector:latest | 5432 | PostgreSQL with vector extension |
| scraper | vxcontrol/scraper:latest | 9443 | Web scraping service |
| pgexporter | quay.io/prometheuscommunity/postgres-exporter | 9187 | Postgres metrics exporter |

Networks

networks:
  pentagi-network:
    driver: bridge
  observability-network:
    driver: bridge
  langfuse-network:
    driver: bridge

Volumes

volumes:
  pentagi-data:          # Application data
  pentagi-ssl:           # SSL certificates
  scraper-ssl:           # Scraper certificates
  pentagi-postgres-data: # Database storage

Langfuse Stack

Langfuse provides advanced LLM observability and analytics.
1. Configure Langfuse credentials

Edit the .env file:
# Langfuse Database
LANGFUSE_POSTGRES_USER=postgres
LANGFUSE_POSTGRES_PASSWORD=changeme
LANGFUSE_CLICKHOUSE_USER=clickhouse
LANGFUSE_CLICKHOUSE_PASSWORD=changeme

# Langfuse Security
LANGFUSE_SALT=random_salt_string
LANGFUSE_ENCRYPTION_KEY=$(openssl rand -hex 32)
LANGFUSE_NEXTAUTH_SECRET=random_secret

# Langfuse Admin
LANGFUSE_INIT_USER_EMAIL=[email protected]
LANGFUSE_INIT_USER_PASSWORD=secure_password

# Langfuse API Keys
LANGFUSE_INIT_PROJECT_PUBLIC_KEY=pk-lf-$(uuidgen)
LANGFUSE_INIT_PROJECT_SECRET_KEY=sk-lf-$(uuidgen)

# S3 Storage
LANGFUSE_S3_ACCESS_KEY_ID=minio
LANGFUSE_S3_SECRET_ACCESS_KEY=miniosecret

# Redis
LANGFUSE_REDIS_AUTH=myredissecret
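Note that Docker Compose reads .env values literally, so expressions like `$(openssl rand -hex 32)` must be expanded by your shell before they land in the file. One way to do that (a sketch writing to a scratch file; `openssl rand` is used here for all values for portability, though the `uuidgen` form above works equally for the project keys):

```shell
# Sketch: generate the random Langfuse secrets in the shell, since Compose
# does not perform command substitution inside .env files.
{
  echo "LANGFUSE_SALT=$(openssl rand -hex 16)"
  echo "LANGFUSE_ENCRYPTION_KEY=$(openssl rand -hex 32)"
  echo "LANGFUSE_NEXTAUTH_SECRET=$(openssl rand -hex 32)"
  echo "LANGFUSE_INIT_PROJECT_PUBLIC_KEY=pk-lf-$(openssl rand -hex 16)"
  echo "LANGFUSE_INIT_PROJECT_SECRET_KEY=sk-lf-$(openssl rand -hex 16)"
} > langfuse-secrets.env
cat langfuse-secrets.env   # review, then merge these lines into .env
```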
2. Enable Langfuse integration

Add to .env:
LANGFUSE_BASE_URL=http://langfuse-web:3000
LANGFUSE_PROJECT_ID=${LANGFUSE_INIT_PROJECT_ID}
LANGFUSE_PUBLIC_KEY=${LANGFUSE_INIT_PROJECT_PUBLIC_KEY}
LANGFUSE_SECRET_KEY=${LANGFUSE_INIT_PROJECT_SECRET_KEY}
3. Download and start the Langfuse stack

curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-langfuse.yml
docker compose -f docker-compose.yml -f docker-compose-langfuse.yml up -d
4. Access Langfuse UI

Visit http://localhost:4000
Use the credentials from LANGFUSE_INIT_USER_EMAIL / LANGFUSE_INIT_USER_PASSWORD.

Langfuse Services

| Service | Image | Port | Description |
|---|---|---|---|
| langfuse-web | langfuse/langfuse:3 | 4000 | Web interface |
| langfuse-worker | langfuse/langfuse-worker:3 | 3030 | Background worker |
| postgres | postgres:16 | - | Langfuse database |
| clickhouse | clickhouse/clickhouse-server:24 | - | Analytics database |
| redis | redis:7 | - | Cache and queue |
| minio | minio/minio | - | S3-compatible storage |

Graphiti Stack

Graphiti provides temporal knowledge graph capabilities powered by Neo4j.
1. Configure Graphiti

Edit the .env file:
# Enable Graphiti
GRAPHITI_ENABLED=true
GRAPHITI_TIMEOUT=30
GRAPHITI_URL=http://graphiti:8000
GRAPHITI_MODEL_NAME=gpt-5-mini

# Neo4j Configuration
NEO4J_USER=neo4j
NEO4J_DATABASE=neo4j
NEO4J_PASSWORD=secure_neo4j_password
NEO4J_URI=bolt://neo4j:7687

# Required: OpenAI key for entity extraction
OPEN_AI_KEY=your_openai_api_key
2. Download and start the Graphiti stack

curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-graphiti.yml
docker compose -f docker-compose.yml -f docker-compose-graphiti.yml up -d
3. Verify Graphiti is running

# Check service health
docker compose -f docker-compose.yml -f docker-compose-graphiti.yml ps graphiti neo4j

# View Graphiti logs
docker compose -f docker-compose.yml -f docker-compose-graphiti.yml logs -f graphiti
4. Access Neo4j Browser (optional)

Visit http://localhost:7474
Log in with NEO4J_USER / NEO4J_PASSWORD.

Graphiti Services

| Service | Image | Port | Description |
|---|---|---|---|
| graphiti | vxcontrol/graphiti:latest | 8000 | Knowledge graph API |
| neo4j | neo4j:5.26.2 | 7474, 7687 | Graph database |
Graphiti automatically extracts and stores structured knowledge from agent interactions, building a graph of entities, relationships, and temporal context.

Observability Stack

Comprehensive monitoring with Grafana, Prometheus, Jaeger, and Loki.
1. Enable OpenTelemetry integration

Edit the .env file:
# Enable observability
OTEL_HOST=otelcol:8148
For Langfuse integration:
LANGFUSE_OTEL_EXPORTER_OTLP_ENDPOINT=http://otelcol:4318
2. Download and start the observability stack

curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-observability.yml
docker compose -f docker-compose.yml -f docker-compose-observability.yml up -d
3. Access Grafana

Visit http://localhost:3000
Default credentials: admin / admin

Observability Services

| Service | Image | Port | Description |
|---|---|---|---|
| grafana | grafana/grafana:11.4.0 | 3000 | Visualization dashboards |
| victoriametrics | victoriametrics/victoria-metrics | 8428 | Metrics storage |
| jaeger | jaegertracing/all-in-one:1.56.0 | 16686 | Distributed tracing |
| loki | grafana/loki:3.3.2 | 3100 | Log aggregation |
| otel | otel/opentelemetry-collector-contrib | 8148, 4318 | Telemetry collector |
| clickstore | clickhouse/clickhouse-server:24 | - | Trace storage |
| node-exporter | prom/node-exporter | 9100 | Node metrics |
| cadvisor | gcr.io/cadvisor/cadvisor | 8080 | Container metrics |

Running All Stacks Together

To run all stacks simultaneously:
# Download all compose files
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose.yml
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-langfuse.yml
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-graphiti.yml
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-observability.yml

# Start all stacks
docker compose -f docker-compose.yml \
  -f docker-compose-langfuse.yml \
  -f docker-compose-graphiti.yml \
  -f docker-compose-observability.yml \
  up -d

# View all running services
docker compose -f docker-compose.yml \
  -f docker-compose-langfuse.yml \
  -f docker-compose-graphiti.yml \
  -f docker-compose-observability.yml \
  ps

Convenience Aliases

Add to your shell configuration:
alias pentagi="docker compose -f docker-compose.yml -f docker-compose-langfuse.yml -f docker-compose-graphiti.yml -f docker-compose-observability.yml"
alias pentagi-up="pentagi up -d"
alias pentagi-down="pentagi down"
alias pentagi-logs="pentagi logs -f"
alias pentagi-ps="pentagi ps"
Usage:
pentagi-up        # Start all services
pentagi-ps        # View service status
pentagi-logs      # Follow logs
pentagi-down      # Stop all services
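Aliases only expand in interactive shells; inside scripts and CI jobs they are ignored. A shell function (an equivalent sketch of the alias above) works in both contexts:

```shell
# Function equivalent of the `pentagi` alias; usable from scripts as well.
pentagi() {
  docker compose \
    -f docker-compose.yml \
    -f docker-compose-langfuse.yml \
    -f docker-compose-graphiti.yml \
    -f docker-compose-observability.yml \
    "$@"
}

# e.g. pentagi up -d / pentagi ps / pentagi logs -f
```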

Service Management

Start Services

# Start main stack
docker compose up -d

# Start with specific stack
docker compose -f docker-compose.yml -f docker-compose-langfuse.yml up -d

# Start in foreground (see logs)
docker compose up

Stop Services

# Stop all services
docker compose down

# Stop and remove volumes (⚠️ data loss)
docker compose down -v

View Logs

# All services
docker compose logs -f

# Specific service
docker compose logs -f pentagi

# Last 100 lines
docker compose logs --tail=100 pentagi

Restart Service

# Restart single service
docker compose restart pentagi

# Restart with rebuild
docker compose up -d --force-recreate pentagi

Update Images

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d

Environment Configuration

Security Variables

Change these default values before deploying to production!
# Main Security
COOKIE_SIGNING_SALT=random_salt_string
PUBLIC_URL=https://pentagi.example.com
SERVER_SSL_CRT=/path/to/cert.pem
SERVER_SSL_KEY=/path/to/key.pem

# Database Credentials
PENTAGI_POSTGRES_USER=postgres
PENTAGI_POSTGRES_PASSWORD=secure_password

# Neo4j Credentials
NEO4J_USER=neo4j
NEO4J_PASSWORD=secure_password

# Scraper Credentials
LOCAL_SCRAPER_USERNAME=someuser
LOCAL_SCRAPER_PASSWORD=secure_password

Network Configuration

# Listen on specific IP (default: 127.0.0.1)
PENTAGI_LISTEN_IP=0.0.0.0
PENTAGI_LISTEN_PORT=8443

# Scraper
SCRAPER_LISTEN_IP=0.0.0.0
SCRAPER_LISTEN_PORT=9443

# Database
PGVECTOR_LISTEN_IP=0.0.0.0
PGVECTOR_LISTEN_PORT=5432

Volume Paths

# Custom volume paths
PENTAGI_DATA_DIR=/opt/pentagi/data
PENTAGI_SSL_DIR=/opt/pentagi/ssl
PENTAGI_DOCKER_SOCKET=/var/run/docker.sock
PENTAGI_DOCKER_CERT_PATH=/opt/pentagi/docker/ssl

Health Checks

Check Service Status

# All services
docker compose ps

# Specific service health
docker inspect --format='{{.State.Health.Status}}' pentagi

Database Connection

# Connect to PostgreSQL
docker compose exec pgvector psql -U postgres -d pentagidb

# Check database size
docker compose exec pgvector psql -U postgres -d pentagidb -c "\l+"

Service Endpoints

# Test PentAGI API
curl -k https://localhost:8443/health

# Test Langfuse
curl http://localhost:4000/api/health

# Test Graphiti
curl http://localhost:8000/healthcheck

# Test Grafana
curl http://localhost:3000/api/health
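The endpoints above come up at different speeds after `docker compose up -d`, so provisioning scripts often poll instead of checking once. A hypothetical retry helper: `curl -ksf` fails on connection errors and non-2xx responses, so the loop keeps polling until the service answers or the retry budget runs out.

```shell
# Hypothetical helper: poll a URL until it responds or N attempts are exhausted.
wait_for() {
  url=$1; tries=${2:-30}
  i=0
  until curl -ksf "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1   # give up after N attempts
    fi
    sleep 1
  done
}

# Usage (commented out; run once the stack is starting):
# wait_for https://localhost:8443/health && echo "PentAGI is healthy"
# wait_for http://localhost:4000/api/health && echo "Langfuse is healthy"
```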

Resource Limits

The Docker Compose files include resource constraints, and the same limits can be applied to standalone containers:
# Example: dind container in worker node setup
docker run -d \
  --cpus 2 \
  --memory 2G \
  --name docker-dind \
  --restart always \
  docker:dind
For production deployments, adjust resources based on workload:
services:
  pentagi:
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          cpus: '2'
          memory: 4G

Next Steps

Worker Node Setup

Deploy distributed architecture for production

Production Best Practices

Harden security and optimize performance

Troubleshooting

Common issues and solutions

Configuration

Complete environment variable reference
