## System Requirements

Before installing PentAGI, ensure your system meets these requirements:

### Minimum Requirements

- **Operating System**: Linux, macOS, or Windows
- **CPU**: 2 vCPU minimum (4+ vCPU recommended)
- **RAM**: 4 GB minimum (8 GB+ recommended for optimal performance)
- **Storage**: 20 GB of free disk space
- **Software**: Docker and Docker Compose
- **Network**: internet access for downloading images and reaching provider APIs
### LLM Provider Requirements

You need at least one of the following:

- **OpenAI API Key** - access to GPT-4, GPT-5, and o-series models
- **Anthropic API Key** - access to Claude 3.5 and Claude 4 models
- **Google AI API Key** - access to Gemini 2.0 and 2.5 models
- **AWS Bedrock Access** - AWS credentials with Bedrock permissions
- **Ollama Instance** - local installation for zero-cost inference

> **AWS Bedrock Rate Limits**: default rate limits are extremely restrictive (2 requests/minute for Claude Sonnet 4). Request quota increases before using Bedrock in production.
## Installation Methods

PentAGI offers three installation methods:

1. **Interactive Installer (Recommended)** - guided setup with a terminal UI
2. **Manual Installation** - direct Docker Compose setup
3. **Two-Node Architecture** - distributed setup for production environments

## Method 1: Interactive Installer (Recommended)

The interactive installer provides a streamlined setup experience with system checks, provider configuration, and security hardening.
### Quick Installation (Linux amd64)

**Create Installation Directory**

```bash
mkdir -p pentagi && cd pentagi
```

**Download Installer**

```bash
wget -O installer.zip https://pentagi.com/downloads/linux/amd64/installer-latest.zip
unzip installer.zip && chmod +x installer
```

**Run Interactive Installer**

The installer requires Docker API access. Choose one of these options:

**Option 1 (Recommended for production)**: run the installer with root privileges:

```bash
sudo ./installer
```

**Option 2 (Development environments)**: add your user to the `docker` group:

```bash
# Add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker

# Verify Docker access
docker ps

# Run installer
./installer
```

> **Warning**: Adding a user to the `docker` group grants root-equivalent privileges. Only do this for trusted users in controlled environments.
### What the Installer Does

The interactive installer guides you through:

1. **System Checks**
   - Verifies Docker and Docker Compose installation
   - Tests network connectivity
   - Validates system requirements
2. **Environment Setup**
   - Creates and configures the `.env` file
   - Sets optimal defaults for your system
   - Generates secure random credentials
3. **Provider Configuration**
   - OpenAI (GPT-4, GPT-5, o-series)
   - Anthropic (Claude 3.5, Claude 4)
   - Google AI (Gemini 2.0, 2.5)
   - AWS Bedrock (multi-provider access)
   - Ollama (local inference)
   - Custom providers (OpenRouter, DeepInfra, DeepSeek, Moonshot)
4. **Search Engine Setup**
   - DuckDuckGo (no API key required)
   - Google Custom Search
   - Tavily AI
   - Traversaal AI
   - Perplexity AI
   - SearXNG meta search
5. **Security Hardening**
   - Generates secure passwords
   - Creates SSL certificates
   - Sets the cookie signing salt
   - Configures database credentials
6. **Deployment**
   - Downloads Docker images
   - Starts PentAGI services
   - Validates the deployment
### For Production Environments

For production or security-sensitive deployments, use a two-node architecture where worker operations are isolated:

- **Main Node**: runs PentAGI core services, the UI, and databases
- **Worker Node**: executes penetration testing containers in isolation

Benefits:

- Isolated execution environment
- Separate network boundaries
- Docker-in-Docker with TLS authentication
- Support for out-of-band (OOB) attack techniques

See the **Worker Node Setup Guide** for detailed instructions on the distributed two-node architecture.
## Method 2: Manual Installation

For users who prefer direct control over the installation process.

**Create Working Directory**

```bash
mkdir pentagi && cd pentagi
```

**Download Environment File**

```bash
curl -o .env https://raw.githubusercontent.com/vxcontrol/pentagi/master/.env.example
```

**Download Provider Configuration Files**

```bash
curl -o example.custom.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/custom-openai.provider.yml
curl -o example.ollama.provider.yml https://raw.githubusercontent.com/vxcontrol/pentagi/master/examples/configs/ollama-llama318b.provider.yml
```
**Configure LLM Providers**

Edit `.env` and add at least one LLM provider:

```bash
# Required: at least one LLM provider
OPEN_AI_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GEMINI_API_KEY=your_gemini_key

# Optional: AWS Bedrock
BEDROCK_REGION=us-east-1
BEDROCK_ACCESS_KEY_ID=your_aws_access_key
BEDROCK_SECRET_ACCESS_KEY=your_aws_secret_key

# Optional: Ollama (local inference)
OLLAMA_SERVER_URL=http://localhost:11434
OLLAMA_SERVER_MODEL=llama3.1:8b-instruct-q8_0
```
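Before launching the stack, you can sanity-check the keys you plan to use against each provider's public model-listing endpoint. This is an optional shell check, not part of PentAGI itself; the endpoints and headers below are the providers' documented ones:

```bash
# A 200 status means the key is accepted; 401/403 means it is wrong or lacks access.
curl -s -o /dev/null -w "OpenAI: %{http_code}\n" \
  -H "Authorization: Bearer $OPEN_AI_KEY" \
  https://api.openai.com/v1/models || echo "OpenAI: request failed (check network/proxy)"

curl -s -o /dev/null -w "Anthropic: %{http_code}\n" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/models || echo "Anthropic: request failed (check network/proxy)"

curl -s -o /dev/null -w "Gemini: %{http_code}\n" \
  "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY" \
  || echo "Gemini: request failed (check network/proxy)"
```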
**Configure Search Engines (Optional)**

```bash
# DuckDuckGo (no API key required)
DUCKDUCKGO_ENABLED=true

# Google Custom Search
GOOGLE_API_KEY=your_google_key
GOOGLE_CX_KEY=your_google_cx

# Tavily AI
TAVILY_API_KEY=your_tavily_key

# Traversaal AI
TRAVERSAAL_API_KEY=your_traversaal_key

# Perplexity AI
PERPLEXITY_API_KEY=your_perplexity_key
PERPLEXITY_MODEL=sonar-pro
PERPLEXITY_CONTEXT_SIZE=medium

# SearXNG meta search
SEARXNG_URL=http://your-searxng-instance:8080
SEARXNG_CATEGORIES=general
```
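If you point PentAGI at a self-hosted SearXNG instance, you can verify it answers API queries first. Note that SearXNG only serves `format=json` when `json` is listed under `search.formats` in its `settings.yml`; the host below is the same placeholder used above:

```bash
# Expect a JSON body with a "results" array on success; an HTML error page
# usually means the json format is not enabled in settings.yml.
SEARXNG_URL="http://your-searxng-instance:8080"   # placeholder host from this guide
curl -s "$SEARXNG_URL/search?q=pentagi&format=json" | head -c 300
```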
**Configure Security Settings**

Update the security-related variables:

```bash
# Main security settings
COOKIE_SIGNING_SALT=your_random_salt_value
PUBLIC_URL=https://pentagi.example.com

# SSL certificates (optional)
SERVER_SSL_CRT=/path/to/your/cert.crt
SERVER_SSL_KEY=/path/to/your/cert.key

# Database credentials
PENTAGI_POSTGRES_USER=pentagi_user
PENTAGI_POSTGRES_PASSWORD=your_secure_password
PENTAGI_POSTGRES_DB=pentagidb

# Neo4j credentials (for Graphiti)
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password
```
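Rather than inventing the salt and passwords by hand, you can generate strong random values with `openssl` (a common approach, not a PentAGI requirement; the lengths below are arbitrary but reasonable):

```bash
# Generate random secrets suitable for the variables above.
COOKIE_SIGNING_SALT=$(openssl rand -base64 32)
PENTAGI_POSTGRES_PASSWORD=$(openssl rand -base64 24)
NEO4J_PASSWORD=$(openssl rand -base64 24)

# Print them once so you can paste them into .env
printf 'COOKIE_SIGNING_SALT=%s\n' "$COOKIE_SIGNING_SALT"
printf 'PENTAGI_POSTGRES_PASSWORD=%s\n' "$PENTAGI_POSTGRES_PASSWORD"
printf 'NEO4J_PASSWORD=%s\n' "$NEO4J_PASSWORD"
```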
**Configure Knowledge Graph (Optional)**

Enable Graphiti for semantic relationship tracking:

```bash
GRAPHITI_ENABLED=true
GRAPHITI_TIMEOUT=30
GRAPHITI_URL=http://graphiti:8000
GRAPHITI_MODEL_NAME=gpt-5-mini
NEO4J_USER=neo4j
NEO4J_DATABASE=neo4j
NEO4J_PASSWORD=your_secure_password
NEO4J_URI=bolt://neo4j:7687
```

**Remove Inline Comments (Optional)**

If you use `.env` as an `envFile` in VSCode or other IDEs, strip inline comments:

```bash
perl -i -pe 's/\s+#.*$//' .env
```

**Download Docker Compose File**

```bash
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose.yml
```
**Launch PentAGI**

Start the core services:

```bash
docker compose up -d
```

Or with all optional services:

```bash
# Download optional compose files
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-langfuse.yml
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-graphiti.yml
curl -O https://raw.githubusercontent.com/vxcontrol/pentagi/master/docker-compose-observability.yml

# Start all services
docker compose -f docker-compose.yml \
  -f docker-compose-langfuse.yml \
  -f docker-compose-graphiti.yml \
  -f docker-compose-observability.yml \
  up -d
```
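If you will run the full stack regularly, Docker Compose's standard `COMPOSE_FILE` variable saves retyping the `-f` flags. This is a generic Compose feature, shown here with the four files from this guide:

```bash
# Same stack, expressed once as a colon-separated file list (Linux/macOS).
export COMPOSE_FILE=docker-compose.yml:docker-compose-langfuse.yml:docker-compose-graphiti.yml:docker-compose-observability.yml

# Every subsequent compose command in this shell now sees all four files, e.g.:
#   docker compose up -d
#   docker compose logs -f pentagi
echo "$COMPOSE_FILE"
```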
**Access Web Interface**

Open your browser, navigate to `https://localhost:8443`, and log in with the default credentials.

> **Warning**: Change the default password immediately after first login!
## Advanced Configuration

### Ollama Local Inference

For zero-cost local inference using Ollama:

**Install Ollama**

Install Ollama on your host system:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

**Create Custom Model with Extended Context**

PentAGI requires models with larger context windows, so create a custom model.

For Llama 3.1 8B, create `Modelfile_llama318b`:

```
FROM llama3.1:8b-instruct-q8_0
PARAMETER num_ctx 110000
PARAMETER temperature 0.7
PARAMETER top_p 0.9
```

```bash
ollama create llama3.1:8b-instruct-tc -f Modelfile_llama318b
```

For Qwen3 32B, create `Modelfile_qwen332b`:

```
FROM qwen3:32b-fp16
PARAMETER num_ctx 110000
PARAMETER temperature 0.3
PARAMETER top_p 0.8
```

```bash
ollama create qwen3:32b-fp16-tc -f Modelfile_qwen332b
```

> **Note**: The `num_ctx` parameter can only be set at model creation time via the Modelfile. It cannot be changed after creation or overridden at runtime.
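After creating a custom model, you can confirm it exists and check its baked-in parameters. The first command uses Ollama's standard HTTP API; the second uses the `ollama` CLI (both are part of Ollama, not PentAGI):

```bash
# List models known to the local Ollama daemon; the custom tag should appear.
curl -s http://localhost:11434/api/tags || echo "Ollama is not running on localhost:11434"

# Show details of the custom model; num_ctx should read 110000.
ollama show llama3.1:8b-instruct-tc || echo "custom model not found"
```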
**Configure PentAGI**

Update the `.env` file:

```bash
OLLAMA_SERVER_URL=http://host.docker.internal:11434
OLLAMA_SERVER_MODEL=llama3.1:8b-instruct-tc
OLLAMA_SERVER_CONFIG_PATH=/opt/pentagi/conf/ollama-llama318b.provider.yml
OLLAMA_SERVER_PULL_MODELS_ENABLED=false
OLLAMA_SERVER_LOAD_MODELS_ENABLED=false
```
### Custom LLM Provider Configuration

For OpenRouter, DeepInfra, DeepSeek, Moonshot, or other OpenAI-compatible APIs:

```bash
# Custom provider configuration
LLM_SERVER_URL=https://openrouter.ai/api/v1
LLM_SERVER_KEY=your_api_key
LLM_SERVER_MODEL=                     # Leave empty
LLM_SERVER_CONFIG_PATH=/opt/pentagi/conf/openrouter.provider.yml
LLM_SERVER_PROVIDER=                  # For LiteLLM proxy
LLM_SERVER_LEGACY_REASONING=false     # Set true for OpenAI
LLM_SERVER_PRESERVE_REASONING=false   # Set true for Moonshot
```
### Docker Image Configuration

Control which Docker images are used for task execution:

```bash
# Default image for general tasks
DOCKER_DEFAULT_IMAGE=debian:latest

# Default image for penetration testing
DOCKER_DEFAULT_IMAGE_FOR_PENTEST=vxcontrol/kali-linux
```
### OAuth Integration

Enable GitHub and Google authentication:

```bash
# GitHub OAuth
OAUTH_GITHUB_CLIENT_ID=your_github_client_id
OAUTH_GITHUB_CLIENT_SECRET=your_github_client_secret

# Google OAuth
OAUTH_GOOGLE_CLIENT_ID=your_google_client_id
OAUTH_GOOGLE_CLIENT_SECRET=your_google_client_secret
```
### Proxy Configuration

For networks requiring proxy access:

```bash
# Global proxy for all LLM providers and external APIs
PROXY_URL=http://your-proxy:8080
```
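Before wiring the proxy into `.env`, you can confirm it can tunnel HTTPS to an external API. This uses curl's standard `-x` proxy flag with the placeholder host from the example above:

```bash
# Any real HTTP status (not 000) means the CONNECT tunnel through the proxy worked.
curl -s -o /dev/null -w "%{http_code}\n" \
  -x http://your-proxy:8080 https://api.openai.com/v1/models \
  || echo "proxy unreachable or tunnel refused"
```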
### SSL/TLS Certificate Configuration

For custom CA certificates:

```bash
# Path to the CA certificate bundle (inside the container)
EXTERNAL_SSL_CA_PATH=/opt/pentagi/ssl/ca-bundle.pem
EXTERNAL_SSL_INSECURE=false
```

Place your CA bundle in the host directory:

```bash
cp ca-bundle.pem ./pentagi-ssl/
```
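It is worth inspecting the bundle before mounting it, since a malformed or expired CA is a common source of TLS errors. A quick check with standard `openssl` tooling, assuming the host path used above:

```bash
# Count certificates in the bundle and show the first one's subject and expiry.
BUNDLE=./pentagi-ssl/ca-bundle.pem
if [ -f "$BUNDLE" ]; then
  grep -c 'BEGIN CERTIFICATE' "$BUNDLE"
  openssl x509 -in "$BUNDLE" -noout -subject -enddate
else
  echo "bundle not found at $BUNDLE"
fi
```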
### Service Ports

Default ports used by PentAGI services:

| Service | Port | Description |
| --- | --- | --- |
| PentAGI Web UI | 8443 | Main web interface (HTTPS) |
| PostgreSQL | 5432 | Vector database |
| Web Scraper | 9443 | Browser automation service |
| Langfuse UI | 4000 | LLM observability dashboard |
| Grafana | 3000 | System monitoring |
| Neo4j Browser | 7474 | Knowledge graph interface |

All ports can be customized via environment variables in `.env`.
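A quick way to spot port conflicts before starting the stack is to probe each default port locally. The sketch below uses bash's `/dev/tcp` pseudo-device so it needs no extra tools (bash-specific, not POSIX sh):

```bash
# Report which of the default ports are already taken on this host.
for p in 8443 5432 9443 4000 3000 7474; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p is already in use"
  else
    echo "port $p looks free"
  fi
done
```

If a port is taken, override the corresponding variable in `.env` rather than stopping the conflicting service.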
## Verifying Installation

**Check Service Status**

```bash
docker compose ps
```

All services should show status `Up`.

**View Logs**

```bash
docker compose logs -f pentagi
```

Look for startup messages indicating successful initialization.

**Test LLM Connection**

Run the LLM tester utility:

```bash
docker exec -it pentagi /opt/pentagi/bin/ctester -verbose
```

This validates your LLM provider configuration.

**Test Web Interface**

1. Navigate to `https://localhost:8443`
2. Log in with the default credentials
3. Create a new assistant
4. Send a test message
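For scripted checks (e.g., in a post-install health probe), you can test the UI endpoint without a browser. The `-k` flag accepts the self-signed certificate generated at install time; this assumes the default port from the Service Ports table:

```bash
# Expect 200, or a 3xx redirect to the login page, once the UI is up.
curl -ks -o /dev/null -w "%{http_code}\n" https://localhost:8443/ \
  || echo "UI not reachable yet"
```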
## Troubleshooting

**Network creation errors**

If you see network creation errors:

```bash
# Create networks manually
docker network create pentagi-network
docker network create observability-network
docker network create langfuse-network

# Then restart services
docker compose up -d
```

**Permission denied on docker.sock**

For production, run Compose with elevated privileges:

```bash
sudo docker compose up -d
```

For development, add your user to the `docker` group:

```bash
sudo usermod -aG docker $USER
newgrp docker
```

**Database connection errors**

Ensure PostgreSQL has fully started:

```bash
docker compose logs pgvector
docker compose restart pentagi
```
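You can also ask PostgreSQL directly whether it accepts connections, which separates "database not ready" from "PentAGI misconfigured". The service name `pgvector` and the user are taken from this guide's examples; adjust them if you changed the defaults:

```bash
# pg_isready ships in the postgres image; exit code 0 means accepting connections.
docker compose exec pgvector pg_isready -U pentagi_user \
  || echo "postgres is not accepting connections yet"
```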
**LLM provider errors**

Verify your API keys are correct:

```bash
docker exec -it pentagi /opt/pentagi/bin/ctester -verbose
```

Check for rate limits, invalid keys, or network issues.

**Custom CA certificate errors**

For custom CA certificates:

1. Place the bundle at `./pentagi-ssl/ca-bundle.pem`
2. Set `EXTERNAL_SSL_CA_PATH=/opt/pentagi/ssl/ca-bundle.pem`
3. Restart: `docker compose restart pentagi`
## Next Steps

- **Configuration Guide** - detailed configuration options for all services
- **LLM Provider Setup** - configure and optimize LLM providers
- **Testing Utilities** - validate and test your configuration
- **Architecture** - understanding PentAGI's architecture

**Security Best Practices**:

- Change all default passwords
- Use strong random values for all secrets
- Configure SSL certificates for production
- Never expose services directly to the internet
- Review and restrict Docker permissions
- Keep Docker images updated