Introduction
PentAGI uses environment variables for all configuration settings. The `.env.example` file in the repository contains all available configuration options with inline documentation. This page provides an overview of the configuration structure and key settings.
Configuration File Structure
PentAGI’s configuration is organized into several logical sections:

- Core Settings: Installation ID, license keys, and server configuration
- LLM Providers: OpenAI, Anthropic, Gemini, Bedrock, Ollama, and custom providers
- Search Engines: DuckDuckGo, Google, Tavily, Traversaal, Perplexity, and Searxng
- Security: SSL/TLS, authentication, OAuth, and secrets management
- Observability: Langfuse, Grafana, OpenTelemetry, and monitoring
Setting Up Configuration
Option 1: Using the Installer (Recommended)
The interactive installer automatically creates and configures your `.env` file:

- Verify system requirements
- Create a `.env` file with optimal defaults
- Configure LLM providers and search engines
- Generate secure credentials
- Set up SSL certificates
Option 2: Manual Configuration

- Copy the example file
- Download provider configuration examples
- Edit the `.env` file to add your API keys and configure settings
- Remove inline comments if using the file as an envFile in IDEs
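The manual steps above can be sketched as shell commands. This is a minimal illustration, assuming you are in the repository root; the comment-stripping pattern is an assumption, not PentAGI's documented command:

```shell
# A minimal sketch of the manual setup, assuming the repository root as cwd.
cp .env.example .env

# Strip inline comments ('#' to end of line) so IDEs can load .env as an envFile.
# Caution: this also truncates values that legitimately contain '#'.
sed -i 's/[[:space:]]*#.*$//' .env
```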
Core Settings
These are the foundational configuration options for PentAGI:

- Installation identifier for PentAGI Cloud API communication
- License key for PentAGI Cloud API access
- Allow PentAGI to interact with users during task execution
- Global HTTP proxy URL for all LLM providers and external systems (for network isolation). Example: `http://proxy.example.com:8080`

Server Configuration
Configure how the PentAGI server listens and serves requests:

Docker Compose Settings

- IP address for PentAGI to bind to on the host machine
- Port for PentAGI to listen on (host machine)
- Directory path or volume name for persistent data storage
- Directory path or volume name for SSL certificates
- Path to Docker socket on the host machine
- Path to Docker TLS certificates on host for remote Docker connections
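The host-side settings above typically surface in a Docker Compose file as port and volume mappings. The fragment below is a hypothetical illustration of that mapping; the service name, ports, volume names, and paths are placeholders, not PentAGI's actual compose file:

```yaml
services:
  pentagi:
    ports:
      # "bind IP : host port : container port" (placeholder values)
      - "127.0.0.1:8443:8443"
    volumes:
      # Persistent data and SSL certificates (directory path or named volume)
      - pentagi-data:/data
      - pentagi-ssl:/ssl
      # Docker socket mounted from the host
      - /var/run/docker.sock:/var/run/docker.sock
```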
Internal Container Settings
- Port the server listens on inside the container
- Host address the server binds to inside the container
- Enable HTTPS for the web server
- Path to custom SSL certificate file inside the container
- Path to custom SSL private key file inside the container
- Directory for static files inside the container
- URL path prefix for serving static files
Database Configuration
PentAGI uses PostgreSQL with pgvector for persistent storage:

- PostgreSQL username
- PostgreSQL password
- PostgreSQL database name
- IP address for PostgreSQL to bind to on the host
- PostgreSQL port on the host machine
Scraper Configuration
The scraper service provides isolated browser automation:

- Public URL for the scraper service (for external/public targets)
- Private URL for the scraper service (for internal/local targets with authentication)
- Username for scraper basic authentication
- Password for scraper basic authentication
- Maximum number of concurrent browser sessions
- IP address for the scraper to bind to on the host
- Scraper HTTPS port on the host machine
Docker Execution Settings
Configure how PentAGI manages Docker containers for task execution:

Docker Client Configuration

- Docker host connection (socket or TCP). Examples: `unix:///var/run/docker.sock` (local socket), `tcp://remote-host:2376` (remote Docker over TLS)
- Enable TLS verification for remote Docker connections
- Path to Docker TLS certificates inside the container
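As a hedged illustration, the conventional Docker client variables for the two connection modes above look like this in an env file. The values come from the examples in this section; whether PentAGI uses these exact variable names should be confirmed against `.env.example`, and the certificate path is a placeholder:

```shell
# Local socket (default):
DOCKER_HOST=unix:///var/run/docker.sock

# Remote Docker over TLS:
# DOCKER_HOST=tcp://remote-host:2376
# DOCKER_TLS_VERIFY=1
# DOCKER_CERT_PATH=/path/to/certs   # placeholder path
```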
Container Execution Settings
- Enable Docker socket access inside containers
- Enable NET_ADMIN capability for network operations
- Path to Docker socket on the host (for mounting into containers)
- Default Docker network for created containers
- Default working directory inside containers
- Public IP address of the host machine for port bindings
- Default Docker image for general tasks
- Default Docker image specifically for penetration testing tasks
Assistant Configuration
Control default behavior for AI assistants:

- Default value for agent delegation when creating new assistants:
  - `false`: new assistants start without agent delegation
  - `true`: new assistants start with agent delegation enabled
- Users can override this setting in the UI when creating or editing assistants
Embedding Configuration
Configure vector embeddings for semantic search:

- URL for the embedding service API endpoint
- API key for embedding service authentication
- Model name to use for generating embeddings
- Embedding provider identifier (e.g., `openai`, `cohere`)
- Number of texts to embed in a single batch request
- Remove newline characters from text before embedding
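The last two settings above (batch size and newline stripping) interact in a simple way that the sketch below illustrates. Function and parameter names here are hypothetical placeholders, not PentAGI's actual API:

```python
# Hypothetical illustration of batch size and newline stripping in an
# embedding pipeline; names are placeholders, not PentAGI's API.

def prepare_batches(texts, batch_size=16, strip_newlines=True):
    """Split texts into batches of at most `batch_size`, optionally
    replacing newlines with spaces first (some embedding models perform
    better without embedded newlines)."""
    if strip_newlines:
        texts = [t.replace("\n", " ") for t in texts]
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

batches = prepare_batches(["first\nchunk", "second chunk", "third\nchunk"], batch_size=2)
# batches == [["first chunk", "second chunk"], ["third chunk"]]
```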
Summarizer Configuration
Control context window management and conversation summarization:

Global Summarizer Settings

- Keep all messages in the last section intact without summarization
- Use question-answer (QA) pair summarization strategy
- Summarize human messages within QA pairs
- Maximum byte size for the last section (50KB)
- Maximum byte size for a single body pair (16KB)
- Maximum number of QA pair sections to preserve
- Maximum byte size for QA pair sections (64KB)
- Number of recent QA sections to keep without summarization
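The byte-size limits and "keep recent sections" settings above describe a common trimming policy: preserve the newest sections verbatim and summarize older ones until the context fits a byte budget. The sketch below is a hypothetical, simplified illustration of that policy (it treats summarized sections as freed and ignores the size of the summaries themselves); names and thresholds are placeholders, not PentAGI's implementation:

```python
# Hypothetical policy sketch; names and thresholds are placeholders.

def select_sections_to_summarize(sections, keep_last=3, max_total_bytes=64 * 1024):
    """Return indices of QA sections to summarize, oldest first.

    The most recent `keep_last` sections are always kept verbatim; older
    sections are marked for summarization until the running total of the
    remaining context fits within `max_total_bytes`.
    """
    total = sum(len(s.encode("utf-8")) for s in sections)
    candidates = sections[:-keep_last] if keep_last else sections
    to_summarize = []
    for i, section in enumerate(candidates):
        if total <= max_total_bytes:
            break
        to_summarize.append(i)
        total -= len(section.encode("utf-8"))
    return to_summarize
```

For example, five 100-byte sections with `keep_last=2` and a 250-byte budget would mark the three oldest sections for summarization.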
Assistant Summarizer Settings
- Preserve all messages in the assistant’s last section
- Maximum byte size for the assistant’s last section (75KB)
- Maximum byte size for a single body pair in assistant context (16KB)
- Maximum QA sections to preserve in assistant context
- Maximum byte size for the assistant’s QA sections (75KB)
- Number of recent QA sections to preserve without summarization
Graphiti Knowledge Graph
Optional knowledge graph integration for semantic understanding:

- Enable Graphiti knowledge graph integration
- Timeout in seconds for Graphiti API requests
- Graphiti service URL
- LLM model name for entity extraction (e.g., `gpt-5-mini`)
- Neo4j database username (used by Graphiti)
- Neo4j database name
- Neo4j database password
- Neo4j connection URI
Next Steps
- Configure LLM Providers: set up OpenAI, Anthropic, Gemini, Bedrock, or Ollama
- Configure Search Engines: enable DuckDuckGo, Google, Tavily, and more
- Security Settings: configure SSL, authentication, and secrets
- Set Up Observability: enable Langfuse, Grafana, and OpenTelemetry