Environment Variables
- Base URL for the custom LLM API endpoint (e.g., https://api.openrouter.ai/api/v1).
- API key for the custom LLM provider.
- Default model to use (can be overridden in provider config).
- Path to the YAML configuration file for agent-specific models.
- `LLM_SERVER_PROVIDER`: Provider name prefix for model names (e.g., openrouter, deepseek for LiteLLM proxy).
- `LLM_SERVER_LEGACY_REASONING`: Controls reasoning format in API requests. Set to true for the legacy string-based reasoning_effort parameter.
- `LLM_SERVER_PRESERVE_REASONING`: Preserves reasoning content in multi-turn conversations. Required by some providers (e.g., Moonshot).
- Optional HTTP proxy URL for network isolation.
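These variables are typically provided through the container environment. A hedged docker-compose sketch: only `LLM_SERVER_PROVIDER`, `LLM_SERVER_LEGACY_REASONING`, and `LLM_SERVER_PRESERVE_REASONING` are named elsewhere on this page, so the remaining variable names are assumptions that merely follow the same prefix.

```yaml
# docker-compose.yml excerpt. Variable names other than the three
# mentioned on this page are assumed, not verified; check your .env template.
services:
  pentagi:
    environment:
      - LLM_SERVER_URL=https://api.openrouter.ai/api/v1   # assumed name
      - LLM_SERVER_KEY=sk-or-...                          # assumed name
      - LLM_SERVER_MODEL=openai/gpt-4o                    # assumed name
      - LLM_SERVER_PROVIDER=openrouter
      - LLM_SERVER_LEGACY_REASONING=false
      - LLM_SERVER_PRESERVE_REASONING=false
```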
Configuration Examples
OpenRouter
DeepSeek
DeepInfra
Moonshot (with LiteLLM)
Custom vLLM Server
YAML Configuration Structure
The provider configuration file uses YAML format with the following structure:

Supported Agent Types
| Agent Type | Purpose |
|---|---|
| `simple` | Simple queries and basic analysis |
| `simple_json` | Structured data extraction (JSON output) |
| `primary_agent` | Core penetration testing workflows |
| `assistant` | Multi-step security workflows |
| `generator` | Report and exploit generation |
| `refiner` | Result refinement and analysis |
| `adviser` | Strategic recommendations |
| `reflector` | Analysis review and critique |
| `searcher` | Information gathering |
| `enricher` | Data enrichment |
| `coder` | Exploit development |
| `installer` | Tool installation and setup |
| `pentester` | Dedicated penetration testing |
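As a concrete illustration, a provider file keyed by agent type might look like the sketch below. Field names follow the parameter descriptions on this page but are assumptions; the built-in files under /opt/pentagi/conf/ show the authoritative schema.

```yaml
# Hypothetical provider file: each supported agent type is a top-level key.
simple:
  model: deepseek-chat
  temperature: 0.2
  max_tokens: 4096
simple_json:
  model: deepseek-chat
  max_tokens: 4096
primary_agent:
  model: deepseek-reasoner
  temperature: 0.7
  max_tokens: 8192
```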
Configuration Parameters
- Model identifier for the LLM provider.
- Temperature: controls randomness (0.0-2.0). Lower values are more deterministic.
- Nucleus sampling parameter (0.0-1.0). Controls diversity of output.
- Top-k sampling parameter. Limits the candidate vocabulary at each step.
- Number of completions to generate.
- Maximum number of tokens to generate.
- Enable JSON output mode (for the `simple_json` agent type).
- Reasoning configuration for models that support extended thinking:
  - `effort`: Reasoning effort level (low, medium, high)
  - `max_tokens`: Maximum tokens for reasoning (some providers)
- Pricing information per million tokens:
  - `input`: Cost per million input tokens (USD)
  - `output`: Cost per million output tokens (USD)
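Putting the parameters together, one agent entry might look like this. Key names are assumed to match the descriptions above, and the prices are purely illustrative; compare against the built-in configuration files for the exact schema.

```yaml
# One agent entry showing every documented parameter (key names assumed).
primary_agent:
  model: deepseek-reasoner   # provider's model identifier
  temperature: 0.7           # randomness, 0.0-2.0
  top_p: 0.95                # nucleus sampling, 0.0-1.0
  top_k: 40                  # limits candidate vocabulary per step
  n: 1                       # number of completions
  max_tokens: 8192           # generation cap
  reasoning:
    effort: high             # low / medium / high
    max_tokens: 4096         # reasoning budget (some providers)
  price:
    input: 0.55              # USD per million input tokens (illustrative)
    output: 2.19             # USD per million output tokens (illustrative)
```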
Example Configurations
OpenAI-Compatible (Custom)
DeepSeek
Moonshot (with Reasoning)
Ollama (Local)
LiteLLM Proxy Integration
The `LLM_SERVER_PROVIDER` setting is particularly useful when using a LiteLLM proxy:
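On the LiteLLM side, the proxy maps prefixed model names to upstream providers using its documented config.yaml format. A minimal sketch (model IDs are illustrative):

```yaml
# LiteLLM proxy config.yaml: routes prefixed model names to providers.
model_list:
  - model_name: openrouter/anthropic/claude-3.5-sonnet
    litellm_params:
      model: openrouter/anthropic/claude-3.5-sonnet
      api_key: os.environ/OPENROUTER_API_KEY
```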
Reasoning Format Settings
Legacy Reasoning Format
Some providers use a string-based reasoning effort parameter.

Modern Reasoning Format (Default)
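The difference between the two formats lies in how reasoning settings are serialized into the request body. A hedged illustration, shown as YAML for readability (actual requests are JSON, and exact field support varies by provider):

```yaml
# Legacy format (LLM_SERVER_LEGACY_REASONING=true): flat string parameter.
model: deepseek-reasoner
reasoning_effort: high

# Modern structured format (the default):
# model: deepseek-reasoner
# reasoning:
#   effort: high
#   max_tokens: 4096
```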
Preserving Reasoning Content
Required by providers like Moonshot, which return errors when reasoning content is missing.

Built-in Provider Configurations
PentAGI includes pre-built configurations in the Docker image at /opt/pentagi/conf/:

- `custom-openai.provider.yml` - OpenAI-compatible providers
- `deepseek.provider.yml` - DeepSeek API
- `deepinfra.provider.yml` - DeepInfra platform
- `moonshot.provider.yml` - Moonshot (Kimi) API
- `openrouter.provider.yml` - OpenRouter aggregator
- `ollama-llama318b.provider.yml` - Ollama with Llama 3.1 8B
- `ollama-llama318b-instruct.provider.yml` - Ollama Llama 3.1 8B Instruct
- `ollama-qwen332b-fp16-tc.provider.yml` - Ollama Qwen3 32B FP16
- `ollama-qwq32b-fp16-tc.provider.yml` - Ollama QwQ 32B FP16
- `vllm-qwen332b-fp16.provider.yml` - vLLM server with Qwen3
Creating Custom Configurations
- Create a YAML file with agent configurations
- Mount the configuration in docker-compose.yml
- Set the environment variable to point at the mounted file
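The mount and environment steps might look like this in practice. The service name, paths, and the config-path variable name are illustrative assumptions following the `LLM_SERVER_` prefix used elsewhere on this page.

```yaml
# docker-compose.yml excerpt: mount the custom config and point PentAGI at it.
services:
  pentagi:
    volumes:
      - ./my-provider.yml:/opt/pentagi/conf/my-provider.yml:ro
    environment:
      # Variable name assumed, not verified.
      - LLM_SERVER_CONFIG_PATH=/opt/pentagi/conf/my-provider.yml
```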
Validation
PentAGI validates configuration files on startup:

- Missing required fields generate warnings
- Invalid values use safe defaults
- Unknown agent types are ignored
- Malformed YAML prevents startup
Troubleshooting
Configuration Not Loading
- Verify file path is correct and accessible
- Check YAML syntax: `yamllint your-config.yml`
- Ensure the file is mounted in the Docker container
- Review startup logs for parsing errors
Model Not Found
- Verify model name matches provider’s model ID
- Check API endpoint supports the model
- Ensure API key has access to the model
- Test with direct API call:
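A minimal connectivity check against an OpenAI-compatible endpoint can confirm the model is visible to your key. The URL and key below are placeholders, and the `LLM_API_*` variable names are used only for this example.

```shell
# Placeholder endpoint and key; substitute your provider's values.
export LLM_API_URL="https://api.openrouter.ai/api/v1"
export LLM_API_KEY="sk-..."
# List the models this key can access on an OpenAI-compatible endpoint.
curl -sf "$LLM_API_URL/models" \
  -H "Authorization: Bearer $LLM_API_KEY" || echo "request failed: check URL and key"
```

If the model ID you configured does not appear in the response, the provider either does not serve it or the key lacks access to it.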
Reasoning Errors
If you see reasoning-related errors:

- Try toggling `LLM_SERVER_LEGACY_REASONING`
- Enable `LLM_SERVER_PRESERVE_REASONING` for providers like Moonshot
- Check provider documentation for reasoning format
- Remove reasoning config if provider doesn’t support it
Pricing Issues
Pricing in the config is informational only:

- Used for cost estimation in the PentAGI UI
- Does not affect actual API billing
- Update values to match current provider rates
- Omit the `price` section if not tracking costs
Best Practices
- Start with built-in configs - Use pre-built configurations as templates
- Test incrementally - Verify each agent type works before adding more
- Document changes - Add comments to YAML explaining customizations
- Version control - Track configuration changes in git
- Monitor costs - Keep pricing information updated for accurate estimates
- Use appropriate models - Match model capabilities to agent requirements
- Validate regularly - Test configurations after provider API updates