Supported Providers
Pensar Apex integrates with the following AI providers:
- Anthropic - Recommended provider with best pentesting performance
- OpenAI - GPT-4 and other OpenAI models
- AWS Bedrock - Enterprise-grade AI on AWS infrastructure
- OpenRouter - Unified API for multiple model providers
- vLLM (Local) - Self-hosted open-source models
Configuration Methods
You can configure AI providers in two ways:
- Environment Variables (recommended for CLI usage)
- TUI Configuration (interactive setup)
Environment variables take precedence over TUI configuration. This allows you to override settings per-session.
Anthropic (Claude)
Recommended for best penetration testing performance.
Setup
Get API Key
Sign up at console.anthropic.com and create an API key.
Recommended Models
- claude-sonnet-4-5 (default) - Best balance of performance and cost
- claude-opus-4 - Maximum reasoning capability for complex targets
- claude-sonnet-3-5 - Previous generation, still highly capable
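A minimal environment-variable setup might look like this (the key value is a placeholder; ANTHROPIC_API_KEY is the variable named in the provider priority section):

```shell
# Placeholder value: substitute the key you created at console.anthropic.com
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

# Confirm the variable is visible to child processes such as apex
echo "Anthropic key set: ${ANTHROPIC_API_KEY:+yes}"
```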
OpenAI
Supports GPT-4 and other OpenAI models.
Setup
Get API Key
Create an API key at platform.openai.com/api-keys.
Available Models
- gpt-4 - Most capable GPT model
- gpt-4-turbo - Faster, cost-effective alternative
- gpt-3.5-turbo - Budget option (not recommended for complex pentesting)
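The OpenAI setup follows the same pattern (the key value is a placeholder):

```shell
# Placeholder value: substitute the key from platform.openai.com/api-keys
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is exported
echo "OpenAI key set: ${OPENAI_API_KEY:+yes}"
```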
AWS Bedrock
Run models on AWS infrastructure with enterprise-grade security and compliance.
Authentication Methods
Bedrock supports two authentication modes:
- Bearer Token
- IAM Credentials
Bearer tokens provide simple token-based auth; IAM credentials use the standard AWS credential chain.
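A sketch of both modes, assuming BEDROCK_API_KEY (the variable named in the provider priority section) for bearer-token auth and the standard AWS environment variables for IAM credentials; all values are placeholders:

```shell
# Bearer token mode (placeholder value)
export BEDROCK_API_KEY="your-bedrock-token"

# IAM credential mode: the standard AWS SDK environment variables
export AWS_ACCESS_KEY_ID="your-access-key-id"      # placeholder
export AWS_SECRET_ACCESS_KEY="your-secret-key"     # placeholder
export AWS_REGION="us-east-1"  # pick a region where Bedrock is available
```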
Supported Models
Bedrock provides access to Claude, Llama, and other models. Model IDs on Bedrock may differ from direct provider APIs; check the AWS Bedrock documentation for available models in your region.
OpenRouter
Unified API for accessing models from multiple providers (Anthropic, OpenAI, Google, Meta, etc.).
Setup
Model Format
OpenRouter uses the provider/model-id format:
- anthropic/claude-sonnet-4-5
- openai/gpt-4
- google/gemini-pro
- meta-llama/llama-3-70b
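Putting the two together, an OpenRouter setup might look like this (the key value is a placeholder):

```shell
# Placeholder value: substitute the key from openrouter.ai
export OPENROUTER_API_KEY="sk-or-your-key-here"

# Models are addressed in provider/model-id form
MODEL="anthropic/claude-sonnet-4-5"
echo "Requesting $MODEL via OpenRouter"
```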
vLLM (Local Models)
Run open-source models locally with vLLM for complete data privacy.
Setup
See the vLLM Setup Guide for detailed instructions.
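As a sketch, assuming vLLM's OpenAI-compatible server on its default port and the LOCAL_MODEL_URL variable named in the provider priority section:

```shell
# Start a vLLM OpenAI-compatible server first, e.g.:
#   vllm serve meta-llama/Meta-Llama-3-8B-Instruct --port 8000
# (command shape per vLLM's own docs; see the vLLM Setup Guide for details)

# Then point Apex at the local endpoint
export LOCAL_MODEL_URL="http://localhost:8000/v1"
echo "Local model endpoint: $LOCAL_MODEL_URL"
```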
Provider Priority
When multiple providers are configured, Pensar Apex checks for API keys in this order:
- ANTHROPIC_API_KEY (Anthropic)
- OPENAI_API_KEY (OpenAI)
- OPENROUTER_API_KEY (OpenRouter)
- BEDROCK_API_KEY or AWS credentials (Bedrock)
- LOCAL_MODEL_URL (vLLM)
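A quick way to see which provider would win, mirroring the documented order (a sketch of the lookup, not Apex's actual detection code):

```shell
# Walk the documented lookup order and report the first variable that is set
SELECTED="none"
for var in ANTHROPIC_API_KEY OPENAI_API_KEY OPENROUTER_API_KEY BEDROCK_API_KEY LOCAL_MODEL_URL; do
  if [ -n "$(eval "printf %s \"\${$var}\"")" ]; then
    SELECTED="$var"
    break
  fi
done
echo "Provider variable Apex would use: $SELECTED"
```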
Configuration File
AI provider settings are stored in ~/.pensar/config.json.
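The on-disk schema is not documented in this section, so the snippet below is a hypothetical illustration only; every field name is a placeholder, not the actual Apex schema:

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-5",
  "apiKey": "sk-ant-your-key-here"
}
```

Per the precedence rule above, environment variables override whatever is stored in this file.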
Troubleshooting
'No AI provider configured' error
Ensure at least one AI provider API key is set:
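For example, setting any one of the provider variables resolves the error (values are placeholders):

```shell
# Any one of these enables a provider
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
# or: export OPENAI_API_KEY="sk-your-key-here"
# or: export OPENROUTER_API_KEY="sk-or-your-key-here"

# Confirm the key is visible to the apex process
echo "${ANTHROPIC_API_KEY:+Anthropic key detected}"
```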
Rate limit errors
Pensar Apex automatically retries on rate limits. If you hit persistent rate limits:
- Anthropic: Upgrade your plan tier at console.anthropic.com
- OpenAI: Check usage limits at platform.openai.com/account/limits
- Bedrock: Request quota increases in AWS Service Quotas
vLLM connection failed
Verify the vLLM server is running and reachable at its configured URL. If it is not responding, ensure vLLM is started and accessible on the expected host and port.
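A quick health check, assuming the default local endpoint on port 8000 (adjust the URL to match your LOCAL_MODEL_URL):

```shell
# Query the OpenAI-compatible models endpoint; "000" means no connection
if command -v curl >/dev/null 2>&1; then
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8000/v1/models" || true)
else
  STATUS="curl-missing"
fi
echo "vLLM /v1/models check: $STATUS"
```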
AWS Bedrock authentication failed
For IAM credentials, verify that your AWS credentials are valid (for example with aws sts get-caller-identity) and that your IAM user/role has the bedrock:InvokeModel permission.
Next Steps
- Model Selection - Choose the right model for your testing needs
- Environment Variables - Complete reference of all configuration options
- vLLM Setup - Detailed guide to self-hosting models
- Run First Pentest - Start testing with your configured provider

