Pensar Apex supports multiple AI providers. Anthropic models deliver the best pentesting performance and are the recommended choice.

Supported Providers

Pensar Apex integrates with the following AI providers:

Anthropic

Recommended provider with best pentesting performance

OpenAI

GPT-4 and other OpenAI models

AWS Bedrock

Enterprise-grade AI on AWS infrastructure

OpenRouter

Unified API for multiple model providers

vLLM (Local)

Self-hosted open-source models

Configuration Methods

You can configure AI providers in two ways:
  1. Environment Variables (recommended for CLI usage)
  2. TUI Configuration (interactive setup)
Environment variables take precedence over TUI configuration. This allows you to override settings per-session.
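For example, because environment variables win, you can point a single shell session at a different key without touching the saved TUI configuration (the key value below is a placeholder):

```shell
# Per-session override: takes effect only in this shell
export ANTHROPIC_API_KEY="sk-ant-temporary-key"

# Commands run in this shell now use the override
pensar doctor
```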

Anthropic (Claude)

Recommended for best penetration testing performance.

Setup

1. Get API Key
   Sign up at console.anthropic.com and create an API key.

2. Set Environment Variable
   export ANTHROPIC_API_KEY="sk-ant-..."

3. Verify Configuration
   pensar doctor
   You should see:
   ✓ Anthropic API key configured

Available Models

  • claude-sonnet-4-5 (default) - Best balance of performance and cost
  • claude-opus-4 - Maximum reasoning capability for complex targets
  • claude-sonnet-3-5 - Previous generation, still highly capable
For most pentesting scenarios, claude-sonnet-4-5 provides excellent results at reasonable cost.
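To use a non-default model for a single run, pass it with --model, as elsewhere in this guide (the target URL is a placeholder):

```shell
# Prefer maximum reasoning capability for a complex target
pensar pentest --target https://example.com --model claude-opus-4
```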

OpenAI

Supports GPT-4 and other OpenAI models.

Setup

1. Get API Key
   Create an API key at platform.openai.com/api-keys.

2. Set Environment Variable
   export OPENAI_API_KEY="sk-..."

3. Select Model
   When launching a pentest, specify the model:
   pensar pentest --target https://example.com --model gpt-4

Available Models

  • gpt-4 - Most capable GPT model
  • gpt-4-turbo - Faster, cost-effective alternative
  • gpt-3.5-turbo - Budget option (not recommended for complex pentesting)
GPT models may not perform as well as Claude for security testing tasks. Use Anthropic when possible.

AWS Bedrock

Run models on AWS infrastructure with enterprise-grade security and compliance.

Authentication Methods

Bedrock supports two authentication modes:

Simple token-based auth:
export BEDROCK_API_KEY="your-bearer-token"
export AWS_REGION="us-east-1"

Standard AWS credentials (IAM), via the usual AWS environment variables or credential chain:
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
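Assuming the AWS CLI v2 is installed and credentials are configured, you can check which model IDs your region exposes before pointing Pensar Apex at them:

```shell
# List foundation model IDs available in the chosen region
aws bedrock list-foundation-models \
  --region us-east-1 \
  --query 'modelSummaries[].modelId' \
  --output text
```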

Supported Models

Bedrock provides access to Claude, Llama, and other models:
# Use Claude on Bedrock
pensar pentest --target https://example.com --model anthropic.claude-sonnet-4-5

# Use Llama on Bedrock
pensar pentest --target https://example.com --model meta.llama3-70b
Model IDs on Bedrock may differ from direct provider APIs. Check the AWS Bedrock documentation for available models in your region.

OpenRouter

Unified API for accessing models from multiple providers (Anthropic, OpenAI, Google, Meta, etc.).

Setup

1. Get API Key
   Sign up at openrouter.ai and create an API key.

2. Set Environment Variable
   export OPENROUTER_API_KEY="sk-or-..."

3. Select Model
   pensar pentest --target https://example.com --model anthropic/claude-sonnet-4-5

Model Format

OpenRouter uses provider/model-id format:
  • anthropic/claude-sonnet-4-5
  • openai/gpt-4
  • google/gemini-pro
  • meta-llama/llama-3-70b
See openrouter.ai/models for all available models.
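The convention splits on the first slash; the helper below is a hypothetical illustration of parsing that format, not part of Pensar Apex:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split an OpenRouter model ID into (provider, model) parts."""
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected provider/model-id, got {model_id!r}")
    return provider, model

print(split_model_id("anthropic/claude-sonnet-4-5"))  # ('anthropic', 'claude-sonnet-4-5')
```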

vLLM (Local Models)

Run open-source models locally with vLLM for complete data privacy.

Setup

1. Start vLLM Server
   # Install vLLM
   pip install vllm

   # Start server with a model
   vllm serve meta-llama/Llama-3.1-70B-Instruct --port 8000

2. Configure Pensar Apex
   Set the local model endpoint:
   export LOCAL_MODEL_URL="http://localhost:8000/v1"

3. Specify Model Name
   In the TUI, go to the Models screen and enter the model name in the “Custom local model (vLLM)” input. Or via CLI:
   pensar pentest --target https://example.com --model meta-llama/Llama-3.1-70B-Instruct
Local models may not perform as well as Claude for pentesting. Use for offline scenarios or when data privacy is critical.
See the vLLM Setup Guide for detailed instructions.
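Because vLLM serves an OpenAI-compatible API, the LOCAL_MODEL_URL endpoint accepts standard chat-completions requests. A minimal sketch of building such a request (build_chat_request is illustrative, not a Pensar Apex API; the payload shape follows the OpenAI chat format):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000/v1",
    "meta-llama/Llama-3.1-70B-Instruct",
    "Summarize open ports on the target.",
)
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```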

Provider Priority

When multiple providers are configured, Pensar Apex checks for API keys in this order:
  1. ANTHROPIC_API_KEY (Anthropic)
  2. OPENAI_API_KEY (OpenAI)
  3. OPENROUTER_API_KEY (OpenRouter)
  4. BEDROCK_API_KEY or AWS credentials (Bedrock)
  5. LOCAL_MODEL_URL (vLLM)
You can override the default by explicitly specifying --model on the command line.
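The lookup amounts to a first-match scan over the documented order; a sketch (resolve_provider is illustrative, and the AWS-credential fallback for Bedrock is omitted for brevity):

```python
import os

# Documented lookup order: the first configured provider wins
PROVIDER_VARS = [
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
    ("bedrock", "BEDROCK_API_KEY"),
    ("vllm", "LOCAL_MODEL_URL"),
]

def resolve_provider(env=os.environ):
    """Return the first provider whose environment variable is set, else None."""
    for provider, var in PROVIDER_VARS:
        if env.get(var):
            return provider
    return None

print(resolve_provider({"OPENAI_API_KEY": "sk-...", "BEDROCK_API_KEY": "token"}))  # openai
```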

Configuration File

AI provider settings are stored in ~/.pensar/config.json:
{
  "anthropicAPIKey": "sk-ant-...",
  "openAiAPIKey": null,
  "openRouterAPIKey": null,
  "bedrockAPIKey": null,
  "localModelUrl": null,
  "localModelName": null,
  "selectedModelId": "claude-sonnet-4-5",
  "responsibleUseAccepted": true
}
Environment variables always take precedence over config file values.
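The precedence rule is effectively an overlay of environment variables onto the file's values; a sketch assuming the variable-to-key mapping implied above (effective_config is illustrative, not the actual loader):

```python
import os

# Map environment variables onto their config.json keys
ENV_OVERRIDES = {
    "ANTHROPIC_API_KEY": "anthropicAPIKey",
    "OPENAI_API_KEY": "openAiAPIKey",
    "OPENROUTER_API_KEY": "openRouterAPIKey",
    "BEDROCK_API_KEY": "bedrockAPIKey",
    "LOCAL_MODEL_URL": "localModelUrl",
}

def effective_config(config: dict, env=os.environ) -> dict:
    """Overlay set environment variables onto values loaded from config.json."""
    merged = dict(config)
    for var, key in ENV_OVERRIDES.items():
        if env.get(var):
            merged[key] = env[var]
    return merged

file_config = {"anthropicAPIKey": "sk-ant-from-file", "selectedModelId": "claude-sonnet-4-5"}
print(effective_config(file_config, {"ANTHROPIC_API_KEY": "sk-ant-from-env"}))
```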

Troubleshooting

No API key configured

Ensure at least one AI provider API key is set:
# Check current config
pensar doctor

# Set API key
export ANTHROPIC_API_KEY="your-key"

Rate limits

Pensar Apex automatically retries on rate limits. If you hit persistent rate limits, switch to a different model or provider, or pause before retrying.

vLLM connection errors

Verify the vLLM server is running:
curl http://localhost:8000/v1/models
If this fails, ensure vLLM is started and accessible.

Bedrock authentication errors

For IAM credentials, verify:
aws sts get-caller-identity
Ensure your IAM user/role has bedrock:InvokeModel permission.

Next Steps

Model Selection

Choose the right model for your testing needs

Environment Variables

Complete reference of all configuration options

vLLM Setup

Detailed guide to self-hosting models

Run First Pentest

Start testing with your configured provider
