
Overview

Provider endpoints let you list the Large Language Model (LLM) providers configured for PentAGI's AI-powered penetration testing.

Get All Providers

Retrieve a list of available LLM providers.
GET /api/v1/providers

Request Example

curl "https://your-server/api/v1/providers" \
  -H "Cookie: auth=your-session-cookie" \
  -H "Content-Type: application/json"

Response

data (array, required): Array of provider information objects
{
  "success": true,
  "data": [
    {
      "name": "default-openai",
      "type": "openai"
    },
    {
      "name": "default-anthropic",
      "type": "anthropic"
    },
    {
      "name": "my-custom-ollama",
      "type": "ollama"
    },
    {
      "name": "production-gemini",
      "type": "gemini"
    }
  ]
}
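As a quick illustration, the response above can be consumed client-side like this (a minimal sketch using only Python's standard library; the `providers_by_type` helper is hypothetical, not part of PentAGI):

```python
import json

# Sample response body from GET /api/v1/providers (copied from the example above)
RESPONSE_BODY = """
{
  "success": true,
  "data": [
    {"name": "default-openai", "type": "openai"},
    {"name": "default-anthropic", "type": "anthropic"},
    {"name": "my-custom-ollama", "type": "ollama"},
    {"name": "production-gemini", "type": "gemini"}
  ]
}
"""

def providers_by_type(body: str, provider_type: str) -> list:
    """Return the names of all configured providers of the given type."""
    payload = json.loads(body)
    if not payload.get("success"):
        raise RuntimeError("provider listing failed")
    return [p["name"] for p in payload["data"] if p["type"] == provider_type]

print(providers_by_type(RESPONSE_BODY, "ollama"))  # ['my-custom-ollama']
```

In a real client the body would come from the authenticated GET request shown in the curl example.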

Provider Object

name (string, required): Unique name of the provider configuration
type (string, required): Provider type, one of: openai, anthropic, gemini, bedrock, ollama, or custom
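A client can sanity-check provider objects against this schema before using them. The sketch below is a hypothetical client-side validator, not a PentAGI API; the allowed types come from the field description above:

```python
# Provider types accepted by the API, per the field description above
VALID_PROVIDER_TYPES = {"openai", "anthropic", "gemini", "bedrock", "ollama", "custom"}

def validate_provider(obj: dict) -> None:
    """Raise ValueError if a provider object is missing a required field
    or declares an unknown type. Illustrative client-side check only."""
    if not obj.get("name"):
        raise ValueError("provider 'name' is required")
    if obj.get("type") not in VALID_PROVIDER_TYPES:
        raise ValueError(f"unknown provider type: {obj.get('type')!r}")

validate_provider({"name": "default-openai", "type": "openai"})  # passes silently
```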

Provider Types

OpenAI

OpenAI API (GPT-4, GPT-4o, etc.)
{
  "name": "default-openai",
  "type": "openai"
}
Environment Configuration:
OPEN_AI_KEY=sk-...
OPEN_AI_SERVER_URL=https://api.openai.com/v1

Anthropic

Anthropic Claude models (Claude 3.5 Sonnet, etc.)
{
  "name": "default-anthropic",
  "type": "anthropic"
}
Environment Configuration:
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_SERVER_URL=https://api.anthropic.com/v1

Google Gemini

Google AI (Gemini Pro, etc.)
{
  "name": "default-gemini",
  "type": "gemini"
}
Environment Configuration:
GEMINI_API_KEY=...
GEMINI_SERVER_URL=https://generativelanguage.googleapis.com

AWS Bedrock

AWS Bedrock hosted models
{
  "name": "default-bedrock",
  "type": "bedrock"
}
Environment Configuration:
BEDROCK_REGION=us-east-1
BEDROCK_ACCESS_KEY_ID=...
BEDROCK_SECRET_ACCESS_KEY=...
BEDROCK_SESSION_TOKEN=...

Ollama

Local Ollama deployment
{
  "name": "local-ollama",
  "type": "ollama"
}
Environment Configuration:
OLLAMA_SERVER_URL=http://localhost:11434
OLLAMA_SERVER_MODEL=llama3.1:70b
OLLAMA_SERVER_PULL_MODELS_ENABLED=true

Custom

Custom OpenAI-compatible endpoints (DeepSeek, OpenRouter, etc.)
{
  "name": "custom-deepseek",
  "type": "custom"
}
Environment Configuration:
LLM_SERVER_URL=https://api.deepseek.com/v1
LLM_SERVER_KEY=...
LLM_SERVER_MODEL=deepseek-chat
LLM_SERVER_PROVIDER=deepseek
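The environment variables above can be gathered into a config object before wiring up a custom provider. This is a minimal sketch: the variable names follow the Environment Configuration above, but the returned dict shape is illustrative, not PentAGI's internal format:

```python
import os

def load_custom_provider_config(env=os.environ) -> dict:
    """Collect the 'custom' provider settings from the environment.
    Raises early if a required variable is unset, which surfaces
    misconfiguration before any request is made."""
    required = ["LLM_SERVER_URL", "LLM_SERVER_KEY", "LLM_SERVER_MODEL"]
    missing = [k for k in required if not env.get(k)]
    if missing:
        raise RuntimeError(f"custom provider not configured, missing: {missing}")
    return {
        "base_url": env["LLM_SERVER_URL"],
        "api_key": env["LLM_SERVER_KEY"],
        "model": env["LLM_SERVER_MODEL"],
        "provider": env.get("LLM_SERVER_PROVIDER", "custom"),
    }
```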

Using Providers

When creating flows or assistants, reference providers by name:

Via GraphQL

mutation {
  createFlow(
    modelProvider: "default-openai"
    input: "Perform security assessment"
  ) {
    id
    title
  }
}

Via REST (Flows)

curl -X POST "https://your-server/api/v1/flows" \
  -H "Cookie: auth=your-session-cookie" \
  -H "Content-Type: application/json" \
  -d '{
    "model_provider": "default-anthropic",
    "input": "Test web application security"
  }'
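The same request can be built programmatically. The sketch below mirrors the curl example with Python's standard `urllib`; the server URL and cookie value are placeholders for your deployment's values:

```python
import json
import urllib.request

def build_create_flow_request(server: str, cookie: str,
                              provider: str, task: str) -> urllib.request.Request:
    """Build the POST /api/v1/flows request shown above,
    referencing a provider by its configured name."""
    body = json.dumps({"model_provider": provider, "input": task}).encode()
    return urllib.request.Request(
        f"{server}/api/v1/flows",
        data=body,
        headers={
            "Cookie": f"auth={cookie}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_flow_request("https://your-server", "your-session-cookie",
                                "default-anthropic", "Test web application security")
# Send with urllib.request.urlopen(req) against a live server.
```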

Provider Configuration (GraphQL)

For advanced provider configuration including agent settings, use the GraphQL API:
query {
  settingsProviders {
    enabled {
      openai
      anthropic
      gemini
      bedrock
      ollama
      custom
    }
    default {
      openai {
        id
        name
        type
        agents {
          primaryAgent {
            model
            maxTokens
            temperature
          }
        }
      }
    }
  }
}
See the GraphQL API documentation for complete provider configuration options.

Permissions

To access provider endpoints, users must have:
  • providers.view - View available providers

Error Responses

403 Forbidden

{
  "error": "not_permitted",
  "message": "User does not have permission to view providers"
}

500 Internal Server Error

{
  "error": "internal_error",
  "message": "Failed to retrieve provider list"
}

Provider Selection Best Practices

Choose providers based on your specific requirements:
  • OpenAI: Best overall performance, supports reasoning models
  • Anthropic: Excellent for complex analysis, large context windows
  • Ollama: Privacy-focused local deployment, no external API costs
  • Custom: Flexibility to use any OpenAI-compatible endpoint
Provider availability depends on environment configuration. Ensure API keys and URLs are properly set.

Model Configuration

Each provider supports different models with varying capabilities:

OpenAI:
  • gpt-4o - Latest GPT-4 optimized model
  • gpt-4-turbo - Fast GPT-4 variant
  • gpt-4 - Standard GPT-4
  • o1-preview - Reasoning model

Anthropic:
  • claude-3-5-sonnet-20241022 - Latest Claude 3.5
  • claude-3-opus-20240229 - Most capable
  • claude-3-sonnet-20240229 - Balanced performance

Google Gemini:
  • gemini-2.0-flash-exp - Latest experimental
  • gemini-1.5-pro - Pro version
  • gemini-1.5-flash - Fast variant

Ollama (any locally available model):
  • llama3.1:70b
  • qwen2.5:72b
  • mistral-large:123b
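The groupings above can be captured in a small lookup table for client-side checks. The table below simply mirrors the example lists in this section; it is illustrative, not an exhaustive catalog of supported models:

```python
# Example models per provider type, copied from the lists above (illustrative only)
EXAMPLE_MODELS = {
    "openai": ["gpt-4o", "gpt-4-turbo", "gpt-4", "o1-preview"],
    "anthropic": ["claude-3-5-sonnet-20241022", "claude-3-opus-20240229",
                  "claude-3-sonnet-20240229"],
    "gemini": ["gemini-2.0-flash-exp", "gemini-1.5-pro", "gemini-1.5-flash"],
    "ollama": ["llama3.1:70b", "qwen2.5:72b", "mistral-large:123b"],
}

def is_known_model(provider_type: str, model: str) -> bool:
    """True if the model appears in the example table for that provider type."""
    return model in EXAMPLE_MODELS.get(provider_type, [])
```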

Related Pages

  • GraphQL Providers - Advanced provider configuration via GraphQL
  • Environment Setup - Configure provider API keys and settings
