CyberStrike supports 15+ LLM providers. Provider configuration lives under the provider key in cyberstrike.json.

Setting a default model

Set a global default model using the top-level model field. The value is a string in provider/model format:
cyberstrike.json
{
  "model": "anthropic/claude-sonnet-4-5"
}
Use small_model to select a lighter model for background tasks like title generation:
cyberstrike.json
{
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "anthropic/claude-haiku-3-5"
}

Provider-specific options

Each entry under provider can configure credentials, base URLs, and request behavior:
cyberstrike.json
{
  "provider": {
    "anthropic": {
      "options": {
        "apiKey": "{env:ANTHROPIC_API_KEY}",
        "baseURL": "https://api.anthropic.com",
        "timeout": 120000,
        "setCacheKey": false
      }
    }
  }
}
apiKey

The API key for authenticating with the provider. Use {env:VAR} to read from an environment variable rather than hardcoding a secret.
{ "apiKey": "{env:OPENAI_API_KEY}" }

baseURL

Override the default API endpoint. Required for custom OpenAI-compatible providers or self-hosted models.
{ "baseURL": "https://my-proxy.example.com/v1" }

timeout

Request timeout in milliseconds. Default is 300000 (5 minutes). Set to false to disable the timeout entirely.
{ "timeout": 120000 }
{ "timeout": false }

setCacheKey

When true, enables promptCacheKey for providers that support prompt caching (e.g., Anthropic). Default is false.
{ "setCacheKey": true }

enterpriseUrl

GitHub Enterprise URL, used when authenticating via the Copilot provider.
{ "enterpriseUrl": "https://github.mycompany.com" }
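Because {env:VAR} reads from the environment, the variable must be exported in the shell that launches CyberStrike. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Export the key before launching CyberStrike so {env:ANTHROPIC_API_KEY}
# resolves to a real value instead of an empty string.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
```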

Enabling and disabling providers

By default, CyberStrike loads all providers for which it can find credentials. Use the top-level disabled_providers and enabled_providers arrays to restrict this. Disable specific providers:
cyberstrike.json
{
  "disabled_providers": ["groq", "mistral"]
}
Allow only specific providers (all others are ignored):
cyberstrike.json
{
  "enabled_providers": ["anthropic", "openai"]
}
When enabled_providers is set, every provider not listed is ignored — even if credentials are available.

Model whitelist and blacklist

Within a provider block, filter which models are shown using whitelist or blacklist arrays:
cyberstrike.json
{
  "provider": {
    "openai": {
      "whitelist": ["gpt-4.1", "o4-mini"],
      "options": {
        "apiKey": "{env:OPENAI_API_KEY}"
      }
    }
  }
}
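blacklist works in the opposite direction, presumably hiding only the listed models while keeping the rest. A sketch (the model ID here is chosen for illustration):

cyberstrike.json
```json
{
  "provider": {
    "openai": {
      "blacklist": ["gpt-4.1-nano"],
      "options": {
        "apiKey": "{env:OPENAI_API_KEY}"
      }
    }
  }
}
```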

Adding a custom OpenAI-compatible provider

Run the interactive command to register a new provider:
cyberstrike provider add
This writes a new entry under provider in your global config. To add one manually, set baseURL to any OpenAI-compatible endpoint:
cyberstrike.json
{
  "provider": {
    "my-custom-provider": {
      "options": {
        "apiKey": "{env:MY_PROVIDER_KEY}",
        "baseURL": "https://api.my-provider.com/v1"
      }
    }
  }
}
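Models served by a custom provider are referenced with the same provider/model format as built-in ones. A sketch (the model ID here is hypothetical; use whatever IDs your endpoint serves):

cyberstrike.json
```json
{
  "model": "my-custom-provider/my-model",
  "provider": {
    "my-custom-provider": {
      "options": {
        "apiKey": "{env:MY_PROVIDER_KEY}",
        "baseURL": "https://api.my-provider.com/v1"
      }
    }
  }
}
```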

Supported providers

Anthropic

Cloud API for Claude models. Requires ANTHROPIC_API_KEY.

Model ID | Notes
anthropic/claude-sonnet-4-5 | Balanced performance and speed
anthropic/claude-opus-4 | Highest capability
anthropic/claude-haiku-3-5 | Fastest, lowest cost
{
  "provider": {
    "anthropic": {
      "options": { "apiKey": "{env:ANTHROPIC_API_KEY}" }
    }
  }
}

OpenAI

OpenAI API. Requires OPENAI_API_KEY.

Model ID | Notes
openai/gpt-4.1 | Latest GPT-4 generation
openai/o3 | Reasoning model
openai/o4-mini | Fast reasoning model
{
  "provider": {
    "openai": {
      "options": { "apiKey": "{env:OPENAI_API_KEY}" }
    }
  }
}

Google

Google Gemini API. Requires GOOGLE_GENERATIVE_AI_API_KEY.

Model ID | Notes
google/gemini-2.5-pro | Long context, multimodal
google/gemini-2.5-flash | Fast and cost-efficient
{
  "provider": {
    "google": {
      "options": { "apiKey": "{env:GOOGLE_GENERATIVE_AI_API_KEY}" }
    }
  }
}

Amazon Bedrock

AWS Bedrock. Uses IAM authentication; no API key is required. Credentials are read from the standard AWS credential chain (~/.aws/credentials, instance profile, etc.).
{
  "provider": {
    "amazon-bedrock": {
      "options": {
        "baseURL": "https://bedrock-runtime.us-east-1.amazonaws.com"
      }
    }
  }
}
Set AWS_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY in your environment, or configure an AWS profile.

Azure OpenAI

Azure-hosted OpenAI models. Requires an Azure OpenAI resource endpoint and API key.
{
  "provider": {
    "azure": {
      "options": {
        "apiKey": "{env:AZURE_OPENAI_API_KEY}",
        "baseURL": "https://my-resource.openai.azure.com/openai/deployments/my-deployment"
      }
    }
  }
}

Groq

Groq inference API. Requires GROQ_API_KEY.

Model ID | Notes
groq/llama-3.3-70b-versatile | Fast Llama inference
groq/mixtral-8x7b-32768 | Mixture-of-experts model
{
  "provider": {
    "groq": {
      "options": { "apiKey": "{env:GROQ_API_KEY}" }
    }
  }
}

Mistral

Mistral AI API. Requires MISTRAL_API_KEY.

Model ID | Notes
mistral/mistral-large-latest | Flagship model
mistral/codestral-latest | Code-optimized
{
  "provider": {
    "mistral": {
      "options": { "apiKey": "{env:MISTRAL_API_KEY}" }
    }
  }
}

DeepSeek

DeepSeek API. Requires DEEPSEEK_API_KEY.

Model ID | Notes
deepseek/deepseek-chat | DeepSeek V3
deepseek/deepseek-reasoner | DeepSeek R1 reasoning model
{
  "provider": {
    "deepseek": {
      "options": { "apiKey": "{env:DEEPSEEK_API_KEY}" }
    }
  }
}

OpenRouter

Access 100+ models through a single API. Requires OPENROUTER_API_KEY.
{
  "model": "openrouter/anthropic/claude-sonnet-4-5",
  "provider": {
    "openrouter": {
      "options": { "apiKey": "{env:OPENROUTER_API_KEY}" }
    }
  }
}

Together AI

Together AI API for open-source models. Requires TOGETHER_AI_API_KEY.
{
  "provider": {
    "togetherai": {
      "options": { "apiKey": "{env:TOGETHER_AI_API_KEY}" }
    }
  }
}

Ollama

Run models locally with Ollama. No API key required.
{
  "model": "ollama/llama3.2",
  "provider": {
    "ollama": {
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}
Ollama accepts any GGUF-compatible model. Pull a model first:
ollama pull llama3.2

LM Studio

LM Studio local inference server. No API key required.
{
  "provider": {
    "lmstudio": {
      "options": {
        "baseURL": "http://localhost:1234/v1"
      }
    }
  }
}

Cerebras

Cerebras inference API. Requires CEREBRAS_API_KEY.
{
  "provider": {
    "cerebras": {
      "options": { "apiKey": "{env:CEREBRAS_API_KEY}" }
    }
  }
}

DeepInfra

DeepInfra inference API. Requires DEEPINFRA_API_KEY.
{
  "provider": {
    "deepinfra": {
      "options": { "apiKey": "{env:DEEPINFRA_API_KEY}" }
    }
  }
}

Perplexity

Perplexity API for search-augmented models. Requires PERPLEXITY_API_KEY.
{
  "provider": {
    "perplexity": {
      "options": { "apiKey": "{env:PERPLEXITY_API_KEY}" }
    }
  }
}

xAI

xAI (Grok) API. Requires XAI_API_KEY.
{
  "provider": {
    "xai": {
      "options": { "apiKey": "{env:XAI_API_KEY}" }
    }
  }
}

Vercel AI Gateway

Vercel AI Gateway. Requires VERCEL_API_KEY.
{
  "provider": {
    "vercel": {
      "options": { "apiKey": "{env:VERCEL_API_KEY}" }
    }
  }
}

Offline / air-gapped setup with Ollama

To use CyberStrike without any internet access, run Ollama locally and point CyberStrike at it:
1. Install and start Ollama

Follow the Ollama installation guide for your platform, then start the server:
ollama serve
2. Pull a model

ollama pull llama3.2
3. Configure CyberStrike

cyberstrike.json
{
  "$schema": "https://cyberstrike.io/config.json",
  "model": "ollama/llama3.2",
  "enabled_providers": ["ollama"],
  "provider": {
    "ollama": {
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}
Setting enabled_providers to ["ollama"] suppresses warnings about missing API keys for cloud providers.
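Background tasks such as title generation use small_model, so in a fully offline setup that field should also point at the local provider. A sketch reusing the same local model for both roles:

cyberstrike.json
```json
{
  "$schema": "https://cyberstrike.io/config.json",
  "model": "ollama/llama3.2",
  "small_model": "ollama/llama3.2",
  "enabled_providers": ["ollama"],
  "provider": {
    "ollama": {
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}
```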
