provider key in cyberstrike.json.
Setting a default model
Set a global default model using the top-level `model` field. The value is a string in `provider/model` format:
cyberstrike.json
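A minimal sketch, using a model ID from the provider tables below (the specific choice is illustrative):

```json
{
  "model": "anthropic/claude-sonnet-4-5"
}
```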
Use `small_model` to select a lighter model for background tasks like title generation:
cyberstrike.json
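For example, pairing a capable default with a cheaper background model (both IDs are illustrative):

```json
{
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "anthropic/claude-haiku-3-5"
}
```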
Provider-specific options
Each entry under `provider` can configure credentials, base URLs, and request behavior:
cyberstrike.json
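A sketch of a provider entry using the options documented below (the exact nesting of per-provider blocks under `provider` is assumed from the surrounding text):

```json
{
  "provider": {
    "anthropic": {
      "apiKey": "{env:ANTHROPIC_API_KEY}",
      "timeout": 300000,
      "setCacheKey": true
    }
  }
}
```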
apiKey
The API key for authenticating with the provider. Use `{env:VAR}` to read from an environment variable rather than hardcoding a secret.

baseURL
Override the default API endpoint. Required for custom OpenAI-compatible providers or self-hosted models.

timeout
Request timeout in milliseconds. Default is `300000` (5 minutes). Set to `false` to disable the timeout entirely.

setCacheKey
When `true`, enables `promptCacheKey` for providers that support prompt caching (e.g., Anthropic). Default is `false`.

enterpriseUrl
GitHub Enterprise URL, used when authenticating via the Copilot provider.
Enabling and disabling providers
By default, CyberStrike loads all providers for which it can find credentials. Use the top-level `disabled_providers` and `enabled_providers` arrays to restrict this.
Disable specific providers:
cyberstrike.json
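For example, to disable two providers even if their credentials are present (the provider names are illustrative):

```json
{
  "disabled_providers": ["openai", "groq"]
}
```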
Or allow only specific providers, disabling all others:
cyberstrike.json
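A sketch that loads only the listed providers (the array values are illustrative):

```json
{
  "enabled_providers": ["anthropic", "ollama"]
}
```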
Model whitelist and blacklist
Within a provider block, filter which models are shown using `whitelist` or `blacklist` arrays:
cyberstrike.json
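A sketch showing only two OpenAI models (whether the arrays take full `provider/model` IDs or bare model names is assumed here; the IDs are taken from the tables below):

```json
{
  "provider": {
    "openai": {
      "whitelist": ["openai/gpt-4.1", "openai/o4-mini"]
    }
  }
}
```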
Adding a custom OpenAI-compatible provider
Run the interactive provider setup command to register a new provider; it writes a `provider` entry to your global config. To add one manually, set `baseURL` to any OpenAI-compatible endpoint:
cyberstrike.json
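A sketch of a manually added provider; the provider name, endpoint URL, and environment variable are all hypothetical placeholders:

```json
{
  "provider": {
    "my-local-llm": {
      "baseURL": "http://localhost:8000/v1",
      "apiKey": "{env:MY_LOCAL_LLM_KEY}"
    }
  }
}
```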
Supported providers
anthropic
Cloud API for Claude models. Requires `ANTHROPIC_API_KEY`.

| Model ID | Notes |
|---|---|
| anthropic/claude-sonnet-4-5 | Balanced performance and speed |
| anthropic/claude-opus-4 | Highest capability |
| anthropic/claude-haiku-3-5 | Fastest, lowest cost |
openai
OpenAI API. Requires `OPENAI_API_KEY`.

| Model ID | Notes |
|---|---|
| openai/gpt-4.1 | Latest GPT-4 generation |
| openai/o3 | Reasoning model |
| openai/o4-mini | Fast reasoning model |
google
Google Gemini API. Requires `GOOGLE_GENERATIVE_AI_API_KEY`.

| Model ID | Notes |
|---|---|
| google/gemini-2.5-pro | Long context, multimodal |
| google/gemini-2.5-flash | Fast and cost-efficient |
amazon-bedrock
AWS Bedrock. Uses IAM authentication, so no API key is required. Credentials are read from the standard AWS credential chain (`~/.aws/credentials`, instance profile, etc.).

azure
Azure-hosted OpenAI models. Requires an Azure OpenAI resource endpoint and API key.
groq
Groq inference API. Requires `GROQ_API_KEY`.

| Model ID | Notes |
|---|---|
| groq/llama-3.3-70b-versatile | Fast Llama inference |
| groq/mixtral-8x7b-32768 | Mixture-of-experts model |
mistral
Mistral AI API. Requires `MISTRAL_API_KEY`.

| Model ID | Notes |
|---|---|
| mistral/mistral-large-latest | Flagship model |
| mistral/codestral-latest | Code-optimized |
deepseek
DeepSeek API. Requires `DEEPSEEK_API_KEY`.

| Model ID | Notes |
|---|---|
| deepseek/deepseek-chat | DeepSeek V3 |
| deepseek/deepseek-reasoner | DeepSeek R1 reasoning model |
openrouter
Access 100+ models through a single API. Requires `OPENROUTER_API_KEY`.

togetherai
Together AI API for open-source models. Requires `TOGETHER_AI_API_KEY`.

ollama
Run models locally with Ollama. No API key required. Ollama accepts any GGUF-compatible model. Pull a model first:
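For example (the model name is illustrative; any model from the Ollama library works):

```shell
ollama pull llama3.3
```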
lmstudio
LM Studio local inference server. No API key required.

cerebras
Cerebras inference API. Requires `CEREBRAS_API_KEY`.

deepinfra
DeepInfra inference API. Requires `DEEPINFRA_API_KEY`.

perplexity
Perplexity API for search-augmented models. Requires `PERPLEXITY_API_KEY`.

xai
xAI (Grok) API. Requires `XAI_API_KEY`.

vercel
Vercel AI Gateway. Requires `VERCEL_API_KEY`.

Offline / air-gapped setup with Ollama
To use CyberStrike without any internet access, run Ollama locally and point CyberStrike at it.

Install and start Ollama
Follow the Ollama installation guide for your platform, then start the server:
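For example, to run the server in the foreground:

```shell
ollama serve
```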
Setting `enabled_providers` to `["ollama"]` suppresses warnings about missing API keys for cloud providers.
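A sketch of a fully offline config; the `ollama/llama3.3` model ID format and the pulled model name are assumptions for illustration:

```json
{
  "enabled_providers": ["ollama"],
  "model": "ollama/llama3.3"
}
```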