Recommended Models
For best results, we recommend using one of these frontier models:

- OpenAI GPT-5 - `openai/gpt-5`
- Anthropic Claude Sonnet 4.6 - `anthropic/claude-sonnet-4-6`
- Google Gemini 3 Pro Preview - `vertex_ai/gemini-3-pro-preview`
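Selecting a model is a matter of setting `STRIX_LLM` and, for cloud providers, an API key. A minimal sketch (the key value is a placeholder):

```shell
# Choose a recommended frontier model and supply its API key
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="<your-api-key>"
```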
Strix Router
Strix Router provides a single API key for accessing multiple LLM providers, with intelligent routing and $10 free credit on signup:

- `strix/gpt-5`
- `strix/claude-sonnet-4-6`
- `strix/gemini-3-pro-preview`
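A minimal router setup, assuming you already have a Strix Router key (the value below is a placeholder):

```shell
# Route requests through Strix Router instead of a single provider
export STRIX_LLM="strix/gpt-5"
export LLM_API_KEY="<your-strix-router-key>"
```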
When using the `strix/` prefix, the API base URL is automatically set to https://models.strix.ai/api/v1. You don’t need to configure `LLM_API_BASE`.

Cloud Providers
OpenAI
- `openai/gpt-5`
- `openai/gpt-4o`
- `openai/o1`
- `openai/o3-mini`
Anthropic
- `anthropic/claude-sonnet-4-6`
- `anthropic/claude-opus-4`
- `anthropic/claude-3.5-sonnet`
Google Cloud (Vertex AI)
- `vertex_ai/gemini-3-pro-preview`
- `vertex_ai/gemini-2.0-flash-exp`
- `vertex_ai/gemini-1.5-pro`
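As a sketch, Google Cloud credentials can be prepared before launching Strix (the key-file path is a placeholder):

```shell
# One-time interactive login with the gcloud CLI:
#   gcloud auth application-default login
# Or point at a service-account key file instead:
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/vertex-sa.json"
export STRIX_LLM="vertex_ai/gemini-3-pro-preview"
```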
Vertex AI uses Google Cloud authentication. You need to authenticate via `gcloud auth application-default login` or set GOOGLE_APPLICATION_CREDENTIALS to your service account JSON file.

AWS Bedrock
- `bedrock/anthropic.claude-sonnet-4-6-v1:0`
- `bedrock/anthropic.claude-opus-4-v1:0`
- `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`
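As a sketch, Bedrock credentials can be supplied through environment variables (values are placeholders):

```shell
# Alternatively, run `aws configure` once to store these persistently
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
export AWS_REGION="us-east-1"
export STRIX_LLM="bedrock/anthropic.claude-sonnet-4-6-v1:0"
```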
AWS Bedrock uses AWS credentials. Configure them via the AWS CLI (`aws configure`) or environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`).

Azure OpenAI
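Azure OpenAI models are addressed by deployment name. A sketch using LiteLLM-style Azure variables (the variable names and deployment name are assumptions, not confirmed Strix settings):

```shell
export STRIX_LLM="azure/my-gpt-5-deployment"   # hypothetical deployment name
export AZURE_API_KEY="<your-azure-key>"
export AZURE_API_BASE="https://my-resource.openai.azure.com"
export AZURE_API_VERSION="2024-02-01"
```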
Local Models
Ollama
Ollama lets you run LLMs locally on your machine. You don’t need to set `LLM_API_KEY` when using Ollama. Make sure Ollama is running (`ollama serve`) before starting Strix.

- `ollama/llama3.1:70b`
- `ollama/qwen2.5:72b`
- `ollama/deepseek-v3`
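A local setup might look like this (the base URL shown is Ollama’s default and only needs exporting if yours differs):

```shell
# Start the server first, in another terminal:
#   ollama serve
export STRIX_LLM="ollama/llama3.1:70b"
export OLLAMA_API_BASE="http://localhost:11434"   # Ollama's default address
# No LLM_API_KEY is needed for Ollama.
```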
LM Studio
LM Studio provides a local server for running LLMs with an OpenAI-compatible API. When using LM Studio, the model name should match the model loaded in LM Studio, or use a generic name like `openai/local-model`.

Other Local Providers
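A generic setup for a self-hosted OpenAI-compatible server might look like this (the URL and model name are placeholders):

```shell
export LLM_API_BASE="http://localhost:1234/v1"   # your server's base URL
export STRIX_LLM="openai/local-model"            # generic OpenAI-style model name
```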
Strix works with any OpenAI-compatible API endpoint.

Advanced Configuration
Custom Timeouts
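A sketch of a timeout override; `LLM_TIMEOUT` is a hypothetical variable name, not a confirmed Strix setting, so check your version’s settings reference:

```shell
# Hypothetical variable name; value in seconds
export LLM_TIMEOUT="600"
```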
Adjust LLM request timeouts for slower models or connections.

Retry Configuration
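A sketch of a retry override; `LLM_NUM_RETRIES` is a hypothetical variable name, not a confirmed Strix setting:

```shell
# Hypothetical variable name; number of retry attempts for failed requests
export LLM_NUM_RETRIES="5"
```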
Control how many times Strix retries failed LLM requests.

Reasoning Effort
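Reasoning effort is controlled through `STRIX_REASONING_EFFORT`; the specific value names below are assumptions:

```shell
# "high" for more thorough answers, "low" for faster, cheaper ones
# (value names are assumptions; check your Strix version's docs)
export STRIX_REASONING_EFFORT="high"
```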
Control the reasoning effort level for better or faster responses.

Provider-Specific Notes
Using Multiple Providers
You can use different models for different purposes by switching `STRIX_LLM`:
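For example (the model choices here are illustrative):

```shell
# Thorough scan with a frontier model
export STRIX_LLM="anthropic/claude-sonnet-4-6"

# ...later, a quick local triage run with no API key required
export STRIX_LLM="ollama/llama3.1:70b"
```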
Authentication Priority
Strix checks for credentials in this order:

1. `LLM_API_KEY` environment variable
2. Provider-specific environment variables (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
3. Provider-specific authentication mechanisms (e.g., gcloud, AWS credentials)
Base URL Priority
Strix checks for base URLs in this order:

1. Automatic detection for `strix/` models
2. `LLM_API_BASE` environment variable
3. `OPENAI_API_BASE` environment variable
4. `LITELLM_BASE_URL` environment variable
5. `OLLAMA_API_BASE` environment variable
6. Provider default URLs
Troubleshooting
Connection Failed
If you see “LLM CONNECTION FAILED”, verify:

- Your API key is correct and has the necessary permissions
- Your model name is correct (e.g., `openai/gpt-5`, not `gpt-5`)
- Your API base URL is correct (for local models)
- Your network can reach the API endpoint
- Your model/deployment exists and is accessible
Model Not Found
Ensure you’re using the correct provider prefix:

- ✅ `openai/gpt-5`
- ❌ `gpt-5`
Rate Limiting
If you hit rate limits, you can:

- Reduce `STRIX_REASONING_EFFORT` to make fewer requests
- Use a different model with higher rate limits
- Increase your API plan limits
Local Model Performance
For best results with local models:

- Use models with at least 70B parameters
- Ensure you have sufficient RAM/VRAM
- Use GPU acceleration when possible
- Consider using `--scan-mode quick` for faster scans