General
What is Strix?
Strix is an open-source, AI-powered penetration testing agent that:
- Executes real attacks to validate findings
- Uses LLM-powered agents that think like security researchers
- Provides comprehensive reports with reproduction steps
- Can automatically generate fixes for discovered vulnerabilities
How is Strix different from traditional security scanners?
Traditional scanners match known signatures; Strix agents instead:
- Think contextually - Understand your application’s logic and architecture
- Validate findings - Prove vulnerabilities exist with working exploits
- Adapt dynamically - Learn from responses and adjust testing strategies
- Test comprehensively - Cover business logic, not just known vulnerability patterns
What types of vulnerabilities can Strix detect?
- Access Control - IDOR, privilege escalation, authorization bypass
- Injection - SQL, NoSQL, command injection, XSS
- Server-Side - SSRF, XXE, deserialization, path traversal
- Authentication - JWT vulnerabilities, session management flaws
- Business Logic - Race conditions, workflow manipulation
- Infrastructure - Misconfigurations, exposed services, subdomain takeover
Is Strix free to use?
Yes. Strix is open source, which means you can:
- Use it freely for personal and commercial projects
- Modify and customize the source code
- Contribute improvements back to the community
Installation & Setup
What are the system requirements?
Minimum:
- Python 3.12 or higher
- Docker Desktop (running)
- 4 GB RAM available
- 10 GB free disk space
- LLM API key (OpenAI, Anthropic, Google, etc.)
Recommended:
- Python 3.13+
- 8 GB RAM or more
- SSD storage for better performance
- Stable internet connection for LLM API calls
How do I install Strix?
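A typical installation, assuming the CLI is published on PyPI as `strix-agent` (check the project README for the exact package name):

```shell
# Install in an isolated environment with pipx
pipx install strix-agent

# Confirm the CLI works and the Docker daemon is reachable
strix --help
docker info
```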
Which LLM providers are supported?
- OpenAI - GPT-5, GPT-4o
- Anthropic - Claude Sonnet 4.6, Claude Opus 4
- Google - Gemini 3 Pro, Gemini 2.0 Flash
- Vertex AI - Google Cloud models
- Azure OpenAI - Azure-hosted models
- AWS Bedrock - Amazon-hosted models
- Local Models - Ollama, LM Studio
- Strix Router - Single API for multiple providers with $10 free credit
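Providers are typically selected via environment variables; a sketch, assuming `STRIX_LLM` picks the model and `LLM_API_KEY` holds the provider key (the model name is illustrative; see the LLM Providers docs for exact values):

```shell
# Select a provider/model and supply its API key
export STRIX_LLM="openai/gpt-4o"
export LLM_API_KEY="sk-your-key"
```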
Do I need to install Docker?
Yes. Strix requires Docker because isolated containers provide:
- Safety - Prevents malicious code from affecting your system
- Reproducibility - Consistent testing environment
- Tool availability - Includes all necessary security testing tools
Usage
What targets can Strix test?
- Local codebases - `strix --target ./app-directory`
- GitHub repositories - `strix --target https://github.com/org/repo`
- Live web applications - `strix --target https://your-app.com`
- APIs - `strix --target https://api.your-app.com`
- Multiple targets - `strix -t ./code -t https://app.com`
How do I run authenticated testing?
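One common approach is to supply test credentials as part of the testing instructions; a sketch (the account and flag usage are illustrative, not a documented recipe):

```shell
# Provide a test account so the agent can exercise authenticated endpoints
strix --target https://your-app.com \
  --instruction "Log in as the test user qa@example.com with the staging password, then test authenticated endpoints"
```

Never pass production credentials; use dedicated test accounts in a staging environment.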
How long does a typical scan take?
- Target complexity - Simple apps: 10-30 minutes, Complex apps: 1-3 hours
- Scan mode - Quick (~10 min), Standard (~30 min), Deep (1-3 hours)
- Target size - Number of endpoints, pages, and features
- LLM speed - Faster models complete quicker
Use `--scan-mode quick` for faster results.
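A quick scan can be launched like this (the target URL is illustrative):

```shell
strix --target https://staging.your-app.com --scan-mode quick
```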
Can I integrate Strix into CI/CD?
Use the `-n`/`--non-interactive` flag for headless mode. The scan exits with a non-zero code when vulnerabilities are found. See CI/CD Integration for more examples.
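A minimal CI step might look like this (a sketch; adapt the target and scan mode to your pipeline):

```shell
# Fail the build if Strix reports vulnerabilities (non-zero exit code)
strix --target . -n --scan-mode quick
```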
Where are scan results saved?
Results are saved in `strix_runs/<run-name>/`:
- `findings.json` - Machine-readable vulnerability data
- `report.html` - Human-readable HTML report
- `report.md` - Markdown report
- `logs/` - Detailed execution logs
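Because `findings.json` is machine-readable, it can feed other tooling; for example, with `jq` (the `title` field is an assumption about the schema, not the documented format):

```shell
jq -r '.[].title' strix_runs/<run-name>/findings.json
```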
How do I customize what Strix tests?
Use the `--instruction` flag to guide testing.
Cost & Performance
How much do LLM API calls cost?
Typical API cost per scan:
- Quick scan - ~$2.00
- Standard scan - ~$8.00
- Deep scan - ~$20.00
Actual cost depends on:
- Model choice (GPT-5 costs more than GPT-4o)
- Target complexity (more endpoints = more calls)
- Scan mode (deep scans use more tokens)
Can I use local models to reduce costs?
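Yes. Local models eliminate per-token API costs, though smaller models generally find fewer vulnerabilities. A sketch using Ollama (the model name and `STRIX_LLM` value are illustrative; see the LLM Providers docs for exact configuration):

```shell
# Pull and serve a local model, then point Strix at it
ollama pull llama3.1
export STRIX_LLM="ollama/llama3.1"
strix --target ./app-directory
```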
How can I optimize performance?
- Use faster models - GPT-5 and Claude Sonnet 4.6 are optimized
- Choose appropriate scan mode - Use `quick` for CI/CD
- Narrow scope - Provide specific testing instructions
- Use prompt caching - Enabled by default for supported models
- Adequate resources - Ensure sufficient RAM and CPU
Security & Ethics
Is it legal to use Strix?
Yes, when used only on authorized targets:
- Your own applications and infrastructure
- Client applications with written authorization
- Bug bounty programs (following their rules)
- Authorized penetration testing engagements
Never use Strix for:
- Testing third-party applications without permission
- Scanning public websites without authorization
- Any unauthorized access attempts
What data does Strix send to LLM providers?
During a scan, Strix may send to your LLM provider:
- Code snippets and file contents (when testing codebases)
- HTTP requests and responses (when testing web apps)
- Error messages and application output
- Testing instructions and findings
It never sends:
- Your LLM API keys (stored locally only)
- Scan configuration files
- Complete databases or file systems
For sensitive environments, consider:
- Local models (Ollama, LM Studio)
- Azure OpenAI with private endpoints
- AWS Bedrock with your own infrastructure
Does Strix store my data?
All data stays on your machine:
- Scan results - Saved in the `strix_runs/` directory
- Configuration - Saved in `~/.strix/cli-config.json`
- Docker containers - Cleaned up automatically after scans
Can Strix damage my application?
Strix is designed to minimize risk:
- Tests run in isolated Docker containers
- Read-only by default for code analysis
- Avoids destructive operations
Remaining risks on live systems include:
- Resource exhaustion from intensive testing
- Triggering rate limits or abuse detection
- Inadvertent data modification in live systems
Recommendations:
- Test in staging/development environments first
- Use read-only database replicas when possible
- Monitor resource usage during scans
- Review findings before applying autofixes
Troubleshooting
Why won't Docker start?
- Docker not installed - Download from docker.com
- Docker not running - Start Docker Desktop
- Insufficient permissions - Run Docker Desktop as administrator
- Port conflicts - Close applications using ports 48080-48081
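The first two causes can be checked quickly from a terminal:

```shell
docker --version                 # Is Docker installed?
docker info > /dev/null 2>&1 && echo "daemon running" || echo "daemon not reachable"
```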
What if I get LLM connection errors?
- Invalid API key - Verify `LLM_API_KEY` is correct
- Wrong model name - Check supported models in LLM Providers
- Rate limiting - Wait and retry, or upgrade your API plan
- Network issues - Check internet connectivity
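A quick sanity check before retrying:

```shell
# Confirm the key is present without printing its value
if [ -n "$LLM_API_KEY" ]; then echo "LLM_API_KEY is set"; else echo "LLM_API_KEY is NOT set"; fi
```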
How do I report a bug?
When filing a GitHub issue:
- Search existing issues first
- Include system information (OS, Python version, Strix version)
- Provide full error traceback
- List steps to reproduce
- Describe expected vs actual behavior
Still Have Questions?
Join our community:
- Discord - discord.gg/strix-ai
- GitHub Discussions - github.com/usestrix/strix/discussions
- Documentation - docs.strix.ai