Installation & Setup
How do I install RAPTOR?
What are the system requirements?
Minimum:
- Python 3.12+
- 4GB RAM
- 2GB disk space
- Claude Code or compatible editor

Recommended:
- 8GB RAM
- 10GB disk space
- Docker installed
- Linux x86_64 for full features (rr debugger)

Platform support:
- Linux (x86_64) - Full support
- macOS (ARM64/Intel) - Partial support (no rr)
- Windows (WSL2) - Partial support (no rr)
Why does RAPTOR auto-download tools without asking?
RAPTOR auto-downloads when:
- A command requires a tool that isn't installed
- RAPTOR detects missing dependencies

To avoid automatic downloads:
- Use the devcontainer (all tools pre-installed)
- Pre-install tools manually (see Dependencies)
- Review DEPENDENCIES.md before first use
Do I need to install Semgrep and CodeQL manually?
No. RAPTOR auto-downloads missing tools on first use (see above), and the devcontainer ships with both pre-installed.
Can I use RAPTOR without Claude Code?
Yes, with a compatible editor or agent (see Tool Compatibility below).
LLM Provider Setup
Which LLM providers does RAPTOR support?
- Anthropic Claude (recommended)
- OpenAI GPT-4
- Google Gemini
- Ollama (local models)
Can I use local models with Ollama?
Yes, but exploit quality suffers:
| Provider | Exploit Quality | Cost |
|---|---|---|
| Claude/GPT-4 | ✅ Compilable | ~$0.03/vuln |
| Ollama | ❌ Often broken | FREE |
How do I use a remote Ollama server?
Set OLLAMA_HOST to point to your remote server:
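For example, assuming a remote Ollama server at 192.168.1.50 (the address is illustrative; 11434 is Ollama's default port):

```shell
# Point the Ollama client at a remote server instead of localhost
export OLLAMA_HOST=http://192.168.1.50:11434
echo "Ollama endpoint: $OLLAMA_HOST"
```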
How do I configure multiple LLM providers with fallback?
What if I hit API rate limits?
- RAPTOR detects rate limits automatically
- Provides provider-specific guidance:
  - Anthropic: Wait time, check usage dashboard
  - OpenAI: Retry with exponential backoff
  - Others: Generic retry advice
- Suggests fallback providers if configured
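The retry-with-exponential-backoff pattern mentioned above can be sketched in Python; `call_llm` is a hypothetical stand-in for any provider call, and `RuntimeError` stands in for a provider's rate-limit exception:

```python
import random
import time

def call_with_backoff(call_llm, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call_llm()
        except RuntimeError:  # stand-in for a provider's RateLimitError
            if attempt == max_retries - 1:
                raise
            # Delays grow 1s, 2s, 4s, ... with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Toy usage: a call that fails twice, then succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # -> ok
```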
Cost Management
How much does RAPTOR cost to run?
- Quick scan (no exploits): ~$0.10-0.30
- Full agentic (with exploits): ~$0.50-2.00
- Per vulnerability exploit: ~$0.03
- Ollama (local inference): FREE
- Scan-only mode (no LLM): FREE
How do I set a budget limit?
How do I reduce costs?
- Use scan-only mode (no LLM, free):
- Skip exploit generation:
- Use local models:
- Analyze only high-priority findings: Use adversarial thinking to focus on secrets and high-impact vulnerabilities first.
- Set budget limits (see above)
Does RAPTOR track costs in real-time?
Exploit Generation Quality
Why are my generated exploits not compiling?
Common causes:
- Local models: Ollama models often produce non-compilable code. Use Claude or GPT-4 for exploits.
- Missing context: Ensure RAPTOR has full codebase context (not just snippets).
- Complex vulnerabilities: Some exploits require manual refinement.

Fixes:
- Use frontier models (Claude Opus, GPT-4)
- Enable exploit feasibility analysis:
- Review and manually fix generated code
How accurate is exploit feasibility analysis?
What it verifies:
- Empirical %n verification (tests actual glibc)
- Null byte constraints from strcpy
- ROP gadget quality (counts usable gadgets)
- Input handler bad bytes
- Full RELRO blocks .fini_array

What it cannot replace:
- Manual exploitation expertise
- Runtime behavior analysis
- Complex heap exploitation

Verdicts:
- Likely exploitable: Good primitives, high confidence
- Difficult: Primitives exist but hard to chain
- Unlikely: No known path, suggest environment changes
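One check from the list above, RELRO status, can be reproduced by hand with binutils (a sketch; `/bin/ls` is just an example target):

```shell
# Full RELRO = a GNU_RELRO segment plus the BIND_NOW flag; it maps the GOT
# and .fini_array read-only after loading, which blocks .fini_array overwrites.
bin=/bin/ls
if command -v readelf >/dev/null 2>&1; then
  relro=$(readelf -lW "$bin" | grep -c GNU_RELRO || true)
  bindnow=$(readelf -d "$bin" | grep -c NOW || true)
  result="relro_segments=$relro bind_now_entries=$bindnow"
else
  result="readelf not available"
fi
echo "$result"
```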
What should I do if exploit generation says 'Unlikely'?
- Try suggested mitigations (older environment)
- Focus on other vulnerabilities
- Use as information leak only
- Move on to other targets
Can RAPTOR generate ROP chains?
It can:
- Count usable ROP gadgets
- Check for bad bytes in gadgets
- Verify gadget availability
- Suggest ROP as a technique

It cannot:
- Auto-generate full ROP chains (too complex)
- Guarantee chain success
Tool Compatibility
Does RAPTOR work with GitHub Copilot / Cursor / Windsurf?
- GitHub Copilot
- Cursor
- Windsurf
- Cline
- Devin
Can I use RAPTOR in CI/CD pipelines?
Does CodeQL require a license for commercial use?
CodeQL is free for open-source projects and academic research; commercial use on proprietary code requires a GitHub Advanced Security license. Alternatives:
- Use Semgrep only (LGPL 2.1)
- Contact GitHub for CodeQL commercial license
- Use other SAST tools
Can I use RAPTOR offline / air-gapped?
Works offline:
- Static analysis (Semgrep, CodeQL)
- Binary fuzzing (AFL++)
- Local Ollama models

Requires internet:
- Cloud LLM providers (Claude, GPT-4)
- Tool downloads (if not pre-installed)
- OSS forensics (GitHub API, BigQuery)

Air-gapped setup:
- Use devcontainer (all tools pre-installed)
- Install Ollama with local models
- Pre-download all dependencies
- Run in air-gapped environment
Is RAPTOR compatible with Windows?
Yes, via WSL2 (partial support), with limitations:
- No rr debugger (Linux x86_64 only)
- Some performance overhead
- Docker Desktop required for devcontainer
Troubleshooting
RAPTOR says 'Semgrep not found'
Semgrep is not installed or not on PATH. Pre-install it, or use the devcontainer, which ships with all tools.
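A quick diagnostic, assuming Semgrep is normally installed via pip:

```shell
# Check whether semgrep is on PATH; if not, print the usual install command
if command -v semgrep >/dev/null 2>&1; then
  semgrep_status="found: $(semgrep --version)"
else
  semgrep_status="missing - install with: python3 -m pip install semgrep"
fi
echo "$semgrep_status"
```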
CodeQL database creation fails
- CodeQL not in PATH:
- Language not detected:
- Build required (C/C++): Ensure project can compile:
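For the build-required case, database creation for a compiled language typically looks like this (the database name and build command are examples; `--language` and `--command` are standard CodeQL CLI flags):

```shell
# 'codeql database create' needs --language and, for C/C++, a --command that
# performs a full clean build. Guarded so the sketch degrades gracefully
# when the CodeQL CLI is absent.
if command -v codeql >/dev/null 2>&1; then
  status=$(codeql database create my-db --language=cpp --command="make clean all" \
             && echo "database created")
else
  status="codeql not on PATH - add the CodeQL CLI directory to PATH"
fi
echo "$status"
```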
LLM returns JSON parsing errors
Causes:
- Model output truncated
- Network timeout
- Ollama slow response

Fixes:
- Retry: RAPTOR auto-retries with exponential backoff
- Increase timeout (Ollama):
- Use frontier models: Claude/GPT-4 have more reliable JSON output than local models.
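When output is wrapped in prose or code fences, the usual mitigation is to extract the outermost JSON object before parsing. A minimal sketch (RAPTOR's actual parsing may differ, and this naive brace-matching ignores braces inside string values):

```python
import json
import re

def extract_json(text):
    """Pull the first {...} block out of LLM output and parse it.

    Strips markdown code fences and surrounding prose; raises ValueError
    if no complete object is found (e.g. truncated output).
    """
    text = re.sub(r"```(?:json)?", "", text)  # drop code fences
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object in output")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matching close brace found
                return json.loads(text[start:i + 1])
    raise ValueError("JSON object truncated")

print(extract_json('Here you go:\n```json\n{"severity": "high"}\n```'))
# -> {'severity': 'high'}
```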
Fuzzing crashes immediately
- Check system config:
- Disable CPU binding (testing):
- Verify binary instrumented:
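Typical pre-flight checks for the steps above, assuming AFL++ on Linux (the target path is an example):

```shell
# 1. core_pattern should be 'core', not a '|...' pipe handler, or AFL++ aborts
core_pattern=$(cat /proc/sys/kernel/core_pattern 2>/dev/null || echo unknown)
# 2. AFL_NO_AFFINITY=1 skips CPU core binding while testing
export AFL_NO_AFFINITY=1
# 3. An instrumented binary contains __afl_* symbols
instrumented=$(nm ./target 2>/dev/null | grep -c '__afl' || true)
echo "core_pattern=$core_pattern instrumented_symbols=$instrumented"
```

AFL++'s bundled `afl-system-config` script applies the recommended host settings in one step.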
rr debugger not working
rr needs access to hardware performance counters. On the host, set kernel.perf_event_paranoid to 1 or lower; when running inside Docker, start the container with the --privileged flag.

Usage & Workflows
What's the difference between /scan and /agentic?
| Feature | /scan | /agentic |
|---|---|---|
| Static analysis | ✅ Yes | ✅ Yes |
| LLM analysis | ✅ Yes | ✅ Yes |
| Exploit generation | ❌ No | ✅ Yes (optional) |
| Patch generation | ❌ No | ✅ Yes (optional) |
| Exploitability validation | ❌ No | ✅ Yes (automatic) |
| Cost | ~$0.10-0.30 | ~$0.50-2.00 |
| Speed | Faster | Slower |
How do I analyze only specific vulnerability types?
Can RAPTOR fix vulnerabilities automatically?
Yes. /agentic includes optional patch generation (see the comparison table above); review generated patches before applying them.
How do I use RAPTOR for crash analysis?
Use the /crash-analysis command, which:
- Clones repository
- Builds target with ASAN
- Records crash with rr
- Analyzes with function tracing
- Generates root-cause report
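The first steps of that pipeline can be approximated manually (repository URL and crash input are placeholders; rr requires Linux x86_64 with perf counters enabled):

```shell
git clone https://github.com/example/target.git && cd target
# Build with AddressSanitizer so the crash reports the faulting access
CC=clang CFLAGS="-fsanitize=address -g" make
# Record the crashing run deterministically, then replay it in a debugger
rr record ./target crash_input
rr replay
```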
What is OSS forensics and when should I use it?
/oss-forensics performs evidence-backed forensic investigation of public GitHub repositories.

Use cases:
- Supply chain investigation
- Malicious commit detection
- Deleted content recovery
- Timeline reconstruction
- Attribution analysis
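Deleted-content recovery and timeline reconstruction rest on plain git history; a self-contained toy demonstration:

```shell
# Build a throwaway repo in which a file is added and then deleted
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email forensics@example.com && git config user.name demo
echo "suspicious payload" > dropped.txt
git add dropped.txt && git commit -qm "add file"
git rm -q dropped.txt && git commit -qm "remove file"

# Find the deleting commit, then recover the file from its parent
del_commit=$(git log --diff-filter=D --format=%H -- dropped.txt)
recovered=$(git show "${del_commit}^:dropped.txt")
echo "$recovered"
```

For commits made unreachable by force pushes, real investigations also pull SHAs from the GitHub Events API, which retains push events after history rewrites.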