Interactive security testing with Claude Code slash commands and expert personas
RAPTOR integrates seamlessly with Claude Code, providing an interactive conversational interface for security testing. Use slash commands to trigger analyses and invoke expert personas for specialized guidance.
Invoke personas explicitly when you need specialized expertise:
- "Use exploit developer persona to create PoC for finding #42"
- "Use crash analyst persona to analyze this crash"
- "Use fuzzing strategist persona to recommend AFL parameters"
- "Use patch engineer persona to create production-ready patch"
Personas are NOT auto-loaded. They only consume tokens when you explicitly request them.
User: Use patch engineer persona to fix the SQLi in login.php

Claude: [Loads patch_engineer.md - 400 tokens]

Creating secure patch for login.php:42…
```diff
--- login.php (original)
+++ login.php (patched)
@@ -40,2 +40,4 @@
 function authenticate($username, $password) {
-    $query = "SELECT * FROM users WHERE name='$username'";
+    // Fixed: Use parameterized query to prevent SQL injection
+    $stmt = $pdo->prepare("SELECT * FROM users WHERE name=?");
+    $stmt->execute([$username]);
```
Test recommendations:
- Test with normal credentials
- Test with SQL injection payloads
- Verify no regression in login flow
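The test recommendations above can be sketched as a quick check. This is an illustrative Python/sqlite3 sketch (the patch itself targets PHP/PDO), showing why the parameterized form resists the injection payloads you would test with:

```python
import sqlite3

# Illustration only (not RAPTOR output): compare the vulnerable and patched
# query patterns against a classic injection payload, using sqlite3 in place
# of the PDO/MySQL stack in the patch above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

payload = "' OR '1'='1"

# Vulnerable pattern: the payload is spliced into the SQL and matches every row.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name='{payload}'"
).fetchall()

# Patched pattern: the payload is bound as data and matches nothing.
patched = conn.execute(
    "SELECT * FROM users WHERE name=?", (payload,)
).fetchall()

print(len(vulnerable), len(patched))  # 1 0
```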
### Security Researcher Persona

Validates exploitability using a 4-step framework:

<Steps>
  <Step title="Source Control Analysis">
    Is the data source attacker-controlled?
  </Step>
  <Step title="Sanitizer Analysis">
    Are sanitizers effective or bypassable?
  </Step>
  <Step title="Reachability Analysis">
    Can an attacker trigger this code path?
  </Step>
  <Step title="Impact Assessment">
    What's the worst-case scenario?
  </Step>
</Steps>

Use this persona to distinguish real vulnerabilities from false positives:
- "Use security researcher persona to validate finding #12"
- "Is this actually exploitable or a false positive?"
- "Security researcher: analyze this sanitizer bypass"
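The 4-step framework above amounts to a conjunction: a finding is only exploitable if every step holds. A minimal sketch of that logic (the class and function names here are illustrative, not part of RAPTOR's API):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    attacker_controlled_source: bool  # Step 1: Source Control Analysis
    sanitizer_bypassable: bool        # Step 2: Sanitizer Analysis
    path_reachable: bool              # Step 3: Reachability Analysis
    worst_case_impact: str            # Step 4: Impact Assessment

def validate(f: Finding) -> str:
    # Any failed step downgrades the finding to a false positive.
    if not f.attacker_controlled_source:
        return "false positive: source not attacker-controlled"
    if not f.sanitizer_bypassable:
        return "false positive: effective sanitizer in the path"
    if not f.path_reachable:
        return "false positive: code path unreachable"
    return f"exploitable, worst case: {f.worst_case_impact}"

print(validate(Finding(True, True, True, "full database read")))
# exploitable, worst case: full database read
```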
### Crash Analyst Persona

Analyzes binary crashes from fuzzing:
User: AFL++ found a crash with SIGSEGV at 0x4141414141.
Use crash analyst persona to assess exploitability.

Claude: [Loads crash_analyst.md - 700 tokens]

CRASH ANALYSIS

Signal: SIGSEGV (11)
Address: 0x4141414141 ('AAAA')

ASSESSMENT:
Crash Type: Buffer overflow - controlled write
Register State:
RIP: 0x4141414141 ✓ Fully controlled
RSP: Valid stack range
RAX: Contains user data
Exploit Primitives:
Arbitrary code execution: YES
Controlled jump: YES
Mitigations:
ASLR: Enabled (need info leak)
DEP: Enabled (need ROP chain)
Stack canary: Not present
EXPLOITABILITY: HIGH (Moderate complexity)

Strategy: ROP chain + info leak for ASLR bypass
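The key signal in the transcript is that the fault address is made of repeated input bytes (`0x41` = `'A'`), which suggests the crashing pointer is fully attacker-controlled. A rough triage heuristic along those lines (illustrative only, not RAPTOR's actual crash analyst logic):

```python
def triage(signal: str, fault_addr: int) -> str:
    # A fault address built from one repeated printable byte (e.g. 'AAAA')
    # is a strong hint that the pointer came straight from fuzzer input.
    addr_bytes = fault_addr.to_bytes(8, "little").rstrip(b"\x00")
    controlled = len(set(addr_bytes)) == 1 and addr_bytes[:1].isalpha()
    if signal == "SIGSEGV" and controlled:
        return "HIGH: fault address looks input-derived (controlled jump)"
    if signal == "SIGSEGV":
        return "UNKNOWN: segfault at uncontrolled address, needs debugging"
    return f"LOW: {signal} rarely indicates memory corruption on its own"

print(triage("SIGSEGV", 0x4141414141))
# HIGH: fault address looks input-derived (controlled jump)
```

A real assessment would also weigh register state and active mitigations (ASLR, DEP, canaries), as the transcript does.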
## Adversarial Thinking

RAPTOR prioritizes findings using an adversarial mindset:

### Prioritization Formula
Priority = (Impact × Exploitability) / Detection Time
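Applied to a few sample findings, the formula naturally ranks secrets first, since they need no exploitation (the field values and finding names below are illustrative, not RAPTOR's internal schema):

```python
def priority(impact: float, exploitability: float, detection_time: float) -> float:
    # Priority = (Impact × Exploitability) / Detection Time
    return (impact * exploitability) / detection_time

findings = [
    ("hardcoded API key", priority(impact=9, exploitability=10, detection_time=1)),
    ("SQL injection", priority(impact=9, exploitability=8, detection_time=2)),
    ("debug mode enabled", priority(impact=4, exploitability=3, detection_time=1)),
]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{score:6.1f}  {name}")
#  90.0  hardcoded API key
#  36.0  SQL injection
#  12.0  debug mode enabled
```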
### Default Priority Order

<AccordionGroup>
  <Accordion title="1. Secrets - Instant Compromise" icon="key">
    Hardcoded credentials, API keys, tokens - no exploitation needed, instant access
  </Accordion>
  <Accordion title="2. Input Validation - High Exploitability" icon="shield-xmark">
    SQLi, XSS, command injection - common and highly exploitable
  </Accordion>
  <Accordion title="3. Authentication - Critical Impact" icon="lock-open">
    Broken access control, authentication bypass - direct compromise
  </Accordion>
  <Accordion title="4. Cryptography - Data Protection" icon="fingerprint">
    Weak algorithms, hardcoded keys - affects data security
  </Accordion>
  <Accordion title="5. Configuration - Security Baseline" icon="gear">
    Debug mode, insecure defaults - environmental risks
  </Accordion>
</AccordionGroup>

<Info>
You can override the default priority order by telling Claude to use a different ranking based on your threat model.
</Info>

## Output Structure

All results are saved to the `out/` directory:
### Accessing Results

<Tabs>
  <Tab title="Claude Code">
    Claude automatically reads, analyzes, and presents results in a prioritized format
  </Tab>
  <Tab title="Python CLI">
    Read files directly from the `out/` directory or parse SARIF with your own tools
  </Tab>
</Tabs>

## Troubleshooting

### Placeholder Exploits (TODO Comments)

**Issue:** Python generated template code instead of working exploits

**Solution:** Use the Exploit Developer persona:

"Use exploit developer persona to create working exploit for finding #X"
### Template Patches

**Issue:** Patches are recommendations, not actual code

**Solution:** Use the Patch Engineer persona:

"Use patch engineer persona to create production-ready patch"
### No Findings Returned

**Common Causes:**

- Git not initialized (Semgrep needs `.git/` directory)
- Wrong policy groups specified
- Language not supported by scanners

**Solution:** Ask Claude for help:
User: Why didn't the scan find anything?

Claude: Let me check:
- Git status: [checks if .git exists]
- Languages detected: [checks file types]
- Policy groups: [reviews configuration]
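The first two checks above are easy to run yourself. A small sketch (not a RAPTOR command; the function name is illustrative) that verifies the repo is a git checkout and tallies which file types are present:

```python
from collections import Counter
from pathlib import Path

def diagnose(repo: str) -> dict:
    """Check the two most common causes of empty scan results."""
    root = Path(repo)
    # Semgrep needs a .git/ directory; count extensions to spot
    # languages the scanners may not support.
    exts = Counter(p.suffix for p in root.rglob("*") if p.is_file() and p.suffix)
    return {
        "git_initialized": (root / ".git").is_dir(),
        "top_extensions": exts.most_common(5),
    }
```

Policy-group configuration still has to be reviewed by hand (or by asking Claude), since it lives in the tool's own settings.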
### LLM API Errors

**Python handles automatic fallback:** Claude → GPT-4 → Ollama

**Check:**

```bash
# Verify API key is set
echo $ANTHROPIC_API_KEY

# Check connectivity
curl -I https://api.anthropic.com
```
Ensure you have sufficient API credits before running autonomous analysis on large codebases.
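The fallback behavior amounts to trying each provider in order until one succeeds. A minimal sketch of that pattern (the provider names come from the docs above; the callables here are stand-ins, not RAPTOR's real client code):

```python
def with_fallback(prompt, providers):
    """Try each (name, callable) in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as e:  # e.g. auth failure, network error, rate limit
            errors.append((name, e))
    raise RuntimeError(f"all providers failed: {errors}")

# Dummy stand-ins: the first provider fails, so the chain falls through.
def claude(p): raise ConnectionError("no API key")
def gpt4(p): return f"ok: {p}"
def ollama(p): return "local"

name, answer = with_fallback(
    "scan summary", [("claude", claude), ("gpt-4", gpt4), ("ollama", ollama)]
)
print(name, answer)  # gpt-4 ok: scan summary
```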
User: I found that checking API authentication endpoints first helps me find critical issues faster

Claude: That's a valuable approach!

User: /create-skill

Claude: What successful approach should we save?

User: Focus on API security - check auth endpoints first, then rate limiting, then data validation

Claude: [Extracts patterns]
        [Validates token budget: 380 tokens]
        Skill: api_security_auth_focus
        Triggers: API, authentication, auth bypass
        Create? [Y/n]

User: Y

Claude: ✓ Saved to: tiers/specialists/custom/api_security_auth_focus.md
        Note: Auto-loading not yet implemented (alpha feature)
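The token-budget validation step in the transcript can be approximated with the common ~4-characters-per-token heuristic. This is a rough sketch only: the ratio is an estimate, not RAPTOR's actual tokenizer, and the 500-token cap here is an assumed limit, not a documented one:

```python
def within_budget(skill_text: str, max_tokens: int = 500) -> bool:
    """Estimate token count (~4 chars/token) and compare to an assumed cap."""
    estimated_tokens = len(skill_text) // 4
    return estimated_tokens <= max_tokens
```

Keeping skills small matters because, as noted throughout, personas and skills only consume context tokens when loaded.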