
Installation methods

RAPTOR offers two installation approaches:
  1. Manual installation - Install on your own system
  2. Devcontainer - Pre-configured Docker environment with all tools
Important: Unless you use the devcontainer, RAPTOR will automatically install tools without asking. Review DEPENDENCIES.md first to check licenses.

Manual installation

1

Install Claude Code

Download Claude Code from https://claude.ai/download
This is the recommended interface for interactive security research.
2

Clone RAPTOR repository

git clone https://github.com/gadievron/raptor.git
cd raptor
3

Install Python dependencies

pip install -r requirements.txt
Required packages:
  • requests (Apache 2.0)
  • anthropic (MIT)
  • tabulate (MIT)
  • Additional packages listed in requirements.txt
4

Install Semgrep

pip install semgrep
License: LGPL 2.1 - Review license terms before use
5

Configure environment variables

Set up your LLM provider:
# For Anthropic Claude (recommended)
export ANTHROPIC_API_KEY=your-key-here

# For OpenAI
export OPENAI_API_KEY=your-key-here

# For local Ollama (free)
export OLLAMA_HOST=http://localhost:11434

# Optional: LiteLLM configuration
export LITELLM_CONFIG_PATH=/path/to/litellm_config.yaml
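As a rough sketch of how these variables interact, a tool might select a provider from the environment like this. This is not RAPTOR's actual selection logic; the precedence order shown is an assumption for illustration.

```python
# Hypothetical sketch: choose an LLM provider from the environment
# variables documented above. Precedence order is an assumption,
# not RAPTOR's documented behavior.
def pick_provider(env: dict) -> str:
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    # Fall back to Ollama, which defaults to http://localhost:11434
    return "ollama"

print(pick_provider({"ANTHROPIC_API_KEY": "sk-..."}))  # anthropic
print(pick_provider({}))                               # ollama
```

Setting more than one key is harmless; whichever provider the tool prefers wins.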
6

Open in Claude Code

claude
Then say “hi” to get started!

Devcontainer installation

A devcontainer with all prerequisites pre-installed is available for easy onboarding.
1

Clone the repository

git clone https://github.com/gadievron/raptor.git
cd raptor
2

Open in VS Code

Use the command Dev Container: Open Folder in Container in VS Code or any of its forks (Cursor, Windsurf, etc.)
3

Or build with Docker

docker build -f .devcontainer/Dockerfile -t raptor-devcontainer:latest .
The devcontainer image is large (~6GB): it starts from the Microsoft Python 3.12 devcontainer base image and adds static analysis, fuzzing, and browser automation tools.

Devcontainer contents

Pre-installed security tools:
  • Semgrep (static analysis)
  • CodeQL CLI v2.15.5 (semantic code analysis)
  • AFL++ (fuzzing)
  • rr debugger (deterministic record-replay debugging)
Build & debugging tools:
  • gcc, g++, clang-format, make, cmake, autotools
  • gdb, gdb-multiarch, binutils
Web testing (alpha):
  • Playwright browser automation (Chromium, Firefox, and WebKit browsers)
  • All Playwright browsers pre-downloaded
Runtime notes:
  • Runs with --privileged flag (required for rr debugger)
  • PYTHONPATH configured for /workspaces/raptor imports
  • OSS forensics requires GOOGLE_APPLICATION_CREDENTIALS for BigQuery

Dependencies

Required tools

Semgrep

Install:
pip install semgrep
License: LGPL 2.1
Source: https://github.com/semgrep/semgrep
Usage: RAPTOR calls the semgrep command-line tool
User installs separately, not bundled with RAPTOR

Python packages

Install:
pip install -r requirements.txt
Includes:
  • requests (Apache 2.0)
  • anthropic (MIT)
  • tabulate (MIT)
  • Additional packages for LLM integration, analysis, and reporting
Managed by pip, not bundled with RAPTOR

Optional tools

Install these tools when you need specific capabilities:

AFL++

Install:
# macOS
brew install afl++

# Ubuntu/Debian
sudo apt install afl++
License: Apache 2.0
Source: https://github.com/AFLplusplus/AFLplusplus
Usage: RAPTOR calls the afl-fuzz command when using /fuzz
Required for binary fuzzing workflows

CodeQL CLI

Install: Download from https://github.com/github/codeql-cli-binaries
License: GitHub CodeQL Terms (free for security research, no commercial use)
Source: https://github.com/github/codeql
Usage: RAPTOR calls the codeql command for deep analysis
Important: CodeQL does not allow commercial use. Review the license terms carefully.

Ollama

Install locally: Download from https://ollama.ai
Configure remote:
# Remote Ollama server
export OLLAMA_HOST=https://ollama.example.com:11434

# Remote with custom port
export OLLAMA_HOST=http://192.168.1.100:8080
Default: http://localhost:11434
License: MIT
Source: https://github.com/ollama/ollama
Usage: RAPTOR connects to an Ollama server for local model inference
Supports both local and remote Ollama servers. Remote servers automatically use longer retry delays (5 seconds vs 2 seconds for local) to account for network latency.

rr debugger

Install:
# Linux only
sudo apt install rr
Or build from https://github.com/rr-debugger/rr
License: MIT
Source: https://github.com/rr-debugger/rr
Usage: RAPTOR uses rr for deterministic debugging in the /crash-analysis command
Linux only (x86_64 architecture)

gcov

Install: Bundled with gcc (no separate install needed)
License: GPL (part of GCC)
Source: https://gcc.gnu.org/onlinedocs/gcc/Gcov.html
Usage: RAPTOR uses gcov for code coverage analysis in the /crash-analysis command
Automatically available with gcc installation

AddressSanitizer (ASAN)

Install: Built into gcc >= 4.8 and clang >= 3.1
Compile flag: -fsanitize=address
License: Apache 2.0
Source: https://github.com/google/sanitizers
Usage: RAPTOR detects ASAN builds for enhanced crash diagnostics
Compile-time instrumentation, enabled via a compiler flag

Google BigQuery

Setup: Requires the GOOGLE_APPLICATION_CREDENTIALS environment variable
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
License: Google Cloud Terms of Service
Source: https://cloud.google.com/bigquery
Usage: RAPTOR uses BigQuery for GitHub Archive queries in the /oss-forensics command
Features: Query immutable GitHub event data for forensic investigations
Optional - required only for the /oss-forensics command
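Before running /oss-forensics, you can sanity-check the credential setup with a few lines of Python. A minimal sketch; the variable name comes from this page, the key-file path is yours:

```python
import os

# Minimal sketch: confirm BigQuery credentials are configured, i.e. the
# environment variable is set and points at an existing key file.
def bigquery_ready(env: dict) -> bool:
    cred = env.get("GOOGLE_APPLICATION_CREDENTIALS", "")
    return bool(cred) and os.path.isfile(cred)

print(bigquery_ready(dict(os.environ)))
```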

System tools (pre-installed)

These tools are typically pre-installed on most systems:
LLDB

Pre-installed: macOS (Xcode Command Line Tools)
License: Apache 2.0 (part of LLVM)
Usage: RAPTOR uses LLDB for crash analysis on macOS
Part of the operating system, not bundled

GDB

Pre-installed: Most Linux distributions
License: GPL v3
Usage: RAPTOR uses GDB for crash analysis on Linux
Install on macOS (if needed):
brew install gdb
Part of the operating system on Linux, not bundled

Binutils

Tools: nm, addr2line, objdump, file, strings (GNU Binutils)
Pre-installed: macOS and most Linux distributions
License: GPL v3
Usage: RAPTOR uses these for binary analysis
Part of the operating system, not bundled

Environment variables

LLM configuration

export ANTHROPIC_API_KEY=your-anthropic-key-here

BigQuery (for OSS forensics)

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
Only required for /oss-forensics command

LLM provider performance

Experimental benchmark for exploit generation:
Provider          Exploit Quality      Cost
Anthropic Claude  ✓ Compilable C code  ~$0.03/vuln
OpenAI GPT-4      ✓ Compilable C code  ~$0.03/vuln
Gemini 2.5        ✓ Compilable C code  ~$0.03/vuln
Ollama (local)    ✗ Often broken       FREE
Note: Exploit generation requires frontier models (Claude, GPT, or Gemini). Local models work for analysis but may produce non-compilable exploit code.

Performance tuning

Remote Ollama servers automatically use longer retry delays to account for network latency:
Server Type  Base Delay  Retry 1  Retry 2  Retry 3
Local        2.0s        2s       4s       8s
Remote       5.0s        5s       10s      20s
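The schedule above is plain exponential backoff: the base delay doubles on each successive retry. A sketch of the arithmetic:

```python
# Sketch of the retry schedule above: exponential backoff that doubles
# the base delay on each successive retry.
def retry_delays(base: float, retries: int = 3) -> list:
    return [base * (2 ** attempt) for attempt in range(retries)]

print(retry_delays(2.0))  # local:  [2.0, 4.0, 8.0]
print(retry_delays(5.0))  # remote: [5.0, 10.0, 20.0]
```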

License compliance

RAPTOR’s license

License: MIT
Copyright: Gadi Evron, Daniel Cuthbert, Thomas Dullien (Halvar Flake), and Michael Bargury
See: LICENSE file for full text

External tool licenses

RAPTOR does not bundle external tools. Users install them separately according to each tool’s license terms. Summary:
  • Semgrep (LGPL 2.1) - User installs
  • AFL++ (Apache 2.0) - User installs
  • CodeQL (GitHub Terms) - User installs
  • Python packages (various open source) - User installs via pip
  • System tools (GPL v3, Apache 2.0) - Pre-installed on OS
For commercial or restricted use:
  • Review Semgrep license (LGPL 2.1) for your use case
  • Review CodeQL terms (free for security research, restrictions apply)
  • GPL tools (GDB, binutils) are used as command-line tools, not linked libraries
Review each tool's license for yourself; the summary above is informational only.
RAPTOR’s MIT license applies only to RAPTOR’s code, not to external tools users install.

Troubleshooting

Common issues

RAPTOR installs tools without asking
Solution: Use the devcontainer, which bundles all tools, or review DEPENDENCIES.md before running RAPTOR to understand what will be installed.

CodeQL not found
Solution: Download the CodeQL CLI from https://github.com/github/codeql-cli-binaries and add it to your PATH.

AFL++ not found
Solution: Install AFL++:
# macOS
brew install afl++

# Ubuntu/Debian
sudo apt install afl++

Cannot connect to Ollama
Solution: Verify Ollama is running:
# Check if Ollama is running
curl $OLLAMA_HOST/api/tags

# Start Ollama (if local)
ollama serve
For remote servers, ensure OLLAMA_HOST is set correctly with protocol and port.

API key not recognized
Solution: Ensure environment variables are set:
# Verify API key is set
echo $ANTHROPIC_API_KEY
echo $OPENAI_API_KEY

# Set in current shell
export ANTHROPIC_API_KEY=your-key-here

# Or add to ~/.bashrc or ~/.zshrc
echo 'export ANTHROPIC_API_KEY=your-key-here' >> ~/.bashrc

rr debugger not available
Solution: rr is Linux-only (x86_64). On macOS, RAPTOR falls back to LLDB. Ensure you're running on a supported platform:
# Check architecture
uname -m

# Should output: x86_64

BigQuery queries fail
Solution: Set up Google Cloud credentials:
# Set credentials path
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json

# Verify file exists
ls -l $GOOGLE_APPLICATION_CREDENTIALS

Python import errors
Solution: Ensure PYTHONPATH is set correctly:
# Add RAPTOR to PYTHONPATH
export PYTHONPATH=/path/to/raptor:$PYTHONPATH

# Or run from RAPTOR root directory
cd /path/to/raptor
python3 raptor.py
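Equivalently, a checkout directory can be added to the import path from inside Python at runtime. A sketch; "/path/to/raptor" is a placeholder for your clone location:

```python
import sys

# Sketch: make the raptor checkout importable without setting PYTHONPATH.
# "/path/to/raptor" is a placeholder for your clone location.
raptor_root = "/path/to/raptor"
if raptor_root not in sys.path:
    sys.path.insert(0, raptor_root)  # same effect as PYTHONPATH, for this process only
print(raptor_root in sys.path)  # True
```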

Getting help

Next steps

Quick start

Get up and running with your first scan

Architecture

Learn about RAPTOR’s technical architecture

Commands

Explore available commands and capabilities

LLM Configuration

Configure LiteLLM and cost management
