Prerequisites

Before you begin, ensure you have:

1. Docker installed and running

Strix uses Docker containers to create isolated security testing environments. Verify Docker is running:
docker --version
If it is not installed, download it from docker.com.

2. An LLM API key

You need an API key from any supported provider:
  • OpenAI (recommended: GPT-5)
  • Anthropic (Claude Sonnet 4.6)
  • Google (Gemini 3 Pro Preview)
  • Strix Router — single API key for multiple providers with $10 free credit at models.strix.ai
Or use local models with Ollama or LMStudio
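As a hedged illustration, every provider is configured the same way as the OpenAI example in the installation steps below, via the `STRIX_LLM` and `LLM_API_KEY` environment variables. The specific model identifiers here are assumptions based on the `provider/model` pattern this guide uses and may differ from your provider's current names:

```shell
# Pick ONE provider. The "provider/model" strings below are illustrative
# guesses following the naming pattern used elsewhere in this guide.
export STRIX_LLM="anthropic/claude-sonnet-4-6"   # assumed Anthropic model ID
export LLM_API_KEY="your-anthropic-api-key"

# Local models via Ollama (assumed naming; no cloud API key needed):
# export STRIX_LLM="ollama/llama3"
```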

Installation and first scan

Get Strix running in three commands:

1. Install Strix

Use the official install script:
curl -sSL https://strix.ai/install | bash
This installs the strix command globally. Verify installation:
strix --version
You can also install via pip: pip install strix-agent

2. Configure your AI provider

Set your LLM model and API key:
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-openai-api-key"
Strix automatically saves your configuration to ~/.strix/cli-config.json, so you don’t need to re-enter it on every run.

3. Run your first scan

Start a security assessment of your application:
strix --target ./app-directory
The first run automatically pulls the Docker sandbox image, which may take a few minutes.

What happens during a scan

When you run Strix, here’s what happens:
  1. Environment setup - Strix validates your configuration and pulls the Docker image if needed
  2. LLM warm-up - Tests connection to your LLM provider
  3. Target analysis - Determines target type (code, web app, repository)
  4. Agent orchestration - Launches specialized security testing agents
  5. Vulnerability discovery - Agents explore, test, and validate findings
  6. Proof-of-concept creation - Generates PoCs for discovered vulnerabilities
  7. Report generation - Creates detailed reports with reproduction steps
Results are saved to strix_runs/<run-name> with:
  • JSON vulnerability reports
  • Proof-of-concept code
  • HTTP request/response logs
  • Agent execution traces
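To browse those artifacts after a scan, a small helper like the following can list the JSON vulnerability reports in a run directory. This is a hypothetical convenience function, not part of Strix itself; it only assumes the `strix_runs/<run-name>` layout described above:

```shell
# Hypothetical helper (not a Strix command): print every JSON report
# found inside a strix_runs/<run-name> directory.
summarize_run() {
  run_dir="$1"
  [ -d "$run_dir" ] || { echo "no such run: $run_dir" >&2; return 1; }
  find "$run_dir" -type f -name '*.json'
}
```

Usage: `summarize_run strix_runs/my-first-scan`.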

Understanding the output

Interactive mode (default)

By default, Strix runs in interactive mode with a text-based UI:
  • Live agent activity - Watch agents work in real-time
  • Vulnerability feed - See findings as they’re discovered
  • Logs and traces - Detailed execution information
  • Progress tracking - Visual status of scan progress
Press Ctrl+C to exit gracefully.

Non-interactive mode

For CI/CD and automation, use the -n flag:
strix -n --target https://your-app.com
This mode:
  • Prints vulnerability findings to stdout
  • Exits automatically when complete
  • Returns exit code 2 if vulnerabilities found
  • Returns exit code 0 if no vulnerabilities
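In a CI pipeline you can branch on those exit codes. The sketch below maps them to a verdict; the codes 0 and 2 come from the list above, while the function name and failure policy are our own choices:

```shell
# Map a Strix exit code to a CI verdict.
# Per the docs: 0 = no vulnerabilities, 2 = vulnerabilities found.
scan_verdict() {
  case "$1" in
    0) echo "clean" ;;
    2) echo "vulnerable" ;;
    *) echo "error" ;;
  esac
}

# In a pipeline you would run the scan and act on its status, e.g.:
# strix -n --target https://your-app.com
# status=$?
# [ "$(scan_verdict "$status")" = "clean" ] || exit 1
```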

Next steps

Now that you’ve run your first scan, explore these topics:

Installation methods

Learn about pip install, Docker options, and Python requirements

Basic usage

Understand target types and common testing patterns

Scan modes

Configure quick, standard, or deep scan modes

Custom instructions

Guide agents to focus on specific vulnerabilities or areas

LLM providers

Configure OpenAI, Anthropic, Google, or local models

CI/CD integration

Add Strix to your deployment pipeline

Common issues

If you see “Docker connection failed”, ensure Docker Desktop is running:
docker ps
If this fails, start Docker Desktop and try again.
Verify your API key is correct and has sufficient credits:
echo $LLM_API_KEY
echo $STRIX_LLM
Test your provider’s API directly or check their status page.
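A quick pre-flight check can catch missing configuration before a scan starts. This is a hypothetical helper, not a Strix feature; it only checks the two environment variables this guide configures:

```shell
# Hypothetical pre-flight check (not a Strix command): fail early if the
# required environment variables are missing or empty.
check_strix_env() {
  ok=0
  [ -n "$STRIX_LLM" ]   || { echo "STRIX_LLM is not set" >&2; ok=1; }
  [ -n "$LLM_API_KEY" ] || { echo "LLM_API_KEY is not set" >&2; ok=1; }
  return "$ok"
}
```

Run `check_strix_env` before invoking `strix`; a non-zero return means at least one variable is missing.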
The Docker image is ~2GB and downloads only on the first run; subsequent runs reuse the cached image and start immediately. If the download is interrupted, remove the partial image and retry:
docker rmi ghcr.io/usestrix/sandbox:latest
strix --target ./app
Need more help? Join our Discord community or check troubleshooting.
