Overview
The /test-workflows command runs RAPTOR’s comprehensive test suite to validate that all commands and user scenarios work correctly.
This command is a stub - it triggers the test suite execution but the full workflow testing implementation is still in development.
What It Tests
The test suite validates:
- Basic scan - Findings detection without exploit generation
- Full agentic workflow - Complete scan → exploit → patch pipeline
- Binary fuzzing - AFL++ integration and crash collection
- Manual crash validation - Crash analysis workflow
- Tool routing - Correct command execution paths
Usage
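The exact invocation below is an assumption (the original usage example was not preserved); in a RAPTOR session the command would typically be run as:

```
/test-workflows
```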
Test Output
Each test produces one of three results:
- ✓ PASS - Test completed successfully
- ✗ FAIL - Test failed with errors
- ⊘ SKIP - Test skipped (dependencies missing)
Summary Format
The command's exit code reflects the overall result:
- 0 - All tests passed
- 1 - One or more tests failed
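A wrapper script can branch on that exit code. In this sketch, `run_tests` is a hypothetical stand-in for the real test invocation:

```shell
#!/bin/sh
# Stand-in for the real test command; replace with the actual invocation.
run_tests() { true; }

if run_tests; then
  echo "all tests passed"         # exit code 0
else
  echo "one or more tests failed" # exit code 1
fi
```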
Use Cases
Validate Changes
Run after modifying RAPTOR code
Pre-Release Testing
Verify workflows before releases
Regression Detection
Catch breaking changes early
CI/CD Integration
Automate testing in pipelines
Duration
Typical test suite execution time: 2-3 minutes
Individual test timings:
- Basic scan: ~30 seconds
- Agentic workflow: ~60 seconds
- Binary fuzzing: ~45 seconds
- Crash validation: ~20 seconds
- Tool routing: ~10 seconds
Example Output
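The run below is illustrative only; the exact formatting and wording of RAPTOR's output are assumptions, not captured output:

```
RAPTOR workflow tests
  ✓ PASS  Basic scan
  ✓ PASS  Agentic workflow
  ⊘ SKIP  Binary fuzzing (AFL++ not installed)
  ✓ PASS  Crash validation
  ✓ PASS  Tool routing

4 passed, 0 failed, 1 skipped
```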
CI/CD Integration
Add to your continuous integration pipeline.
GitHub Actions
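A minimal workflow sketch; the job layout and the `raptor test-workflows` entry point are assumptions, not RAPTOR's documented CLI:

```yaml
# .github/workflows/test.yml (sketch)
name: raptor-tests
on: [push, pull_request]
jobs:
  test-workflows:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run RAPTOR workflow tests
        run: raptor test-workflows   # assumed entry point
```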
GitLab CI
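An equivalent GitLab CI sketch, under the same assumption about the invocation:

```yaml
# .gitlab-ci.yml (sketch)
test-workflows:
  stage: test
  script:
    - raptor test-workflows   # assumed entry point
```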
Troubleshooting
Tests Fail with Missing Tools
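A quick availability check can show which tools are absent before running the suite; the tool names here are assumptions:

```shell
#!/bin/sh
# Report which expected tools are on PATH (tool list is an assumption).
for tool in afl-fuzz python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```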
Ensure all dependencies are installed.
Tests Skip Due to Permissions
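One common convention (assumed here, not taken from RAPTOR itself) is to gate privileged tests on the effective user id:

```shell
#!/bin/sh
# Skip privileged tests when not running as root.
if [ "$(id -u)" -eq 0 ]; then
  echo "running privileged tests"
else
  echo "SKIP: privileged tests require root"
fi
```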
Some tests require elevated permissions.
Test Timeouts
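One way to relax a per-step limit is coreutils `timeout`; in this sketch `sleep 1` stands in for a real test command:

```shell
#!/bin/sh
# Allow up to 300 seconds for a slow step (coreutils `timeout`).
if timeout 300 sleep 1; then
  echo "completed within limit"
else
  echo "timed out"
fi
```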
Increase timeout limits for slow systems.
See Also
Testing Guide
Comprehensive testing documentation
Contributing
How to add new tests