RAPTOR generates production-ready security patches using LLM analysis combined with secure coding best practices. The system follows OWASP and CWE guidance to ensure comprehensive, maintainable fixes.
Patches are generated using a senior security engineer persona - focusing on defense-in-depth and production readiness, not just quick fixes.
```python
def generate_patch(self, vuln: VulnerabilityContext) -> bool:
    """Generate secure patch for vulnerability."""
    # Read full file for context
    file_path = vuln.get_full_file_path()
    with open(file_path) as f:
        full_file_content = f.read()

    prompt = f"""
You are a senior software security engineer creating a secure patch.

Vulnerability:
- Type: {vuln.rule_id}
- File: {vuln.file_path}:{vuln.start_line}
- Description: {vuln.message}

Analysis:
{json.dumps(vuln.analysis, indent=2)}

Vulnerable Code:
{vuln.full_code}

Full File Content:
{full_file_content[:5000]}

Create a SECURE PATCH that:
1. Completely fixes the vulnerability
2. Preserves all existing functionality
3. Follows the code's existing style and patterns
4. Includes clear comments explaining the fix
5. Adds input validation/sanitization where needed
6. Uses modern security best practices

Provide:
1. The complete fixed code
2. Clear explanation of what changed and why
3. Testing recommendations
"""

    response = self.llm.generate(
        prompt=prompt,
        temperature=0.3,  # Lower temperature for safer patches
    )

    # Save patch with metadata...
```
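The trailing comment elides the persistence step. As an illustration only, saving a patch alongside its provenance might look like the sketch below; the `PatchRecord` fields and `save_patch` helper are hypothetical, not RAPTOR's actual API:

```python
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class PatchRecord:
    """Hypothetical container for a generated patch and its provenance."""
    rule_id: str
    file_path: str
    start_line: int
    patch_text: str
    created_at: float

def save_patch(record: PatchRecord, out_dir: str = "patches") -> Path:
    """Write one JSON file per patch so reviewers can audit each fix independently."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    dest = out / f"{record.rule_id}_{record.start_line}.json"
    dest.write_text(json.dumps(asdict(record), indent=2))
    return dest

# Usage:
rec = PatchRecord("CWE-787", "src/server.c", 42, "...patched code...", time.time())
path = save_patch(rec)
```

Keeping the rule ID, location, and timestamp with the patch text makes the later review and deployment steps traceable back to the original finding.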
```c
void test_handle_request() {
    // Test normal input
    handle_request("normal input");

    // Test boundary condition
    char boundary[255];
    memset(boundary, 'A', 254);
    boundary[254] = '\0';
    handle_request(boundary);

    // Test overflow attempt
    char overflow[300];
    memset(overflow, 'A', 299);
    overflow[299] = '\0';
    handle_request(overflow);  // Should reject
}
```
**Integration Tests:**

- Send normal requests and verify functionality
- Send maximum-length inputs and verify acceptance
- Send oversized inputs and verify rejection
- Check logs for error messages

**Security Tests:**

- Fuzz with AFL or libFuzzer
- Run with AddressSanitizer
- Verify no regression in existing tests
Generated by RAPTOR Autonomous Security Agent. Review and test before applying to production.
## Quality Checklist

Before accepting a patch:

<Tabs>
  <Tab title="Security">
    <Check>Fixes vulnerability completely</Check>
    <Check>No new vulnerabilities introduced</Check>
    <Check>Follows defense-in-depth principles</Check>
    <Check>References security guidance (OWASP, CWE)</Check>
    <Check>Uses secure APIs where available</Check>
  </Tab>
  <Tab title="Functionality">
    <Check>Maintains existing behavior</Check>
    <Check>Handles all code paths</Check>
    <Check>Includes error handling</Check>
    <Check>Preserves performance characteristics</Check>
  </Tab>
  <Tab title="Code Quality">
    <Check>Follows project coding style</Check>
    <Check>Includes explanatory comments</Check>
    <Check>Uses meaningful variable names</Check>
    <Check>Avoids over-engineering</Check>
  </Tab>
  <Tab title="Testing">
    <Check>Includes test recommendations</Check>
    <Check>Covers normal cases</Check>
    <Check>Covers boundary conditions</Check>
    <Check>Covers attack scenarios</Check>
  </Tab>
</Tabs>

## Deployment Process

<Steps>
  <Step title="Review Patch">
    Carefully review the generated patch:
    - Verify it fixes the root cause
    - Check for edge cases
    - Ensure no functionality breakage
    - Validate security properties
  </Step>
  <Step title="Test Thoroughly">
    Test in a development environment. Apply the patch and run all tests, including security and fuzz tests, to verify the fix works correctly.
  </Step>
  <Step title="Security Review">
    Have the security team review:
    - Does it address the vulnerability?
    - Are there any bypasses?
    - Could it introduce new issues?
  </Step>
  <Step title="Deploy Safely">
    Use a staged rollout:
    1. Deploy to staging
    2. Monitor for errors
    3. Deploy to 5% of production
    4. Monitor metrics
    5. Full production rollout
  </Step>
</Steps>

## Best Practices

### Prefer Library Functions

Use well-tested security libraries instead of custom implementations. For example, use bcrypt for password hashing instead of implementing your own with SHA256.

### Validate at Boundaries

Validate inputs at system boundaries before processing them.
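A minimal boundary-validation sketch in Python; the request schema, field name, and limits here are illustrative assumptions, not part of RAPTOR:

```python
def validate_request(payload: dict) -> list[str]:
    """Validate an inbound request at the API boundary; return a list of errors."""
    errors = []
    name = payload.get("name")
    # Type check: reject anything that is not a string
    if not isinstance(name, str):
        errors.append("name must be a string")
    # Range check: enforce an explicit maximum length
    elif not (1 <= len(name) <= 254):
        errors.append("name must be 1-254 characters")
    # Format check: restrict to a known-safe character set
    elif not name.isascii() or not name.replace("_", "").isalnum():
        errors.append("name contains disallowed characters")
    return errors

# Usage:
assert validate_request({"name": "alice_1"}) == []
assert validate_request({"name": "A" * 300}) != []
```

Rejecting malformed input at the entry point keeps every downstream code path working with data that has already been checked for type, range, and format.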
Check types, ranges, and formats at API entry points.

### Document Security Decisions

Include comments explaining security rationale, especially for non-obvious choices like constant-time comparisons or specific algorithm selections.

### Test Edge Cases

Test boundary conditions including empty input, maximum-length input, NULL pointers, integer overflow conditions, and Unicode edge cases.

## See Also

<CardGroup cols={2}>
  <Card title="Vulnerability Analysis" icon="magnifying-glass" href="/analysis/vulnerability-analysis">
    LLM-powered security analysis
  </Card>
  <Card title="Exploit Generation" icon="code" href="/analysis/exploit-generation">
    Generate exploit PoCs
  </Card>
</CardGroup>