What a compliance report contains
Every report includes:

- Governance and determinism scores — letter grade (A–F) and numeric score (0–100)
- Per-article pass/fail status — EU AI Act Articles 9, 11, 12, and 14
- Findings — each finding includes the rule ID, severity, file location, article reference, and a fix snippet
- Compliance summary — total findings by severity (CRITICAL, HIGH, MEDIUM, LOW)
Generating reports
Run a scan
Run `drako scan` from your project root. Drako analyzes your Python source files using AST-based static analysis — no network connection required.
Review findings
The terminal report surfaces compliance findings alongside security and governance findings. Each COM rule shows the EU AI Act article it maps to and a ready-to-apply fix.
Compliance rules
COM-001 — No automatic logging
Severity: HIGH | EU AI Act: Article 12 (Record-keeping)
High-risk AI systems must keep logs automatically. Logs must be retained for at least 6 months unless other law requires longer retention.
What Drako checks: Scans Python source files for logging infrastructure patterns — `audit_log`, `audit_trail`, `with_compliance`, `drako`, `GovernanceMiddleware`, `ComplianceMiddleware`, `structlog`, and `logging.getLogger`.
Fails when: No logging infrastructure is detected in any Python source file.
Regulatory exposure: Fines up to €15M or 3% of worldwide annual revenue.
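Any one of the patterns COM-001 scans for is enough to satisfy the check. A minimal sketch using the standard library (the `audit_log` name and `logging.getLogger` call are two of the listed patterns; the surrounding function is illustrative, not a Drako API):

```python
import logging

# A named logger matches the logging.getLogger / audit_log patterns
# that COM-001 scans for.
audit_log = logging.getLogger("audit_trail")
logging.basicConfig(level=logging.INFO)

def record_decision(agent: str, action: str) -> None:
    """Emit an audit record for each agent decision."""
    audit_log.info("agent=%s action=%s", agent, action)

record_decision("pricing-agent", "quote_generated")
```

Structured alternatives such as `structlog` also match; what matters to the scanner is that some logging infrastructure exists in the source tree.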
COM-002 — No human oversight mechanism
Severity: HIGH | EU AI Act: Article 14 (Human oversight)
High-risk AI systems must be designed to allow effective human oversight. Humans must be able to intervene and override decisions.
What Drako checks: Scans Python source files for human oversight patterns — `human_in_the_loop`, `hitl`, `require_approval`, `human_approval`, `ask_human`, `manual_review`, `review_queue`, and `supervisor`.
Fails when: Agents exist in the project but no human oversight mechanism is detected.
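The names above are the scanner's detection patterns, not a prescribed API. As an illustrative sketch, a `require_approval` decorator could gate an agent action behind a named human reviewer (the approval transport, whether a queue, a UI, or chat, is up to you):

```python
from functools import wraps

def require_approval(func):
    """Illustrative human-in-the-loop gate (not a Drako API):
    refuse to run unless a human reviewer has approved the call."""
    @wraps(func)
    def wrapper(*args, approved_by=None, **kwargs):
        if approved_by is None:
            raise PermissionError(f"{func.__name__} requires human approval")
        return func(*args, **kwargs)
    return wrapper

@require_approval
def close_account(account_id: str) -> str:
    return f"closed {account_id}"

# Unapproved calls are refused; approved calls proceed.
print(close_account("acct-42", approved_by="alice"))
```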
COM-003 — No technical documentation
Severity: MEDIUM | EU AI Act: Article 11 (Technical documentation)
Before placing a high-risk AI system on the market, providers must draw up technical documentation demonstrating the system meets requirements.
What Drako checks: Looks for a non-empty `docs/` directory, a `README.md` referencing AI components, or an `ARCHITECTURE.md`.
COM-004 — No risk management documentation
Severity: MEDIUM | EU AI Act: Article 9 (Risk management system)
Providers of high-risk AI systems must implement a risk management system covering the entire lifecycle.
What Drako checks: Looks for `RISK_ASSESSMENT.md`, `docs/risk-assessment.md`, `docs/risks.md`, and config or doc content referencing `risk_assessment`, `risk_management`, `risk_level`, or `threat_model`.
Create RISK_ASSESSMENT.md covering:
- Known and foreseeable risks (misuse, technical failures, safety)
- Risk estimation and evaluation
- Risk mitigation measures
- Residual risk after mitigation
- Agent-specific risks (tool access, data handling, autonomous decisions)
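A starting skeleton for `RISK_ASSESSMENT.md`, following the sections above (the heading wording is illustrative, not a format Drako mandates):

```markdown
# Risk Assessment

## Known and foreseeable risks
- Misuse, technical failures, safety impacts

## Risk estimation and evaluation
- Likelihood and impact for each risk

## Risk mitigation measures
- Controls mapped to each risk

## Residual risk after mitigation
- Accepted residual risk and sign-off

## Agent-specific risks
- Tool access, data handling, autonomous decisions
```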
COM-005 — No agent BOM / inventory
Severity: MEDIUM | Reference: OWASP LLM Top 10
Without a component inventory, you cannot track which AI models, tools, and permissions your agents use — making vulnerability response impossible.
What Drako checks: Looks for `.drako.yaml`, `agent-bom.json`, or `AGENT_BOM.md`.
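An `agent-bom.json` might look like the following; the field names are an illustrative sketch, not a schema Drako mandates:

```json
{
  "agents": [
    {
      "name": "support-agent",
      "model": "gpt-4o",
      "tools": ["search_kb", "send_email"],
      "permissions": ["read:tickets", "write:replies"]
    }
  ]
}
```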
COM-006 — No HITL for high-risk actions
Severity: CRITICAL | EU AI Act: Article 14 (Human oversight)
Humans must retain meaningful control over high-risk AI decisions. Autonomous execution of destructive actions without a checkpoint is a direct Art. 14 violation.
What Drako checks: Identifies tools with side-effect names (`delete`, `write`, `send`, `pay`, `execute`, `deploy`, `publish`, etc.) and checks whether HITL is configured for them.
Regulatory exposure: Liability for autonomous AI harm. Enforcement actions under EU AI Act.
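The heuristic behind COM-006 can be sketched as follows (an illustrative reimplementation, assuming substring matching on tool names, not Drako's actual code):

```python
# Flag tools whose names contain a side-effect verb but have no
# HITL checkpoint configured.
SIDE_EFFECT_VERBS = {"delete", "write", "send", "pay", "execute", "deploy", "publish"}

def needs_hitl(tool_name: str) -> bool:
    """True if the tool name suggests a destructive side effect."""
    name = tool_name.lower()
    return any(verb in name for verb in SIDE_EFFECT_VERBS)

def audit_tools(tools: dict) -> list:
    """tools maps tool name -> whether HITL is configured.
    Returns the names that lack a required Article 14 checkpoint."""
    return [t for t, has_hitl in tools.items() if needs_hitl(t) and not has_hitl]

violations = audit_tools({"delete_user": False, "search_docs": False, "send_invoice": True})
print(violations)  # → ['delete_user']
```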
Compliance scoring
Compliance findings contribute to the overall governance score:

| Severity | Score deduction |
|---|---|
| CRITICAL | 20 points |
| HIGH | 10 points |
| MEDIUM | 5 points |
| LOW | 2 points |
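Applying the deductions above, a governance score can be computed as in this sketch (the letter-grade cutoffs are an assumption for illustration; the source only states A–F over 0–100):

```python
DEDUCTIONS = {"CRITICAL": 20, "HIGH": 10, "MEDIUM": 5, "LOW": 2}

def governance_score(findings: list) -> int:
    """findings is a list of severity labels; the score floors at 0."""
    return max(0, 100 - sum(DEDUCTIONS[s] for s in findings))

def letter_grade(score: int) -> str:
    # Assumed cutoffs, for illustration only.
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

score = governance_score(["CRITICAL", "HIGH", "MEDIUM"])  # 100 - 35 = 65
print(score, letter_grade(score))  # → 65 D
```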
SARIF format and GitHub Code Scanning
SARIF output is compatible with GitHub Code Scanning. Upload the results file to get inline PR annotations on the exact lines where compliance issues are found. Findings already present in a baseline carry `"baselineState": "unchanged"` in SARIF output — they won't block CI but are still visible in Code Scanning.
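A minimal GitHub Actions upload step, assuming a prior scan step has written `drako.sarif` (the exact drako flag for SARIF output is not shown here):

```yaml
# Requires the workflow to have `security-events: write` permission.
- name: Upload Drako SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: drako.sarif
```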
CI/CD integration
Gate deployments on compliance status:
Exporting for auditors and regulators
The JSON output includes a `compliance` field with per-article status that auditors and regulators can read directly:
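The exact schema is not shown here, so the shape below is an assumption; a deployment gate over such a payload might look like:

```python
# Assumed shape of the `compliance` field; the real schema may differ.
def failing_articles(report: dict) -> list:
    """Return the EU AI Act articles whose status is not 'pass'."""
    articles = report["compliance"]["articles"]
    return [a for a, status in articles.items() if status != "pass"]

report = {
    "compliance": {
        "articles": {
            "Article 9": "pass",
            "Article 11": "pass",
            "Article 12": "fail",
            "Article 14": "pass",
        }
    }
}

# In CI you would exit non-zero when this list is non-empty.
print(failing_articles(report))  # → ['Article 12']
```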