Timeline
High-risk AI system rules take effect August 2, 2026. Providers placing high-risk AI systems on the EU market must comply by that date or face fines of up to €15M or 3% of worldwide annual revenue.

Coverage
| Article | Requirement | How Drako covers it |
|---|---|---|
| Art. 9 | Risk management | 97 scan rules, ODD enforcement, magnitude limits |
| Art. 11 | Technical documentation | Agent BOM, compliance reports, context versioning |
| Art. 12 | Record-keeping | Cryptographic audit trail with policy snapshot references |
| Art. 14 | Human oversight | HITL checkpoints, programmable hooks, escalation policies |
Article 9 — Risk management
High-risk AI systems must implement a risk management system covering the entire lifecycle. Drako addresses this through:

- 97 scan rules — deterministic static analysis across security, governance, compliance, and determinism categories
- ODD enforcement — lock each agent to its permitted tools, APIs, data sources, and time windows
- Magnitude limits — pre-action guardrails: spend caps, data volume limits, blast radius constraints
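The magnitude-limit idea can be sketched in a few lines. This is an illustrative pre-action guardrail (a hypothetical `SpendCapGuardrail`, not Drako's actual API): the check runs before the action executes, so an over-cap action is never attempted.

```python
# Illustrative pre-action guardrail: block any action whose projected
# spend would push the running total past a configured cap.
class SpendCapGuardrail:
    def __init__(self, cap_eur: float):
        self.cap_eur = cap_eur
        self.spent_eur = 0.0

    def check(self, projected_cost_eur: float) -> bool:
        """Return True if the action stays within the cap."""
        return self.spent_eur + projected_cost_eur <= self.cap_eur

    def record(self, cost_eur: float) -> None:
        """Record spend after an approved action completes."""
        self.spent_eur += cost_eur

guard = SpendCapGuardrail(cap_eur=100.0)
assert guard.check(60.0)      # within cap: allowed
guard.record(60.0)
assert not guard.check(50.0)  # 60 + 50 > 100: blocked before execution
```

The same shape applies to data-volume and blast-radius limits: a cheap deterministic predicate evaluated before, never after, the side effect.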
COM-004 checks whether your project includes risk assessment documentation. See compliance reports for the full list of compliance rules.
Article 11 — Technical documentation
Providers must draw up technical documentation before placing a high-risk AI system on the market. Drako helps you produce and maintain this documentation:

- Agent BOM (`drako bom .`) — pure AST inventory of agents, tools, models, prompts, permissions, MCP servers, and framework versions
- Compliance reports — structured gap reports mapped to regulatory articles
- Context versioning — every config push creates an immutable SHA-256 snapshot; audit logs reference the exact policy version active at each action
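Content-addressed snapshots of this kind are easy to sketch. This is illustrative only (`snapshot_id` is a hypothetical helper, not Drako's API): hashing a canonical serialization means any policy change, however small, yields a new snapshot identifier that audit entries can reference.

```python
import hashlib
import json

def snapshot_id(config: dict) -> str:
    """Content-addressed snapshot: SHA-256 of a canonical JSON serialization."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

v1 = snapshot_id({"retention_days": 3650, "hitl": True})
v2 = snapshot_id({"retention_days": 3650, "hitl": False})
assert v1 != v2        # any policy change produces a new snapshot id
assert len(v1) == 64   # hex-encoded SHA-256
```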
Article 12 — Record-keeping
High-risk AI systems must keep logs automatically, with retention of at least six months. Drako’s audit trail:

- Logs every agent action, decision, and tool call automatically
- SHA-256 hash chain with Ed25519 signatures — tamper-evident by construction
- Each entry references the policy snapshot active at the time of the action
- Configurable retention (the `eu-ai-act` template sets 10 years / 3,650 days)
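The hash-chain construction behind tamper evidence can be sketched as follows. This illustrative version covers only the SHA-256 chaining; it omits the Ed25519 signatures that Drako layers on top:

```python
import hashlib
import json

def append_entry(chain: list, action: dict) -> dict:
    """Append an entry whose hash covers the action and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to history breaks the chain."""
    prev = "0" * 64
    for e in chain:
        body = {"action": e["action"], "prev": e["prev"]}
        if e["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, {"tool": "send_email", "decision": "approved"})
append_entry(log, {"tool": "pay_invoice", "decision": "escalated"})
assert verify(log)
log[0]["action"]["decision"] = "rejected"  # tamper with history
assert not verify(log)                     # the chain breaks
```

Because each hash covers the previous one, rewriting any past entry invalidates every entry after it; signatures additionally prevent an attacker from simply recomputing the chain.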
Article 14 — Human oversight
High-risk AI systems must allow effective human oversight and intervention. Drako provides:

- HITL checkpoints — agents pause on high-risk actions (write, execute, payment tools) and escalate to a human supervisor
- Programmable hooks — custom validation scripts at `pre_action`, `post_action`, `on_error`, and `on_session_end`
- Escalation policies — configurable timeout actions (`reject` by default, so unanswered requests fail safe)
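A fail-safe escalation policy of this kind might be sketched as below (illustrative only, not Drako's API): the checkpoint blocks waiting for a human decision, and an unanswered request times out to `reject` rather than proceeding.

```python
import queue

def hitl_checkpoint(action: str, approvals: "queue.Queue[bool]",
                    timeout_s: float) -> str:
    """Pause a high-risk action until a human responds; fail safe on timeout."""
    try:
        approved = approvals.get(timeout=timeout_s)
    except queue.Empty:
        return "reject"  # unanswered requests fail safe
    return "approve" if approved else "reject"

approvals: "queue.Queue[bool]" = queue.Queue()
# No supervisor answers within the window -> default reject.
assert hitl_checkpoint("pay_invoice", approvals, timeout_s=0.1) == "reject"
# Supervisor approves before the deadline -> action proceeds.
approvals.put(True)
assert hitl_checkpoint("pay_invoice", approvals, timeout_s=0.1) == "approve"
```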
EU AI Act template
The `eu-ai-act` template pre-configures all four Article requirements in a single command, generating a `.drako.yaml` with:
Compliance scan rules
Drako’s `COM` rules map directly to EU AI Act articles and fire during every `drako scan`:
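As a rough illustration of how a pattern-based rule of this kind can work (a sketch, not Drako's implementation): scan the project's source text for required identifiers and fail when none are present.

```python
# Identifiers a COM-001-style rule looks for (subset, per the docs below).
LOGGING_PATTERNS = ("audit_log", "audit_trail", "with_compliance",
                    "GovernanceMiddleware", "structlog")

def rule_passes(sources: list[str]) -> bool:
    """Pass if any source file contains any known logging pattern."""
    return any(p in src for src in sources for p in LOGGING_PATTERNS)

assert rule_passes(["logger = structlog.get_logger()"])
assert not rule_passes(["print('hello')"])
```

Because the check is plain substring matching over source text, it is deterministic: the same project always produces the same pass/fail result.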
COM-001 — No automatic logging (Art. 12)
Severity: HIGH

Checks for logging infrastructure patterns in Python source files: `audit_log`, `audit_trail`, `with_compliance`, `drako`, `GovernanceMiddleware`, `structlog`, and others. Fails when no logging infrastructure is detected in any Python source file.

Regulatory exposure: Fines up to €15M or 3% of worldwide annual revenue.

Fix:
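One minimal pattern that a check like this would detect — a sketch using only the standard library, not a prescribed fix — is a dedicated `audit_log` channel that records every agent action as structured JSON:

```python
import json
import logging

# Dedicated audit channel; in a real project, ship these records to
# durable, tamper-evident storage rather than the default handler.
audit_logger = logging.getLogger("audit_trail")
audit_logger.setLevel(logging.INFO)

def audit_log(agent: str, tool: str, decision: str) -> str:
    """Record one agent action as a structured JSON audit entry."""
    record = json.dumps({"agent": agent, "tool": tool, "decision": decision})
    audit_logger.info(record)
    return record

entry = audit_log("billing-agent", "pay_invoice", "escalated")
assert "pay_invoice" in entry
```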
COM-002 — No human oversight mechanism (Art. 14)
Severity: HIGH

Checks for human oversight patterns: `human_in_the_loop`, `hitl`, `require_approval`, `human_approval`, `ask_human`, `manual_review`, `review_queue`, and others. Fails when agents exist in the project but no human oversight mechanism is detected.

Regulatory exposure: Fines up to €15M. Enforcement action if an AI system causes harm without human oversight.

Fix:
COM-003 — No technical documentation (Art. 11)
Severity: MEDIUM

Checks for documentation indicators: a non-empty `docs/` directory, a `README.md` referencing AI components, or an `ARCHITECTURE.md` file.

Fix: Create a `docs/` directory with at minimum:
Run `drako bom .` to generate the agent inventory section automatically.
COM-004 — No risk management documentation (Art. 9)
Severity: MEDIUM

Checks for risk assessment files: `RISK_ASSESSMENT.md`, `docs/risk-assessment.md`, `docs/risks.md`, and others. Also checks config content for `risk_assessment`, `risk_management`, `risk_level`, and `threat_model`.

Fix: Create `RISK_ASSESSMENT.md` covering identification of known and foreseeable risks, risk estimation, mitigation measures, residual risk evaluation, and agent-specific risks (tool access, data handling, autonomous decisions).
COM-005 — No agent BOM / inventory
Severity: MEDIUM

Checks for BOM files: `.drako.yaml`, `agent-bom.json`, or `AGENT_BOM.md`.

Fix:
COM-006 — No HITL for high-risk actions (Art. 14)
Severity: CRITICAL

Identifies tools with side-effect names (`delete`, `write`, `send`, `pay`, `execute`, `deploy`, etc.) and checks whether a HITL checkpoint is configured for them. Fails when side-effect tools exist but no HITL configuration is found.

Regulatory exposure: Liability for autonomous AI harm. Direct Art. 14 violation.

Fix:

Generating compliance reports
Run `drako scan` to get compliance gap reports mapped to EU AI Act articles. The output includes a `compliance` field with per-article pass/fail status:
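Assuming the report is JSON with a shape like the following (hypothetical — the exact schema is not shown here), a CI gate on per-article status could look like:

```python
# Hypothetical report shape; the real `drako scan` output schema may differ.
report = {
    "compliance": {
        "Art. 9": "pass",
        "Art. 11": "pass",
        "Art. 12": "fail",
        "Art. 14": "pass",
    }
}

# Collect failing articles; a CI job could exit non-zero if any remain.
failing = [article for article, status in report["compliance"].items()
           if status != "pass"]
assert failing == ["Art. 12"]
```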