The EU AI Act imposes mandatory requirements on providers of high-risk AI systems. Drako’s scan rules and runtime enforcement directly address Articles 9, 11, 12, and 14 — the four articles with the most concrete technical obligations.
Drako covers the technical requirements of the EU AI Act but does not constitute legal advice. Consult qualified legal counsel to determine whether your system qualifies as high-risk and what obligations apply to your organization.

Timeline

High-risk AI system rules take effect August 2, 2026. Providers placing high-risk AI systems on the EU market must comply by that date or face fines of up to €15M or 3% of worldwide annual revenue, whichever is higher.
Use drako init --template eu-ai-act to generate a .drako.yaml pre-configured for all four Article requirements. This is the fastest way to reach a compliant baseline.

Coverage

Article | Requirement             | How Drako covers it
Art. 9  | Risk management         | 97 scan rules, ODD enforcement, magnitude limits
Art. 11 | Technical documentation | Agent BOM, compliance reports, context versioning
Art. 12 | Record-keeping          | Cryptographic audit trail with policy snapshot references
Art. 14 | Human oversight         | HITL checkpoints, programmable hooks, escalation policies

Article 9 — Risk management

High-risk AI systems must implement a risk management system covering the entire lifecycle. Drako addresses this through:
  • 97 scan rules — deterministic static analysis across security, governance, compliance, and determinism categories
  • ODD enforcement — lock each agent to its permitted tools, APIs, data sources, and time windows
  • Magnitude limits — pre-action guardrails: spend caps, data volume limits, blast radius constraints
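Conceptually, a magnitude limit is a deterministic pre-action check that runs before the agent acts. The sketch below is illustrative Python only; the names (`ActionGuard`, `spend_cap_usd`, `max_rows`) are hypothetical and not Drako's API:

```python
from dataclasses import dataclass

@dataclass
class ActionGuard:
    """Illustrative pre-action guardrail: reject any action that
    exceeds a hard spend cap or data-volume limit."""
    spend_cap_usd: float
    max_rows: int

    def allows(self, spend_usd: float, rows_touched: int) -> bool:
        # Deterministic check: no model involvement, so the outcome
        # is reproducible for audit purposes.
        return spend_usd <= self.spend_cap_usd and rows_touched <= self.max_rows

guard = ActionGuard(spend_cap_usd=100.0, max_rows=10_000)
print(guard.allows(spend_usd=25.0, rows_touched=500))    # within limits: True
print(guard.allows(spend_usd=250.0, rows_touched=500))   # spend cap exceeded: False
```

Because the check is pure and deterministic, the same inputs always produce the same verdict, which is what makes it auditable.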
The scan rule COM-004 checks whether your project includes risk assessment documentation. See compliance reports for the full list of compliance rules.

Article 11 — Technical documentation

Providers must draw up technical documentation before placing a high-risk AI system on the market. Drako helps you produce and maintain this documentation:
  • Agent BOM (drako bom .) — pure AST inventory of agents, tools, models, prompts, permissions, MCP servers, and framework versions
  • Compliance reports — structured gap reports mapped to regulatory articles
  • Context versioning — every config push creates an immutable SHA-256 snapshot; audit logs reference the exact policy version active at each action
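The versioning model can be pictured as content-addressed snapshots: hash the exact config bytes, store the snapshot immutably under that hash, and record the hash on every audit entry. A minimal conceptual sketch (not Drako's implementation):

```python
import hashlib

def snapshot_id(config_bytes: bytes) -> str:
    """Content-addressed ID: the SHA-256 of the exact config bytes,
    so any change to the config yields a new, distinct ID."""
    return hashlib.sha256(config_bytes).hexdigest()

store: dict[str, bytes] = {}  # illustrative immutable snapshot store

def push(config_bytes: bytes) -> str:
    sid = snapshot_id(config_bytes)
    store.setdefault(sid, config_bytes)  # never overwrite an existing snapshot
    return sid

v1 = push(b"policies:\n  hitl:\n    mode: enforce\n")
v2 = push(b"policies:\n  hitl:\n    mode: monitor\n")
print(v1 != v2)  # any edit produces a new snapshot ID: True
```

Audit entries that reference `v1` keep pointing at the exact bytes that were active, no matter how the config evolves afterward.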

Article 12 — Record-keeping

High-risk AI systems must keep logs automatically, with retention of at least 6 months. Drako’s audit trail:
  • Logs every agent action, decision, and tool call automatically
  • SHA-256 hash chain with Ed25519 signatures — tamper-evident by construction
  • Each entry references the policy snapshot active at the time of the action
  • Configurable retention (the eu-ai-act template sets 10 years / 3,650 days)
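The tamper-evidence property comes from chaining: each entry's hash covers the previous entry's hash, so altering any past record invalidates every later one. An illustrative SHA-256 chain in Python (signatures omitted for brevity; per the list above, Drako additionally signs entries with Ed25519):

```python
import hashlib
import json

def append_entry(chain: list[dict], action: str, policy_snapshot: str) -> None:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "policy_snapshot": policy_snapshot, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edit to any entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("action", "policy_snapshot", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, "tool_call:send_email", policy_snapshot="snapshot-v1")
append_entry(chain, "tool_call:db_write", policy_snapshot="snapshot-v1")
print(verify(chain))           # True
chain[0]["action"] = "edited"  # tamper with history
print(verify(chain))           # False
```

Note how each entry also carries its `policy_snapshot` reference, matching the record-keeping behavior described above.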

Article 14 — Human oversight

High-risk AI systems must allow effective human oversight and intervention. Drako provides:
  • HITL checkpoints — agents pause on high-risk actions (write, execute, payment tools) and escalate to a human supervisor
  • Programmable hooks — custom validation scripts at pre_action, post_action, on_error, and on_session_end
  • Escalation policies — configurable timeout actions (reject by default, so unanswered requests fail safe)

EU AI Act template

The eu-ai-act template pre-configures all four Article requirements in a single command:
drako init --template eu-ai-act
This generates a .drako.yaml with:
# Auto-generated by drako init --template eu-ai-act
extends: eu-ai-act

policies:
  audit:
    cryptographic: true
    retention_days: 3650        # Art. 12 — 10-year retention
  hitl:
    mode: enforce
    triggers:
      tool_types: [write, execute, payment]
    timeout_action: reject      # Art. 14 — fail safe
  dlp:
    mode: enforce               # Art. 9 — risk management
  odd:
    enforcement_mode: enforce   # Art. 9 — operational boundaries

Compliance scan rules

Drako’s COM rules map directly to EU AI Act articles and fire during every drako scan:
Severity: HIGH
Checks for logging infrastructure patterns in Python source files: audit_log, audit_trail, with_compliance, drako, GovernanceMiddleware, structlog, and others.
Fails when no logging infrastructure is detected in any Python source file.
Regulatory exposure: Fines up to €15M or 3% of worldwide annual revenue.
Fix:
from drako import with_compliance

# Drako middleware provides EU AI Act compliant audit logging automatically.
crew = with_compliance(my_crew)
Severity: HIGH
Checks for human oversight patterns: human_in_the_loop, hitl, require_approval, human_approval, ask_human, manual_review, review_queue, and others.
Fails when agents exist in the project but no human oversight mechanism is detected.
Regulatory exposure: Fines up to €15M. Enforcement action if an AI system causes harm without human oversight.
Fix:
# .drako.yaml
policies:
  hitl:
    mode: enforce
    triggers:
      tool_types: [write, execute, payment]
Severity: MEDIUM
Checks for documentation indicators: a non-empty docs/ directory, README.md referencing AI components, or an ARCHITECTURE.md file.
Fix: Create a docs/ directory with at minimum:
mkdir -p docs
# docs/architecture.md    — System design and component overview
# docs/agents.md          — Agent inventory, capabilities, and limitations
# docs/risk-assessment.md — Risk analysis (required by Art. 9)
Run drako bom . to generate the agent inventory section automatically.
Severity: MEDIUM
Checks for risk assessment files: RISK_ASSESSMENT.md, docs/risk-assessment.md, docs/risks.md, and others. Also checks config content for risk_assessment, risk_management, risk_level, and threat_model.
Fix: Create RISK_ASSESSMENT.md covering identification of known and foreseeable risks, risk estimation, mitigation measures, residual risk evaluation, and agent-specific risks (tool access, data handling, autonomous decisions).
Severity: MEDIUM
Checks for BOM files: .drako.yaml, agent-bom.json, or AGENT_BOM.md.
Fix:
pip install drako
drako init          # creates .drako.yaml with agent inventory
drako bom .         # generate a standalone BOM in text, JSON, or Markdown
Severity: CRITICAL
Identifies tools with side-effect names (delete, write, send, pay, execute, deploy, etc.) and checks whether a HITL checkpoint is configured for them.
Fails when side-effect tools exist but no HITL configuration is found.
Regulatory exposure: Liability for autonomous AI harm. Direct Art. 14 violation.
Fix:
# .drako.yaml
policies:
  hitl:
    mode: enforce
    triggers:
      tool_types:
        - write
        - execute
        - payment
      trust_score_below: 60
      spend_above_usd: 100.00
    approval_timeout_minutes: 30
    timeout_action: reject
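One plausible reading of the trigger block above is OR semantics: any matching condition pauses the agent for approval. The sketch below illustrates that reading in plain Python; it is an assumption for clarity, not Drako's documented evaluation logic:

```python
def needs_approval(tool_type: str, trust_score: int, spend_usd: float,
                   tool_types: tuple[str, ...] = ("write", "execute", "payment"),
                   trust_score_below: int = 60,
                   spend_above_usd: float = 100.00) -> bool:
    """Return True if any configured HITL trigger fires (assumed OR semantics)."""
    return (
        tool_type in tool_types          # side-effect tool category
        or trust_score < trust_score_below   # low-trust agent
        or spend_usd > spend_above_usd       # large spend
    )

print(needs_approval("read", trust_score=80, spend_usd=5.0))     # no trigger: False
print(needs_approval("payment", trust_score=80, spend_usd=5.0))  # tool type matches: True
print(needs_approval("read", trust_score=40, spend_usd=5.0))     # low trust score: True
```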

Generating compliance reports

Run drako scan to get compliance gap reports mapped to EU AI Act articles:
drako scan .                       # terminal output with compliance findings
drako scan . --format json         # machine-readable with per-article status
drako scan . --format sarif        # GitHub Code Scanning compatible
drako scan . --details             # compliance summary with fix snippets
The JSON output includes a compliance field with per-article pass/fail status:
{
  "score": 72,
  "grade": "C",
  "compliance": {
    "eu_ai_act": {
      "art_9": "FAIL",
      "art_11": "PASS",
      "art_12": "FAIL",
      "art_14": "FAIL"
    }
  },
  "findings": [...]
}
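One way to consume this report in CI, assuming the JSON shape above, is to fail the pipeline whenever any article reports FAIL:

```python
import json

# Sample report matching the drako scan --format json output shown above.
report = json.loads("""{
  "score": 72,
  "grade": "C",
  "compliance": {"eu_ai_act": {"art_9": "FAIL", "art_11": "PASS",
                               "art_12": "FAIL", "art_14": "FAIL"}}
}""")

failing = [article for article, status in report["compliance"]["eu_ai_act"].items()
           if status == "FAIL"]
if failing:
    print(f"EU AI Act gaps: {', '.join(sorted(failing))}")
    # In a real CI job, exit non-zero here to block the merge.
```

The same gate works on the SARIF output via GitHub Code Scanning, but parsing the JSON directly keeps the check portable across CI systems.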
See compliance reports for full details on report formats and CI/CD integration.
