Human-in-the-Loop (HITL) pauses agent execution before a high-risk action runs and escalates it to a human supervisor. The agent waits for a response — approve or reject — before continuing or stopping. This supports compliance with EU AI Act Article 14, which requires meaningful human oversight of high-risk AI systems.

How it works

1. Agent attempts a high-risk action. A tool call matches one or more configured HITL triggers, for example a payment tool type or a spend threshold being crossed.
2. Agent is paused. The enforcement pipeline holds the action and does not forward it for execution. The session state is preserved.
3. Human receives a notification. Drako sends a notification via your configured channel (Slack webhook or email) with the agent ID, tool name, and context.
4. Human approves or rejects. The human responds via the notification or the dashboard. If no response arrives within the timeout window, the configured timeout_action applies.
5. Agent continues or stops. On approval, execution resumes. On rejection, the action is blocked and a PolicyViolationError is raised.
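The five steps above can be sketched as a minimal enforcement loop. This is an illustrative standalone sketch, not Drako's actual internals: the function names, callback signatures, and the local PolicyViolationError class are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str
    tool_type: str


class PolicyViolationError(Exception):
    """Local stand-in for the error raised on rejection (see step 5)."""


def enforce(call, triggers_fire, ask_human, execute):
    """Minimal HITL loop: pause on a trigger, escalate, then resume or block."""
    if triggers_fire(call):              # step 1: action matches a trigger
        decision = ask_human(call)       # steps 2-4: pause, notify, await decision
        if decision != "approve":        # rejection (or timeout with reject action)
            raise PolicyViolationError(call.tool)
    return execute(call)                 # step 5: resume on approval


# Stub callbacks standing in for the real pipeline:
result = enforce(
    ToolCall("send_wire_transfer", "payment"),
    triggers_fire=lambda c: c.tool_type == "payment",
    ask_human=lambda c: "approve",
    execute=lambda c: f"ran {c.tool}",
)
```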

Configuration

policies:
  hitl:
    mode: enforce                  # audit | enforce | off
    triggers:
      tool_types: [write, execute, payment]
      tools: [delete_database, send_wire_transfer]
      trust_score_below: 60
      spend_above_usd: 100.00
      records_above: 1000
      first_time_tool: false
      first_time_action: false
    notification:
      webhook_url: https://hooks.slack.com/services/...
      email: [email protected]
    approval_timeout_minutes: 30
    timeout_action: reject         # reject | allow

Triggers

Any trigger condition being true causes HITL to activate. Multiple triggers combine with OR logic.
| Trigger | Type | Description |
| --- | --- | --- |
| tool_types | list[string] | Activate for any tool of these types (write, execute, payment) |
| tools | list[string] | Activate for specific named tools |
| trust_score_below | float | Activate when the agent's EigenTrust score drops below this threshold |
| spend_above_usd | float | Activate when cumulative session spend exceeds this amount |
| records_above | int | Activate when a tool would access more than N records |
| first_time_tool | bool | Activate on first-ever use of any tool |
| first_time_action | bool | Activate on the first action in a new session |
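The OR combination of triggers can be sketched as a plain function over the configured `triggers:` block. This is illustrative only; the `hitl_required` function and the per-call `ctx` fields are assumptions, not Drako's API:

```python
def hitl_required(cfg: dict, ctx: dict) -> bool:
    """Return True if any configured trigger fires (OR logic).

    `cfg` mirrors the YAML `triggers:` block; `ctx` is hypothetical
    per-call state. An absent trigger key never fires.
    """
    checks = [
        ctx["tool_type"] in cfg.get("tool_types", []),
        ctx["tool"] in cfg.get("tools", []),
        ctx["trust_score"] < cfg.get("trust_score_below", float("-inf")),
        ctx["session_spend_usd"] > cfg.get("spend_above_usd", float("inf")),
        ctx["records"] > cfg.get("records_above", float("inf")),
        cfg.get("first_time_tool", False) and ctx["is_first_tool_use"],
        cfg.get("first_time_action", False) and ctx["is_first_action"],
    ]
    return any(checks)


cfg = {"tool_types": ["payment"], "spend_above_usd": 100.0}
ctx = {
    "tool": "send_email", "tool_type": "write", "trust_score": 80,
    "session_spend_usd": 250.0, "records": 10,
    "is_first_tool_use": False, "is_first_action": False,
}
# Spend exceeds 100 USD, so HITL activates even though the tool type does not match.
```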

Notification channels

notification:
  webhook_url: https://hooks.slack.com/services/T.../B.../...
The webhook receives a JSON payload with the agent ID, tool name, trigger reason, and an approval link.
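The exact payload schema is not shown here; an illustrative payload containing the fields named above might look like the following. Every field name and value is an assumption for illustration, not Drako's documented schema:

```python
import json

# Hypothetical payload: field names and values are illustrative only.
payload = {
    "agent_id": "agent-7f3a",
    "tool": "send_wire_transfer",
    "trigger": "spend_above_usd",
    "approval_url": "https://dashboard.example.com/approvals/123",
}
body = json.dumps(payload)
```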

Timeout behavior

| timeout_action | Behavior | When to use |
| --- | --- | --- |
| reject | Blocks the action if no human responds within the timeout window. Safe default. | Production systems, financial operations, any compliance-sensitive context |
| allow | Allows the action if no human responds. | Low-stakes actions where availability matters more than strict oversight |
The default timeout_action is reject. Changing it to allow means an unattended agent will proceed without human approval if your on-call team is unavailable.
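The timeout rule reduces to a small decision function; a standalone sketch (the `resolve` function is hypothetical, but it mirrors the documented default of reject on no response):

```python
from typing import Optional


def resolve(decision: Optional[str], timeout_action: str = "reject") -> str:
    """Map a human response (or None when the timeout window elapses)
    to a final decision, defaulting to reject as documented."""
    if decision is None:  # no response within approval_timeout_minutes
        return "allow" if timeout_action == "allow" else "reject"
    return decision
```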

EU AI Act Article 14 compliance

Article 14 of the EU AI Act requires that high-risk AI systems be designed to allow human oversight and, where appropriate, human intervention. Drako’s HITL implementation covers the Article 14 requirements:
  • Oversight — every high-risk action is surfaced to a human before execution
  • Intervention — humans can reject any escalated action
  • Logging — every HITL decision (approve, reject, timeout) is recorded in the cryptographic audit trail with a policy snapshot reference
  • Configurability — trigger conditions and escalation paths are declared in version-controlled YAML
The eu-ai-act policy template pre-configures HITL for the tool types and thresholds typically associated with high-risk operations:
drako init --template eu-ai-act

Testing HITL

Use MockHITLResolver to define per-tool approval rules in tests without blocking CI:
from drako import govern, MockHITLResolver

resolver = MockHITLResolver(
    rules={
        "delete_file": "reject",
        "send_email": "approve",
    },
    default="approve",
)

crew = govern(crew, hitl_resolver=resolver)
For full details on testing governed agents, see Testing.
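MockHITLResolver's rule lookup behaves like a dict lookup with a fallback. A standalone stand-in, useful when drako is not importable in a test environment (the class and its `resolve` method are sketches, not Drako's implementation):

```python
class FakeResolver:
    """Stand-in mirroring the rules/default semantics shown above:
    a per-tool decision map with a default for unlisted tools."""

    def __init__(self, rules: dict, default: str = "approve"):
        self.rules = rules
        self.default = default

    def resolve(self, tool_name: str) -> str:
        # Unlisted tools fall back to the default decision.
        return self.rules.get(tool_name, self.default)


resolver = FakeResolver(
    rules={"delete_file": "reject", "send_email": "approve"},
    default="approve",
)
```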
