
Human-in-the-Loop Agent

Human-in-the-Loop (HITL) is a critical pattern in AI agent development that enables agents to pause execution and request human input when needed. This creates safer, more interactive, and collaborative AI systems that can work alongside humans rather than operating in complete autonomy.

Why Human-in-the-Loop Matters

Safety

Prevent agents from taking risky actions without approval

Collaboration

Enable human-AI teamwork on complex tasks

Transparency

Give users visibility and control over agent actions

Trust

Build confidence through human oversight

Use Cases

Dangerous Operations
  • File system operations: Formatting drives, deleting important files
  • Database modifications: Dropping tables, updating critical data
  • System administration: Restarting services, changing configurations
  • Financial transactions: Large purchases, money transfers

Review and Quality Control
  • Content approval: Reviewing generated content before publishing
  • Strategy decisions: Getting human input on business logic
  • Quality control: Human verification of AI-generated outputs
  • Creative work: Iterating on designs, copy, or code with human feedback

Information Gathering
  • Ambiguous requests: When user intent is unclear
  • Missing information: Requesting additional details needed for task completion
  • Edge cases: Handling situations not covered in training data
  • Preference gathering: Learning user preferences for personalization

Task Handoffs
  • Workflow transitions: Passing control between different systems
  • Status updates: Informing users of completed tasks
  • Next steps: Guiding users on what to do after agent completion
  • Manual intervention needed: When the agent can’t proceed automatically

Key Concepts

The handoff_to_user Tool

This special tool lets an agent pause execution and request human input. Adding it to an agent’s toolset enables interactive, collaborative workflows.
from strands_tools import handoff_to_user

agent = Agent(
    tools=[handoff_to_user],
    model=model,
    system_prompt="You are a helpful assistant that can ask for user approval.",
)

Execution Control Parameters

The handoff_to_user tool accepts two critical parameters:
Parameter          Type      Description
message            string    The question or prompt presented to the user
breakout_of_loop   boolean   Whether the agent stops (True) or continues (False) after the user responds

Execution Flow Control

breakout_of_loop=False
  • Agent pauses and waits for user input
  • After receiving input, the agent continues its execution
  • Perfect for: approval requests, clarifications, additional information
Use case: “Should I proceed with this risky operation?”
response = agent.tool.handoff_to_user(
    message="I'm about to format the hard drive. Approve?",
    breakout_of_loop=False,  # Agent continues after approval
)

breakout_of_loop=True
  • Agent delivers its message and stops its event loop
  • Control returns fully to the user
  • Perfect for: task completion, final handoffs, situations needing manual intervention
Use case: “The task is done; hand control back to the user.”
response = agent.tool.handoff_to_user(
    message="The task has been completed successfully. I will now stop.",
    breakout_of_loop=True,  # Agent stops after handoff
)

Response Structure

The handoff_to_user function returns a structured dictionary:
{
    "content": [{"text": "Agent's message to user"}],
    "userInput": [{"text": "User's response"}],
    "status": "SUCCESS",
    "toolUseId": "unique_reference_id"
}
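To act on the user’s reply, you typically read the first entry of the userInput list. A minimal sketch of such a helper, using an illustrative response shaped like the structure above (the field names match the dictionary shown; the sample values are made up):

```python
def extract_user_input(response: dict, default: str = "") -> str:
    """Pull the user's reply text out of a handoff_to_user response dict."""
    entries = response.get("userInput") or [{}]
    return entries[0].get("text", default).strip()

# Illustrative response shaped like the structure above
sample = {
    "content": [{"text": "Approve?"}],
    "userInput": [{"text": "  yes  "}],
    "status": "SUCCESS",
    "toolUseId": "tool_abc123",
}
print(extract_user_input(sample))  # -> yes
```

Guarding with `or [{}]` and a default keeps the helper safe when the userInput list is missing or empty.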

Implementation

Step 1: Import Dependencies

import os
from dotenv import load_dotenv
from strands import Agent
from strands.models.litellm import LiteLLMModel
from strands_tools import handoff_to_user

# Load environment variables from a .env file
load_dotenv()

Step 2: Create Interactive Agent

def create_interactive_agent() -> Agent:
    """
    Creates an agent equipped with the handoff_to_user tool.
    
    Returns:
        An Agent instance capable of interacting with a human user.
    """
    # Configure the language model
    model = LiteLLMModel(
        client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
        model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
    )
    
    # Create the agent and provide the handoff_to_user tool
    interactive_agent = Agent(
        tools=[handoff_to_user],
        model=model,
        system_prompt="You are a helpful assistant that can ask for user approval.",
    )
    return interactive_agent

Step 3: Use Case 1 - Approval Request

def main():
    agent = create_interactive_agent()
    
    print("--- Demonstrating Human-in-the-Loop ---")
    
    # Case 1: Requesting approval to continue
    print("Use Case 1: Agent asks for approval and continues.")
    approval_response = agent.tool.handoff_to_user(
        message="I have a plan to format the hard drive. Is it okay to proceed? Please type 'yes' to approve or 'no' to cancel.",
        breakout_of_loop=False,  # Agent continues after user responds
    )
    print(format_handoff_summary(approval_response, "Approval Handoff"))
Safety First: Always use breakout_of_loop=False for approval requests on dangerous operations. The agent should only proceed if the user approves.

Step 4: Use Case 2 - Task Completion

    # Case 2: Completing a task and stopping
    print("\nUse Case 2: Agent completes its task and stops.")
    completion_response = agent.tool.handoff_to_user(
        message="The task has been completed successfully. I will now stop.",
        breakout_of_loop=True,  # Agent stops after handoff
    )
    print(format_handoff_summary(completion_response, "Completion Handoff"))

if __name__ == "__main__":
    main()

Helper Function: Format Response

def format_handoff_summary(response: dict | None, title: str) -> str:
    """Formats the response from a handoff_to_user call for display."""
    if not response:
        return f"--- {title}: No response ---"
    
    # Extract the text content from the agent's message
    agent_message = "No message from agent."
    if "content" in response and response["content"]:
        agent_message = response["content"][0].get("text", agent_message).strip()
    
    summary_lines = [
        f"--- {title} ---",
        f'Agent Message: "{agent_message}"',
        f"Status       : {response.get('status', 'unknown').upper()}",
        f"Reference ID : {response.get('toolUseId', 'N/A')}",
    ]
    return "\n".join(summary_lines)

Running the Example

1. Set up environment

   Create a .env file:

   NEBIUS_API_KEY=your_api_key_here

2. Install dependencies

   pip install strands-agents strands-agents-tools python-dotenv

3. Run the script

   python main.py

4. Interact with the agent

   When prompted, type your response (e.g., “yes” or “no”) and press Enter.

Expected Output

--- Demonstrating Human-in-the-Loop ---
Use Case 1: Agent asks for approval and continues.

--- Approval Handoff ---
Agent Message: "I have a plan to format the hard drive. Is it okay to proceed? 
Please type 'yes' to approve or 'no' to cancel."
Status       : SUCCESS
Reference ID : tool_abc123

Use Case 2: Agent completes its task and stops.

--- Completion Handoff ---
Agent Message: "The task has been completed successfully. I will now stop."
Status       : SUCCESS
Reference ID : tool_def456

Advanced Patterns

Conditional Logic Based on User Response

response = agent.tool.handoff_to_user(
    message="Should I delete all temporary files? (yes/no)",
    breakout_of_loop=False,
)

user_input = response.get("userInput", [{}])[0].get("text", "").lower()

if user_input == "yes":
    print("Proceeding with deletion...")
    # Perform deletion
else:
    print("Operation cancelled by user.")

Multi-Step Approval Workflow

# Step 1: Get user preference
response1 = agent.tool.handoff_to_user(
    message="Choose format: JSON or CSV?",
    breakout_of_loop=False,
)
format_choice = response1.get("userInput", [{}])[0].get("text", "JSON")

# Step 2: Confirm action
response2 = agent.tool.handoff_to_user(
    message=f"Export data as {format_choice}? (confirm/cancel)",
    breakout_of_loop=False,
)
confirmation = response2.get("userInput", [{}])[0].get("text", "").lower()

if confirmation == "confirm":
    # Proceed with export
    pass

Timeout Handling

import signal

def timeout_handler(signum, frame):
    raise TimeoutError("User response timeout")

# Set a 30-second timeout
signal.signal(signal.SIGALRM, timeout_handler)
signal.alarm(30)

try:
    response = agent.tool.handoff_to_user(
        message="Approve this action?",
        breakout_of_loop=False,
    )
    signal.alarm(0)  # Cancel the alarm
except TimeoutError:
    print("No response received. Cancelling operation.")

Try It Yourself

Create a multi-step approval process:
# Step 1: Initial plan
plan_response = agent.tool.handoff_to_user(
    message="Plan: Delete logs, restart server, run tests. Approve?",
    breakout_of_loop=False,
)

# Step 2: Confirm each step
if "yes" in plan_response.get("userInput", [{}])[0].get("text", "").lower():
    # Execute each step with individual confirmations
    pass
Use handoff to gather missing information:
response = agent.tool.handoff_to_user(
    message="What's your preferred notification method? (email/sms/push)",
    breakout_of_loop=False,
)

preference = response.get("userInput", [{}])[0].get("text", "email")
# Use preference in subsequent operations
Request human help when automated recovery fails:
try:
    # Attempt automated operation
    risky_operation()
except Exception as e:
    response = agent.tool.handoff_to_user(
        message=f"Error: {e}. Manual intervention needed. How should I proceed?",
        breakout_of_loop=False,
    )
    # Handle based on user guidance

Best Practices

Clear Messages

Write clear, specific prompts that explain what approval is needed

Provide Options

Give users clear choices (yes/no, option A/B/C)

Set Expectations

Tell users what will happen after their response

Handle Timeouts

Implement timeouts to prevent infinite waits

Log Interactions

Record handoffs for audit trails

Validate Input

Validate user responses before proceeding
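The last practice, input validation, deserves its own helper: free-form replies like “YES”, “y”, or “approve” should normalize to a single decision before the agent acts. A minimal sketch (the accepted synonym sets are illustrative, not part of any library API):

```python
def parse_yes_no(raw: str):
    """Normalize a free-form reply: True for yes, False for no, None if unclear."""
    text = raw.strip().lower()
    if text in {"yes", "y", "approve", "confirm"}:
        return True
    if text in {"no", "n", "cancel", "deny"}:
        return False
    return None  # ambiguous: re-prompt the user instead of guessing

print(parse_yes_no("  YES "))   # -> True
print(parse_yes_no("cancel"))   # -> False
print(parse_yes_no("maybe"))    # -> None
```

Returning None for ambiguous input lets the caller re-issue the handoff rather than treating an unclear reply as approval or denial.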

What You Learned

  • How to use the handoff_to_user tool for human-in-the-loop patterns
  • The difference between approval mode and completion mode
  • How to handle user responses and implement conditional logic
  • Best practices for creating safe, interactive agent workflows
  • Real-world use cases for human oversight in AI systems

Next Steps

You’ve learned how to add human oversight to individual agents! But what about coordinating multiple agents to work together on complex tasks? In the next lesson, you’ll explore advanced multi-agent patterns.

Lesson 06: Multi-Agent Patterns

Learn how to orchestrate multiple agents for complex problem-solving

Resources

Video Tutorial

Watch Lesson 05 on YouTube

Strands Documentation

Read the official docs
