Human-in-the-Loop (HITL) is a critical pattern in AI agent development that lets agents pause execution and request human input when needed. This creates safer, more interactive, and collaborative AI systems that work alongside humans rather than operating fully autonomously.
The handoff_to_user tool enables agents to pause execution and request human input. When added to an agent's toolset, it makes interactive, collaborative workflows possible.
from strands import Agent
from strands_tools import handoff_to_user

agent = Agent(
    tools=[handoff_to_user],
    model=model,
    system_prompt="You are a helpful assistant that can ask for user approval.",
)
breakout_of_loop=False
Agent pauses and waits for user input
After receiving input, agent continues its execution
Perfect for: approval requests, clarifications, additional information
Use case: "Should I proceed with this risky operation?"
response = agent.tool.handoff_to_user(
    message="I'm about to format the hard drive. Approve?",
    breakout_of_loop=False,  # Agent continues after approval
)
breakout_of_loop=True
Agent pauses and waits for user input
After receiving input, agent stops its execution
Perfect for: task completion, handoffs, status updates
Use case: “Task completed. Here’s the result.”
response = agent.tool.handoff_to_user(
    message="Task completed. Review the results.",
    breakout_of_loop=True,  # Agent stops after handoff
)
def create_interactive_agent() -> Agent:
    """
    Creates an agent equipped with the handoff_to_user tool.

    Returns:
        An Agent instance capable of interacting with a human user.
    """
    # Configure the language model
    model = LiteLLMModel(
        client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
        model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
    )

    # Create the agent and provide the handoff_to_user tool
    interactive_agent = Agent(
        tools=[handoff_to_user],
        model=model,
        system_prompt="You are a helpful assistant that can ask for user approval.",
    )
    return interactive_agent
def main():
    agent = create_interactive_agent()
    print("--- Demonstrating Human-in-the-Loop ---")

    # Case 1: Requesting approval to continue
    print("Use Case 1: Agent asks for approval and continues.")
    approval_response = agent.tool.handoff_to_user(
        message=(
            "I have a plan to format the hard drive. Is it okay to proceed? "
            "Please type 'yes' to approve or 'no' to cancel."
        ),
        breakout_of_loop=False,  # Agent continues after user responds
    )
    print(format_handoff_summary(approval_response, "Approval Handoff"))
Safety First: Always use breakout_of_loop=False for approval requests on dangerous operations. The agent should only proceed if the user approves.
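That rule can be enforced by gating the dangerous action on the user's reply. Below is a minimal sketch of such a check; it assumes the handoff_to_user response exposes the user's text under userInput (the same field the later experiments read), and format_drive is a hypothetical dangerous operation, not part of any library:

```python
def user_approved(response: dict) -> bool:
    """Return True only when the user's reply is an explicit 'yes'.

    Assumes the handoff_to_user response stores the user's text at
    response["userInput"][0]["text"], as read elsewhere in this lesson.
    """
    blocks = response.get("userInput") or [{}]
    reply = blocks[0].get("text", "")
    return reply.strip().lower().startswith("yes")


# e.g. the dict returned by agent.tool.handoff_to_user(...)
response = {"userInput": [{"text": "no, cancel that"}]}

if user_approved(response):
    format_drive()  # hypothetical dangerous operation; runs only on explicit approval
else:
    print("User declined; skipping the operation.")
```

Defaulting to "do nothing" on any answer other than an explicit "yes" keeps ambiguous or empty replies from triggering the operation.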
    # Case 2: Completing a task and stopping
    print("\nUse Case 2: Agent completes its task and stops.")
    completion_response = agent.tool.handoff_to_user(
        message="The task has been completed successfully. I will now stop.",
        breakout_of_loop=True,  # Agent stops after handoff
    )
    print(format_handoff_summary(completion_response, "Completion Handoff"))


if __name__ == "__main__":
    main()
def format_handoff_summary(response: dict | None, title: str) -> str:
    """Formats the response from a handoff_to_user call for display."""
    if not response:
        return f"--- {title}: No response ---"

    # Extract the text content from the agent's message
    agent_message = "No message from agent."
    if "content" in response and response["content"]:
        agent_message = response["content"][0].get("text", agent_message).strip()

    summary_lines = [
        f"--- {title} ---",
        f'Agent Message: "{agent_message}"',
        f"Status       : {response.get('status', 'unknown').upper()}",
        f"Reference ID : {response.get('toolUseId', 'N/A')}",
    ]
    return "\n".join(summary_lines)
--- Demonstrating Human-in-the-Loop ---
Use Case 1: Agent asks for approval and continues.
--- Approval Handoff ---
Agent Message: "I have a plan to format the hard drive. Is it okay to proceed? Please type 'yes' to approve or 'no' to cancel."
Status       : SUCCESS
Reference ID : tool_abc123

Use Case 2: Agent completes its task and stops.
--- Completion Handoff ---
Agent Message: "The task has been completed successfully. I will now stop."
Status       : SUCCESS
Reference ID : tool_def456
# Step 1: Initial plan
plan_response = agent.tool.handoff_to_user(
    message="Plan: Delete logs, restart server, run tests. Approve?",
    breakout_of_loop=False,
)

# Step 2: Confirm each step
if "yes" in plan_response.get("userInput", [{}])[0].get("text", "").lower():
    # Execute each step with individual confirmations
    pass
Experiment 2: Information Gathering
Use handoff to gather missing information:
response = agent.tool.handoff_to_user(
    message="What's your preferred notification method? (email/sms/push)",
    breakout_of_loop=False,
)
preference = response.get("userInput", [{}])[0].get("text", "email")
# Use preference in subsequent operations
Experiment 3: Error Recovery
Request human help when automated recovery fails:
try:
    # Attempt automated operation
    risky_operation()
except Exception as e:
    response = agent.tool.handoff_to_user(
        message=f"Error: {e}. Manual intervention needed. How should I proceed?",
        breakout_of_loop=False,
    )
    # Handle based on user guidance
You’ve learned how to add human oversight to individual agents. But what about coordinating multiple agents on complex tasks? In the next lesson, you’ll explore advanced multi-agent patterns.
Lesson 06: Multi-Agent Patterns
Learn how to orchestrate multiple agents for complex problem-solving