Enable agents to request additional context from users during task execution
Agents can request human input when they need additional context, clarification, or guidance to complete a task. This feature enables interactive workflows where the agent collaborates with users to gather the information needed for successful task completion.
Enable human input by setting `human_input=True` when defining your agent:
```python
import asyncio

from fast_agent import FastAgent

fast = FastAgent("Human Input Example")

@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)
async def main():
    async with fast.run() as agent:
        # The agent will prompt for human input when it needs clarification
        await agent("print the next number in the sequence")

if __name__ == "__main__":
    asyncio.run(main())
```
When an agent with `human_input=True` encounters a task requiring additional information, it can invoke the human input tool to pause execution and request user input.
1. **Agent identifies need for input**: The LLM recognizes it needs more information to complete the task.
2. **Request is sent to user**: Execution pauses and a prompt is displayed to the user.
3. **User provides information**: The user enters the requested information.
4. **Agent continues execution**: The agent receives the input and continues with the task.
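The four steps above can be sketched in plain Python. This is only an illustration of the pause/prompt/resume pattern, with a canned-answer channel standing in for a real user; it is not fast_agent's internal tool implementation:

```python
from collections import deque

class HumanInputChannel:
    """Stand-in for a human-input tool: queues canned answers so the
    pause/prompt/resume cycle can be exercised without a terminal."""

    def __init__(self, answers):
        self._answers = deque(answers)
        self.prompts = []  # record of what the agent asked

    def ask(self, prompt: str) -> str:
        self.prompts.append(prompt)      # step 2: execution pauses, prompt is shown
        return self._answers.popleft()   # step 3: the user provides information

def continue_sequence(channel: HumanInputChannel) -> int:
    # Step 1: the agent cannot predict a sequence without context, so it asks.
    raw = channel.ask("What is the sequence so far? (comma-separated)")
    numbers = [int(x) for x in raw.split(",")]
    step = numbers[1] - numbers[0]
    # Step 4: the agent resumes with the new context.
    return numbers[-1] + step

channel = HumanInputChannel(["2, 4, 6"])
print(continue_sequence(channel))  # arithmetic step of 2, so this prints 8
```

Swapping the canned channel for a real terminal prompt (or fast-agent's built-in tool) changes only the `ask` implementation; the agent logic is unchanged.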
```python
from fast_agent import FastAgent

fast = FastAgent("Human Input")

@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed - for example "
    "if being asked to predict a number sequence or pretending to take pizza orders.",
    human_input=True,
)
async def main():
    async with fast.run() as agent:
        # This usually causes the LLM to request the Human Input Tool
        await agent("print the next number in the sequence")
```
The agent will ask the user to provide the sequence since it cannot predict numbers without context.
```python
@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)
async def main():
    async with fast.run() as agent:
        await agent("pretend to be a pizza restaurant and take the user's order")
        await agent.interactive(default_prompt="STOP")
```
The agent will interactively collect order details by requesting human input for missing information.
```python
@fast.agent(
    name="data_collector",
    instruction="""Collect required information from the user.
    Request human input for any missing fields: name, email, and preferences.
    Validate each input before proceeding.""",
    human_input=True,
)
async def collect_data():
    async with fast.run() as agent:
        result = await agent("Collect user information for account creation")
        return result
```
```python
@fast.agent(
    name="research",
    instruction="Research the topic",
    servers=["fetch"],
)
@fast.agent(
    name="clarify",
    instruction="Ask user for any clarifications needed",
    human_input=True,
)
@fast.agent(
    name="summarize",
    instruction="Create final summary",
)
@fast.chain(
    name="research_workflow",
    sequence=["research", "clarify", "summarize"],
)
async def main():
    async with fast.run() as agent:
        await agent.research_workflow("Analyze climate change impacts")
```
Provide clear guidance in the agent instruction about when to request human input:
```python
instruction="""You are a helpful assistant.

Request human input when:
- Information is ambiguous or missing
- User confirmation is needed for important decisions
- External data not accessible through tools is required"""
```
Specific Prompts
Guide the LLM to ask specific questions rather than vague requests:
```python
instruction="""When requesting human input, be specific:
- GOOD: 'What is your preferred delivery date? (YYYY-MM-DD)'
- BAD: 'I need more information'"""
```
Error Handling
Consider what happens if the user cancels or provides invalid input:
```python
instruction="""If human input is cancelled or invalid:
1. Acknowledge the issue politely
2. Explain what information is needed and why
3. Provide options to continue or abort the task"""
```
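The same retry-then-abort behavior can be expressed directly in code. The helper below is a hypothetical wrapper, not part of the fast-agent API; it re-prompts on invalid answers and returns `None` so the caller can offer options to continue or abort:

```python
def ask_validated(ask, prompt, validate, retries=2):
    """Re-prompt on invalid input, giving up after `retries` extra attempts.
    `ask` is any callable that returns the user's answer for a prompt."""
    for _ in range(retries + 1):
        answer = ask(prompt)
        if answer is not None and validate(answer):
            return answer
        # Acknowledge the issue and explain what is needed before retrying.
        prompt = "That doesn't look right. " + prompt
    return None  # the caller decides whether to continue or abort

# Simulated user: the first answer is invalid, the second is valid.
answers = iter(["next week", "2025-07-01"])
result = ask_validated(
    ask=lambda p: next(answers),
    prompt="What is your preferred delivery date? (YYYY-MM-DD)",
    validate=lambda s: len(s) == 10 and s[4] == "-" and s[7] == "-",
)
print(result)  # prints 2025-07-01
```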
Non-Blocking Design
For long-running workflows, request all needed information upfront:
```python
instruction="""Before starting the task:
1. Analyze requirements
2. Request all needed information in a single prompt
3. Proceed with full context"""
```
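One way to apply this pattern is to build a single combined prompt and parse one structured reply, so the workflow pauses once rather than once per field. The field names and `field: value` reply format below are hypothetical, chosen only to illustrate the idea:

```python
REQUIRED_FIELDS = ["name", "email", "preferences"]  # hypothetical fields

def upfront_prompt(fields):
    """Ask for everything at once instead of pausing repeatedly."""
    return "Before I start, please provide:\n" + "\n".join(f"- {f}" for f in fields)

def parse_reply(reply):
    """Expect 'field: value' lines in the user's single reply."""
    pairs = (line.partition(":") for line in reply.strip().splitlines())
    return {key.strip(): value.strip() for key, _, value in pairs}

reply = "name: Ada\nemail: ada@example.com\npreferences: weekly digest"
data = parse_reply(reply)
missing = [f for f in REQUIRED_FIELDS if f not in data]
print(missing)  # prints [] because every field was supplied
```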
Combine human input with interactive mode for conversational agents:
```python
@fast.agent(
    instruction="You are a helpful assistant. Request input when needed.",
    human_input=True,
)
async def main():
    async with fast.run() as agent:
        # Start interactive session
        await agent.interactive()
```
In interactive mode:
- Users can chat naturally with the agent
- The agent can proactively request specific information
```python
@fast.agent(
    instruction="""You are an autonomous agent. Only request human input for:
    - Decisions involving costs over $1000
    - Changes to production systems
    - Ambiguous user instructions
    For all other tasks, proceed autonomously.""",
    human_input=True,
)
```