
Overview

Haggle uses xAI’s Grok LLM to automatically understand user requests and infer the type of service professional needed. This eliminates the need for users to navigate complex category menus or know exactly what type of service provider they need.
The AI task inference system is powered by Grok 3 Fast for real-time classification with minimal latency.

How It Works

1. User Submits Free-Text Query

Users describe their problem in natural language:
  • “fix my toilet”
  • “my lawn is too long”
  • “I need my house painted”
2. Grok LLM Analyzes the Request

The system sends the query to Grok 3 Fast with a specialized classification prompt
3. Service Type Returned

Grok returns a specific service type like “plumber”, “landscaper”, or “painter”
4. Clarifying Questions Generated

Based on the inferred task, Grok generates 3-5 contextual questions to gather job details
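The implementation below references several models (StartJobRequest, StartJobResponse, ClarifyingQuestion) that are not defined on this page. A minimal sketch of the request/response shapes, with field names inferred from the handler code (treat them as assumptions), might look like this; the real models are presumably Pydantic BaseModels given FastAPI's `response_model` usage, but stdlib dataclasses keep the sketch dependency-free:

```python
from dataclasses import dataclass
from typing import List, Union

# Field names are inferred from the handler code and may not match the
# real Haggle models exactly.

@dataclass
class StartJobRequest:
    query: str                      # free-text problem description
    house_address: str
    zip_code: str
    date_needed: str
    price_limit: Union[float, str]

@dataclass
class ClarifyingQuestion:
    id: str
    question: str

@dataclass
class StartJobResponse:
    job_id: str
    task: str                       # inferred service type, e.g. "plumber"
    questions: List[ClarifyingQuestion]
```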

Implementation

Task Inference Endpoint

The /api/start-job endpoint handles the initial request and task classification:
@app.post("/api/start-job", response_model=StartJobResponse)
async def start_job(request: StartJobRequest):
    """
    Start a new job request.
    
    Flow:
    1. Call Grok LLM to infer the task type
    2. Generate up to 5 clarifying questions
    3. Create Job object in memory
    4. Return job_id, task, and questions
    """
    try:
        # Step 1: Infer task from query using Grok LLM
        task = await infer_task(request.query)
        
        # Step 2: Generate clarifying questions
        questions_data = await generate_clarifying_questions(
            task=task,
            query=request.query,
            zip_code=request.zip_code,
            date_needed=request.date_needed,
            price_limit=request.price_limit
        )
        
        # Convert to ClarifyingQuestion objects
        questions = [
            ClarifyingQuestion(id=q["id"], question=q["question"])
            for q in questions_data
        ]
        
        # Step 3: Create Job object (in-memory, not DB)
        job_id = str(uuid.uuid4())
        job = Job(
            id=job_id,
            original_query=request.query,
            task=task,
            house_address=request.house_address,
            zip_code=request.zip_code,
            date_needed=request.date_needed,
            price_limit=request.price_limit,
            questions=questions,
            status=JobStatus.COLLECTING_INFO
        )
        
        # Store job in memory
        jobs_store[job_id] = job
        
        return StartJobResponse(
            job_id=job_id,
            task=task,
            questions=questions
        )
        
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error starting job: {str(e)}")
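The `jobs_store` used by the handler is presumably just a module-level dict keyed by `job_id`; this shape is assumed, since the store's definition is not shown on this page. Because nothing is persisted to a database, jobs are lost on restart:

```python
import uuid
from typing import Dict

# Module-level in-memory job store, keyed by job_id (assumed shape;
# the real store maps job_id to Job objects and is not persisted).
jobs_store: Dict[str, dict] = {}

job_id = str(uuid.uuid4())
jobs_store[job_id] = {"task": "plumber", "status": "collecting_info"}
```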

Grok LLM Service

The core inference logic uses the xAI SDK with streaming responses:
async def infer_task(query: str) -> str:
    """
    Use Grok LLM to infer the service task from a user query.
    
    Args:
        query: User's free text query (e.g., "fix my toilet")
        
    Returns:
        Inferred task type (e.g., "plumber", "electrician", "cleaner")
    """
    # Use fallback if no API key is configured
    if not XAI_API_KEY:
        print("⚠️  No XAI_API_KEY set - using fallback task inference")
        return _fallback_infer_task(query)
    
    system_prompt = """You are a service task classifier. Given a user's request, 
identify the type of service professional needed. 

Respond with ONLY a single word or short phrase for the service type, such as:
- plumber
- electrician  
- house cleaner
- painter
- handyman
- HVAC technician
- locksmith
- carpenter
- landscaper
- appliance repair
- pest control
- roofer
- moving company
- auto mechanic

Be specific but concise. Just the service type, nothing else."""

    try:
        # Initialize xAI Client
        client = Client(api_key=XAI_API_KEY)
        
        # Create Chat
        chat = client.chat.create(model="grok-3-fast")
        
        # Add messages
        chat.append(system(system_prompt))
        chat.append(user(f"What type of service professional is needed for: {query}"))
        
        # Get response
        full_response = ""
        for response, chunk in chat.stream():
            if chunk.content:
                full_response += chunk.content
        
        task = full_response.strip().lower()
        return task
        
    except Exception as e:
        print(f"Grok API exception: {e}")
        return _fallback_infer_task(query)

Clarifying Questions

After inferring the task type, Haggle generates contextual clarifying questions:
grok_llm.py:109-211
async def generate_clarifying_questions(
    task: str,
    query: str,
    zip_code: str,
    date_needed: str,
    price_limit: Union[float, str]
) -> List[Dict[str, str]]:
    """
    Generate up to 5 clarifying questions to better understand the job.
    
    Rules:
    - Max 5 questions
    - Do NOT ask for zip code, date needed, or price again
    - Keep questions minimal and necessary
    - No duplicate questions
    """
    system_prompt = """You are a service request specialist helping to understand job requirements.

Generate 3-5 clarifying questions to better understand the specific job needs.

IMPORTANT RULES:
1. Do NOT ask about location, zip code, or address - already provided
2. Do NOT ask about timing, date, or schedule - already provided  
3. Do NOT ask about budget or price - already provided
4. Keep questions specific to the actual work needed
5. Questions should help a service provider give an accurate estimate
6. Be concise - one clear question per line
7. Maximum 5 questions

Respond with ONLY the questions, one per line, numbered 1-5."""

    user_prompt = f"""Service type: {task}
User's request: "{query}"

Generate clarifying questions to understand this job better."""

    try:
        client = Client(api_key=XAI_API_KEY)
        chat = client.chat.create(model="grok-3-fast")
        
        chat.append(system(system_prompt))
        chat.append(user(user_prompt))
        
        # Get response and parse questions
        full_response = ""
        for response, chunk in chat.stream():
            if chunk.content:
                full_response += chunk.content
        
        # Parse numbered questions from the streamed response
        questions = []
        for line in full_response.split("\n"):
            line = line.strip()
            if not line:
                continue
            # Remove numbering and extract question
            # ... parsing logic
            questions.append({
                "id": f"q{len(questions) + 1}",
                "question": line
            })
        
        return questions if questions else _fallback_questions(task)
        
    except Exception as e:
        return _fallback_questions(task)
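The parsing logic elided above can be filled in with a small helper that strips the leading numbering and keeps only question-like lines. This is a sketch of one plausible approach, not the actual Haggle parser:

```python
import re

def parse_numbered_questions(text: str, max_questions: int = 5) -> list:
    """Extract questions from an LLM response formatted as a numbered list.

    Strips leading numbering like "1." or "2)" and skips blank or
    non-question lines. (Sketch only; not the actual Haggle parser.)
    """
    questions = []
    for line in text.split("\n"):
        line = line.strip()
        if not line:
            continue
        cleaned = re.sub(r"^\d+[.)]\s*", "", line)
        if cleaned.endswith("?"):
            questions.append({
                "id": f"q{len(questions) + 1}",
                "question": cleaned,
            })
        if len(questions) >= max_questions:
            break
    return questions
```

With a helper like this, the parsing block reduces to `questions = parse_numbered_questions(full_response)`.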

Problem Statement Formatting

Haggle converts the user’s query into a professional problem statement for providers:
grok_llm.py:259-336
async def format_problem_statement(original_query: str, task: str) -> str:
    """
    Format a problem statement from the original query and task using Grok LLM.
    
    Returns a single line problem description in second person.
    
    Examples:
    - "my lawn is too long" -> "your lawn needs to be mowed"
    - "fix my toilet" -> "your toilet needs to be fixed"
    - "my faucet is leaking" -> "your faucet is leaking"
    """
    system_prompt = """You are a problem statement formatter. Convert the user's query into a clear, concise problem description in second person.

The output should be a single line describing the problem naturally.

Rules:
1. Convert first person to second person (e.g., "my lawn" -> "your lawn")
2. Make it clear and concise - one sentence only
3. Use natural language
4. Keep it short and direct
5. Use phrases like "needs to be fixed", "needs to be mowed", "is leaking", etc.

Respond with ONLY the problem description, nothing else. One line only."""

    client = Client(api_key=XAI_API_KEY)
    chat = client.chat.create(model="grok-3-fast")
    
    chat.append(system(system_prompt))
    chat.append(user(f"User query: {original_query}"))
    
    # Stream and return formatted statement
    full_response = ""
    for response, chunk in chat.stream():
        if chunk.content:
            full_response += chunk.content
    
    return full_response.strip()

Input: “my lawn is too long”
Output: “your lawn needs to be mowed”

Error Handling & Fallbacks

The system includes robust fallback mechanisms for when the Grok API is unavailable:
If XAI_API_KEY is not configured or the API call fails, Haggle automatically falls back to keyword-based pattern matching for task inference.
Fallback Pattern Matching
# Keyword-based task classification, used when the Grok API is unavailable
def _fallback_infer_task(query: str) -> str:
    query_lower = query.lower()
    if any(word in query_lower for word in ["toilet", "pipe", "leak", "faucet"]):
        return "plumber"
    elif any(word in query_lower for word in ["electric", "outlet", "wire"]):
        return "electrician"
    # ... more patterns
    else:
        return "handyman"  # Default fallback

Integration Example

1. Frontend Submits Request

const response = await fetch('/api/start-job', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: "fix my toilet",
    house_address: "123 Main St, San Jose, CA 95126",
    zip_code: "95126",
    price_limit: 250,
    date_needed: "2025-12-10"
  })
});

const data = await response.json();
console.log(data.task);  // "plumber"
console.log(data.questions);  // Array of clarifying questions (3-5)
2. Display Clarifying Questions

Show the generated questions to the user for additional context
3. Collect Answers

User answers the questions, providing detailed job information
4. Proceed to Provider Search

Submit answers via /api/complete-job to find matching providers

Benefits

Natural Language

Users describe problems in their own words without learning industry terminology

Accurate Classification

Grok LLM understands context and nuance better than keyword matching

Context-Aware

Questions are tailored to the specific service type and situation

Reliable Fallbacks

Pattern matching ensures the system works even without API access

Provider Search

Learn how Haggle finds service providers after task inference

Automated Negotiation

See how the voice agent negotiates with providers
