## Overview
The Grok LLM service uses xAI’s Grok-3-fast model to provide intelligent task classification, clarifying-question generation, and price extraction from call transcripts. This service is central to Haggle’s ability to understand user requests and facilitate automated negotiations.

Source: `services/grok_llm.py`
## API Functions
### infer_task()

Classifies user queries into specific service task types.

**Parameters:**

- User’s free-text query describing their service need (e.g., “fix my toilet”, “my lawn is too long”)

**Returns:**

- Inferred service task type such as “plumber”, “electrician”, “house cleaner”, “painter”, “handyman”, “HVAC technician”, “locksmith”, “carpenter”, “landscaper”, “appliance repair”, “pest control”, “roofer”, “moving company”, or “auto mechanic”

**Behavior:**
- Sends query to Grok-3-fast with a specialized system prompt
- Uses streaming response to collect the classification
- Falls back to keyword-based classification if API is unavailable
- Returns normalized, lowercase task type
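The last two steps (collecting the streamed response and normalizing it) can be sketched as a small helper; `collect_task_type` and the chunk format are illustrative assumptions, not the actual service code.

```python
def collect_task_type(chunks):
    """Join streamed response chunks into one string, then normalize it
    to a lowercase task type with surrounding quotes/whitespace removed."""
    raw = "".join(chunks)
    return raw.strip().strip('"').strip("'").lower()

# Example: a streamed classification arriving in pieces
print(collect_task_type(["Plum", "ber", "\n"]))  # plumber
```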
### generate_clarifying_questions()

Generates 3-5 context-specific questions to better understand the job requirements.

**Parameters:**

- The inferred task type (e.g., “plumber”, “electrician”)
- Original user query
- User’s zip code (already collected, won’t be asked again)
- When the service is needed (already collected, won’t be asked again)
- Maximum price the user is willing to pay (already collected, won’t be asked again)

**Returns:**

- List of question objects, each containing:
  - `id`: Unique question identifier (e.g., “q1”, “q2”)
  - `question`: The clarifying question text

**Behavior:**
- Never asks about location, date, or budget (already provided)
- Generates task-specific questions that help providers give accurate estimates
- Automatically parses numbered responses and cleans formatting
- Falls back to predefined questions if API is unavailable
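The numbered-response parsing step above can be sketched as follows; `parse_questions` is an illustrative helper, not the service’s actual implementation.

```python
import re

def parse_questions(llm_text):
    """Split a numbered LLM response ("1. ...", "2) ...") into question
    objects with sequential ids, stripping the numbering and whitespace."""
    questions = []
    for line in llm_text.splitlines():
        cleaned = re.sub(r"^\s*\d+[.)]\s*", "", line).strip()
        if cleaned:
            questions.append({"id": f"q{len(questions) + 1}", "question": cleaned})
    return questions

# Example LLM output with numbered formatting
text = "1. What type of faucet is leaking?\n2) Is the water shut off?"
print(parse_questions(text))
```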
### format_problem_statement()

Converts first-person user queries into second-person problem descriptions.

**Parameters:**

- The user’s original query in first person (e.g., “my lawn is too long”)
- The inferred task type (e.g., “landscaper”, “plumber”)

**Returns:**

- Formatted problem statement in second person, suitable for provider communication (e.g., “your lawn needs to be mowed”)

**Behavior:**
- Converts first person to second person (“my” → “your”)
- Single sentence output only
- Removes quotes and trailing periods
- Natural, conversational language
- Uses action phrases like “needs to be fixed”, “is leaking”, etc.
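A minimal non-LLM version of the person conversion and cleanup rules above can be sketched like this; `to_second_person` is an assumption for illustration, not the service’s actual fallback.

```python
def to_second_person(query):
    """Minimal first-to-second-person rewrite: swap possessives/pronouns,
    strip surrounding quotes, and drop a trailing period."""
    swaps = {"my": "your", "i": "you", "me": "you", "mine": "yours"}
    words = [swaps.get(w.lower(), w) for w in query.strip().strip('"').split()]
    return " ".join(words).rstrip(".")

print(to_second_person('"My lawn is too long."'))  # your lawn is too long
```

A real implementation would also handle verb agreement (“I am” → “you are”), which is why the service delegates this rewrite to the LLM.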
### extract_negotiated_price()

Extracts the final agreed-upon price from a call transcript using LLM analysis.

**Parameters:**

- List of transcript entries, where each entry contains:
  - `role`: Either “user” or “assistant”
  - `text`: The spoken text for that turn

**Returns:**

- The negotiated price as a float, or
- `None` if no price was agreed upon

**Behavior:**

- Formats the transcript with `[USER]` and `[ASSISTANT]` prefixes
- Sends it to Grok with instructions to find the final agreed price
- Looks for the last price mentioned that both parties accepted
- Parses numeric values from various formats (“$125”, “one hundred twenty five dollars”, etc.)
- Returns `None` if no agreement was reached
- Falls back to regex-based extraction if the API is unavailable

The regex fallback recognizes these patterns:

- `$XXX` format: `$125`, `$125.50`
- “XXX dollars” format: `125 dollars`
- “agreed on XXX”
- “XXX for the job”
- “price is XXX”
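The regex fallback for the patterns listed above can be sketched as follows; the pattern set mirrors the documented formats, but `extract_price_fallback` and the exact expressions are illustrative assumptions, not the service’s actual code.

```python
import re

# One capture group per documented fallback format
PRICE_PATTERNS = [
    r"\$(\d+(?:\.\d{1,2})?)",                  # $125, $125.50
    r"(\d+(?:\.\d{1,2})?)\s*dollars",          # 125 dollars
    r"agreed\s+on\s+\$?(\d+(?:\.\d{1,2})?)",   # agreed on 125
    r"(\d+(?:\.\d{1,2})?)\s+for\s+the\s+job",  # 125 for the job
    r"price\s+is\s+\$?(\d+(?:\.\d{1,2})?)",    # price is 125
]

def extract_price_fallback(transcript):
    """Return the last price mentioned anywhere in the transcript, or None."""
    text = " ".join(entry["text"] for entry in transcript)
    hits = []
    for pattern in PRICE_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((match.start(1), float(match.group(1))))
    # Latest position in the call wins, matching "last price mentioned"
    return max(hits)[1] if hits else None

transcript = [
    {"role": "assistant", "text": "I can do it for $150."},
    {"role": "user", "text": "Deal at $125.50 then."},
]
print(extract_price_fallback(transcript))  # 125.5
```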
## Configuration

### Environment Variables

### Model Configuration

All functions use the `grok-3-fast` model with streaming responses.

## Fallback Behavior

All functions include fallback logic when the API is unavailable.

**Task Inference Fallback:** Keyword matching against common terms:

- “toilet”, “pipe”, “leak” → plumber
- “electric”, “outlet”, “wire” → electrician
- “clean”, “maid” → house cleaner
- “paint”, “wall” → painter
- Defaults to “handyman” if no match
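The keyword-matching fallback above can be sketched as a lookup table; the table mirrors the documented rules, but `FALLBACK_KEYWORDS` and `infer_task_fallback` are illustrative names, not the service’s actual code.

```python
# Keyword table mirroring the documented fallback rules
FALLBACK_KEYWORDS = {
    "plumber": ["toilet", "pipe", "leak"],
    "electrician": ["electric", "outlet", "wire"],
    "house cleaner": ["clean", "maid"],
    "painter": ["paint", "wall"],
}

def infer_task_fallback(query):
    """Keyword-based classification used when the Grok API is unavailable."""
    lowered = query.lower()
    for task, keywords in FALLBACK_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return task
    return "handyman"  # default when no keyword matches

print(infer_task_fallback("My toilet keeps running"))  # plumber
print(infer_task_fallback("Hang a ceiling fan"))       # handyman
```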
## Error Handling

All functions gracefully handle API errors:

- Missing or invalid API key → Uses fallback
- Network timeout → Uses fallback
- Rate limiting → Uses fallback
- Invalid response format → Uses fallback
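A uniform try-then-fallback policy like the one above can be sketched as a decorator; `with_fallback` is an illustrative pattern, not the service’s actual implementation.

```python
from functools import wraps

def with_fallback(fallback_fn):
    """Wrap a Grok-backed function so any API failure (missing key, timeout,
    rate limit, bad response) falls through to the local fallback."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                return fallback_fn(*args, **kwargs)
        return wrapper
    return decorator

@with_fallback(lambda query: "handyman")
def infer_task(query):
    raise TimeoutError("simulated network timeout")  # stand-in for an API call

print(infer_task("fix my sink"))  # handyman
```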
## Integration Example
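The end-to-end flow can be sketched like this; the stub bodies and parameter names are assumptions so the flow is runnable here, while the real functions live in `services/grok_llm.py`.

```python
# Stand-in stubs so the workflow runs without the Grok API
def infer_task(query):
    return "plumber"

def generate_clarifying_questions(task, query, zip_code, needed_when, max_price):
    return [{"id": "q1", "question": "What type of faucet is leaking?"}]

def format_problem_statement(query, task):
    return "your kitchen faucet is leaking"

# Workflow: classify the request, gather clarifying questions,
# then phrase the problem for provider communication.
query = "my kitchen faucet is leaking"
task = infer_task(query)
questions = generate_clarifying_questions(task, query, "94103", "this week", 200)
problem = format_problem_statement(query, task)

print(task)     # plumber
print(problem)  # your kitchen faucet is leaking
```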
A complete workflow chains multiple Grok LLM functions, typically `infer_task()` → `generate_clarifying_questions()` → `format_problem_statement()`.

## See Also
- Grok Search Service - Find service providers
- Voice Agent - Automated negotiation calls
- xAI SDK Documentation - Official Grok API docs