Understand citations, internal next steps, and human review flags in generated responses
The RAG system produces structured outputs that go beyond simple text generation. Every response includes citations, actionable internal steps, and intelligent review flags to support human-in-the-loop workflows.
The output format is strictly enforced using Pydantic:
```python
from typing import List
from pydantic import BaseModel, Field

class InternalNextSteps(BaseModel):
    """
    Structured internal actions for support workflows.

    Used for:
    - operational follow-ups

    Guarantees:
    - Ordered list of concise, actionable steps
    """

    steps: List[str] = Field(
        ...,
        description="Actionable internal next steps, expressed as short bullet points",
        example=[
            "Verify the user's account status",
            "Check recent billing transactions",
        ],
    )
```
The LLM is constrained to return only valid JSON matching the InternalNextSteps schema.
```json
{
  "steps": [
    "Verify the user's account status",
    "Check recent billing transactions",
    "Confirm refund eligibility based on purchase date",
    "Escalate to finance team if amount exceeds $500"
  ]
}
```
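A minimal sketch of how such output can be validated, assuming Pydantic v2's `model_validate_json` (the `InternalNextSteps` class is redefined in abbreviated form here so the example is self-contained):

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

# Abbreviated redefinition for a self-contained example.
class InternalNextSteps(BaseModel):
    steps: List[str] = Field(..., description="Actionable internal next steps")

# Well-formed LLM output parses into a typed object.
raw = '{"steps": ["Verify the user\'s account status", "Check recent billing transactions"]}'
parsed = InternalNextSteps.model_validate_json(raw)

# A bare JSON array (no "steps" key) fails validation instead of passing silently.
try:
    InternalNextSteps.model_validate_json('["just", "a", "list"]')
    bare_array_rejected = False
except ValidationError:
    bare_array_rejected = True
```

Validation failures like the one above are typically what triggers a retry or a human review flag rather than shipping a malformed response downstream.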
Check whether answers are grounded in the retrieved context:
```python
from typing import Dict, List
from langchain_core.messages import HumanMessage

# _build_llm, create_agent, and faithfulness_prompt are project helpers
# defined elsewhere in the codebase.

def verify_faithfulness(
    answer: str,
    chunks: List[Dict],
) -> bool:
    """
    Verify that an answer is supported by retrieved document chunks.

    Returns:
        True if the answer is grounded in the retrieved context.
    """
    if not chunks:
        return False

    context_text = "\n\n".join(chunk["content"] for chunk in chunks)

    llm = _build_llm()
    agent = create_agent(
        model=llm,
        response_format=Verification,
        system_prompt=faithfulness_prompt(context_text, answer),
    )
    response = agent.invoke({"messages": [HumanMessage(content=answer)]})

    # The structured response holds the Literal "Yes"/"No"; convert it to a bool.
    return response["structured_response"].response == "Yes"
```
```python
from typing import Literal
from pydantic import BaseModel, Field

class Verification(BaseModel):
    """
    Binary verification result used for evaluation tasks.

    Used by:
    - faithfulness checks
    - adversarial robustness tests

    The output is intentionally minimal to reduce ambiguity.
    """

    response: Literal["Yes", "No"] = Field(
        ...,
        description="Binary verification result",
        example="Yes",
    )
```
Verification uses a binary Yes/No format to minimize ambiguity in LLM responses.
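A short sketch (Pydantic v2 assumed) of how the `Literal["Yes", "No"]` constraint enforces the binary format, rejecting anything outside the two allowed values:

```python
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class Verification(BaseModel):
    response: Literal["Yes", "No"] = Field(..., description="Binary verification result")

# A valid binary answer parses, then converts cleanly to a bool.
ok = Verification.model_validate({"response": "Yes"})
is_grounded = ok.response == "Yes"

# Hedged or off-schema answers like "Maybe" are rejected at validation time.
try:
    Verification.model_validate({"response": "Maybe"})
    rejected = False
except ValidationError:
    rejected = True
```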
Structured output for categorizing documents during ingestion:
```python
from typing import Literal
from pydantic import BaseModel, Field

class DocumentCategory(BaseModel):
    """
    Canonical category assigned to a support document or ticket.

    The value MUST match exactly one of the predefined categories.
    """

    category: Literal[
        "Account & Subscription",
        "Authentication & Access",
        "Billing & Payments",
        "Bugs & Errors",
        "Data Export & Reporting",
        "Feature Request",
        "Integrations & API",
        "Performance & Reliability",
        "Security & Compliance",
    ] = Field(
        ...,
        description="Single, canonical support category",
        example="Billing & Payments",
    )
```
Literal types ensure LLM outputs exactly match predefined categories.
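A sketch of that exact-match behavior, using an abbreviated subset of the categories (Pydantic v2 assumed). Note how a near-miss like `"Billing"` is rejected rather than coerced, and how the allowed values can be read back off the model for prompt construction:

```python
from typing import Literal, get_args
from pydantic import BaseModel, ValidationError

class DocumentCategory(BaseModel):
    # Abbreviated subset of the nine canonical categories.
    category: Literal[
        "Account & Subscription",
        "Billing & Payments",
        "Bugs & Errors",
    ]

# An exact canonical value validates.
doc = DocumentCategory.model_validate({"category": "Billing & Payments"})

# A near-miss is rejected rather than silently accepted.
try:
    DocumentCategory.model_validate({"category": "Billing"})
    near_miss_rejected = False
except ValidationError:
    near_miss_rejected = True

# The canonical values can be enumerated, e.g. to list them in the prompt.
allowed = get_args(DocumentCategory.model_fields["category"].annotation)
```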