Overview
The `sendFeedback()` function lets you provide feedback on LLM completions, enabling ZeroEval to optimize your prompts. Positive feedback indicates the output was good; negative feedback indicates it needs improvement.
Function Signature
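The exact signature is not reproduced here; the sketch below is an assumption consistent with the parameters discussed on this page (option names such as `completionId`, `reason`, `expectedOutput`, `expectedScore`, and `metadata` are inferred, not confirmed):

```typescript
// Hypothetical sketch of the signature; all field names are
// assumptions inferred from the parameters described below.
interface SendFeedbackOptions {
  completionId: string;               // span/completion the feedback applies to
  promptSlug?: string;                // prompt the completion came from
  positive: boolean;                  // true = good output, false = needs work
  reason?: string;                    // free-text explanation of the rating
  expectedOutput?: string;            // what the model should have produced
  expectedScore?: number;             // for judge-style feedback
  metadata?: Record<string, unknown>; // custom analytics fields
}

// Response shape is described under Return Value below.
interface PromptFeedbackResponse {
  id: string;
}

declare function sendFeedback(
  options: SendFeedbackOptions
): Promise<PromptFeedbackResponse>;
```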
Parameters
Configuration object for the feedback request
Return Value
Returns a `Promise<PromptFeedbackResponse>` containing the created feedback record:
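The concrete shape of the resolved record is not shown on this page; the following is an illustrative assumption only:

```typescript
// Hypothetical shape of PromptFeedbackResponse; the real fields
// may differ from this sketch.
interface PromptFeedbackResponse {
  id: string;           // ID of the created feedback record
  completionId: string; // completion the feedback applies to
  positive: boolean;    // polarity of the feedback
  reason?: string;      // optional explanation, if one was sent
  createdAt: string;    // creation timestamp (e.g. ISO 8601)
}
```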
Usage Examples
Basic Positive Feedback
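A minimal sketch, assuming a `zeroeval` import path and the option names used throughout this page:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

// In practice the ID comes from the completion's span (see below).
const spanId = "span-123"; // placeholder

const feedback = await sendFeedback({
  completionId: spanId,
  positive: true,
  reason: "Answer was accurate and well formatted",
});
```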
Negative Feedback with Expected Output
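For a poor completion, pairing the rating with `expectedOutput` shows the optimizer what a good answer looks like. A sketch under the same assumed API:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: false,
  reason: "Response ignored the requested JSON format",
  expectedOutput: '{"status": "ok", "items": []}',
});
```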
Feedback with Custom Metadata
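Arbitrary key-value pairs can ride along in `metadata` for later analysis; the keys below are illustrative only:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: true,
  metadata: {
    userId: "user-456",     // who rated the output
    surface: "chat-widget", // where the completion was shown
    latencyMs: 840,         // custom metric
  },
});
```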
Feedback in a Complete Workflow
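End to end, feedback slots in after the completion has been produced and evaluated. `prompt()` and `getCurrentSpan()` are referenced elsewhere on this page; `callLlm` and `isAcceptable` are placeholders for your own code, and the option shapes are assumptions:

```typescript
import { prompt, sendFeedback, getCurrentSpan } from "zeroeval"; // assumed import path

declare function callLlm(p: unknown): Promise<string>;  // your provider call
declare function isAcceptable(output: string): boolean; // your quality check

// 1. Fetch the managed prompt
const p = await prompt({ slug: "support-answer" }); // assumed option name

// 2. Call your LLM with the rendered prompt
const completion = await callLlm(p);

// 3. Capture the span immediately after the completion
const span = getCurrentSpan();

// 4. Send feedback once the output has been evaluated
if (span) {
  await sendFeedback({
    completionId: span.id, // .id is an assumed property
    positive: isAcceptable(completion),
    reason: "Automated quality check",
  });
}
```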
Judge Feedback with Expected Score
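When an LLM judge scores the completion, the target score can accompany the rating. `expectedScore` is an assumed option name for judge-style feedback:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: false,
  reason: "Judge scored 0.4; target is at least 0.8",
  expectedScore: 0.8, // assumed option name
});
```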
Collecting User Feedback
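A common pattern is translating a thumbs-up/down UI event into a feedback payload. This sketch is self-contained; the payload field names are assumptions consistent with the examples above:

```typescript
// Map a thumbs-up/down UI event to a feedback payload.
type Thumb = "up" | "down";

interface FeedbackPayload {
  completionId: string;
  positive: boolean;
  reason?: string;
  metadata?: Record<string, unknown>;
}

function buildUserFeedback(
  completionId: string,
  thumb: Thumb,
  comment?: string
): FeedbackPayload {
  return {
    completionId,
    positive: thumb === "up",
    reason: comment,
    metadata: { source: "user" }, // tag the origin for analytics
  };
}

// Then: await sendFeedback(buildUserFeedback(span.id, "down", "Too verbose"));
```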
Error Handling
The function throws a `PromptRequestError` if the feedback request fails:
- 404 Not Found: Completion ID or prompt slug doesn’t exist
- 401 Unauthorized: Invalid or missing API key
- 400 Bad Request: Invalid parameters (e.g., missing required fields)
- 500 Server Error: Backend service error
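A defensive-handling sketch; `PromptRequestError` is the error type named above, while the import path and any status-inspection details are assumptions:

```typescript
import { sendFeedback, PromptRequestError } from "zeroeval"; // assumed import path

try {
  await sendFeedback({
    completionId: "span-123", // placeholder span ID
    positive: true,
  });
} catch (err) {
  if (err instanceof PromptRequestError) {
    // Feedback is best-effort: log and continue rather than crash.
    console.error("Feedback rejected:", err.message);
  } else {
    throw err; // unexpected failure, rethrow
  }
}
```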
Best Practices
Capture Span ID Early
Capture the span ID immediately after the completion:
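A sketch of the capture pattern: grab the ID right away, because the "current" span may change as other instrumented calls run. `getCurrentSpan()` is referenced on this page; the `.id` property and `callLlm` placeholder are assumptions:

```typescript
import { getCurrentSpan, sendFeedback } from "zeroeval"; // assumed import path

declare function callLlm(p: string): Promise<string>; // your provider call

const completion = await callLlm("rendered prompt");
// Capture immediately, before any other instrumented call runs.
const completionId = getCurrentSpan()?.id; // .id is an assumed property

// ...later, once `completion` has been evaluated:
if (completionId) {
  await sendFeedback({ completionId, positive: true });
}
```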
Provide Context with Reason
Always include a `reason` to help guide optimization:
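A specific, actionable reason gives the optimizer a concrete signal; the example text is illustrative:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: false,
  // Vague ("bad answer") helps little; be specific instead:
  reason: "Cited a deprecated endpoint instead of the current API",
});
```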
Include Expected Output for Negative Feedback
For negative feedback, provide `expectedOutput` to guide improvements:
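A sketch, with illustrative content and the assumed option names used above:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: false,
  reason: "Summary exceeded the two-sentence limit",
  expectedOutput: "A two-sentence summary of the support ticket.",
});
```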
Use Metadata for Analytics
Track custom metrics with `metadata`:
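The metadata keys below are illustrative; choose whatever dimensions you want to slice feedback by:

```typescript
import { sendFeedback } from "zeroeval"; // assumed import path

await sendFeedback({
  completionId: "span-123", // placeholder span ID
  positive: true,
  metadata: {
    promptVersion: "v3",  // which prompt variant produced this
    channel: "email",     // where it was used
    responseTimeMs: 1200, // custom performance metric
  },
});
```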
Handle Missing Span Gracefully
Always check if a span exists before sending feedback:
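A sketch of the guard, assuming `getCurrentSpan()` returns `undefined` (or `null`) when no span is active:

```typescript
import { getCurrentSpan, sendFeedback } from "zeroeval"; // assumed import path

const span = getCurrentSpan();
if (!span) {
  // No active span (e.g. tracing disabled): skip rather than throw.
  console.warn("No active span; skipping feedback");
} else {
  await sendFeedback({ completionId: span.id, positive: true });
}
```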
Integration with Prompt Optimization
Feedback you provide through `sendFeedback()` is used by ZeroEval to:
- Identify problematic completions: Negative feedback highlights areas for improvement
- Train optimization models: Feedback guides automatic prompt tuning
- Track performance trends: Monitor how prompt changes affect quality
- Prioritize optimizations: Focus on prompts with the most negative feedback
When you use `prompt()` in auto-optimization mode, ZeroEval automatically serves improved versions based on the feedback collected.
See Also
- `prompt()` - Fetch and manage versioned prompts
- `getCurrentSpan()` - Get the current span for feedback
- Prompt Management Guide - Learn about prompt optimization workflows