
Overview

Observatory’s feedback API allows you to collect user feedback (ratings and comments) and link it to specific AI agent runs. This helps you:
  • Track response quality over time
  • Identify problematic interactions
  • Improve your AI agent based on real user feedback
  • Build feedback loops into your application
Feedback is linked to runs using the unique runId. This allows you to correlate user satisfaction with specific inputs, outputs, and execution traces.

Quick Start

1. Import the feedback function

import { submitFeedback } from "@contextcompany/otel";

2. Submit feedback

await submitFeedback({
  runId: "run-123",
  score: "thumbs_up",
  text: "Great response!"
});

API Reference

submitFeedback(params)

Submit user feedback for a specific agent run.
function submitFeedback(params: {
  runId: string;
  score?: "thumbs_up" | "thumbs_down";
  text?: string;
}): Promise<Response | undefined>

Parameters

runId (string, required)
The unique identifier for the agent run. This is typically stored in your UI when the agent response is generated.

score ('thumbs_up' | 'thumbs_down', optional)
A binary rating for the response. Use thumbs_up for positive feedback and thumbs_down for negative feedback.

text (string, optional)
Free-form text comment from the user. Limited to 2000 characters.
At least one of score or text must be provided. You can submit both together or separately.

Returns

  • Success: Returns the Response object from the fetch request
  • Validation Error: Returns undefined and logs an error to console
  • Network Error: Returns undefined and logs an error to console

Example

// Submit only a score
await submitFeedback({
  runId: "run-abc-123",
  score: "thumbs_up"
});

// Submit only text
await submitFeedback({
  runId: "run-abc-123",
  text: "The response was accurate but too verbose."
});

// Submit both score and text
await submitFeedback({
  runId: "run-abc-123",
  score: "thumbs_down",
  text: "The agent misunderstood my question."
});

Implementation Examples

Next.js API Route

Create a feedback endpoint in your Next.js app:
app/api/feedback/route.ts
import { submitFeedback } from "@contextcompany/otel";
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  try {
    const body = await request.json();
    const { runId, score, comment } = body;

    if (!runId || typeof runId !== "string") {
      return NextResponse.json(
        { error: "Missing or invalid runId" },
        { status: 400 }
      );
    }

    if (score && !["thumbs_up", "thumbs_down"].includes(score)) {
      return NextResponse.json(
        { error: "Invalid score. Must be 'thumbs_up' or 'thumbs_down'" },
        { status: 400 }
      );
    }

    // Submit feedback to Observatory
    await submitFeedback({
      runId,
      score: score || undefined,
      text: comment || undefined,
    });

    return NextResponse.json({ success: true });
  } catch (error) {
    console.error("Error submitting feedback:", error);
    return NextResponse.json(
      { error: "Failed to submit feedback" },
      { status: 500 }
    );
  }
}

React Component

Build a feedback UI component:
components/feedback-buttons.tsx
"use client";

import { useState } from "react";

type FeedbackButtonsProps = {
  runId: string;
};

export function FeedbackButtons({ runId }: FeedbackButtonsProps) {
  const [selectedScore, setSelectedScore] = useState<
    "thumbs_up" | "thumbs_down" | null
  >(null);
  const [isSubmitting, setIsSubmitting] = useState(false);

  const handleFeedback = async (score: "thumbs_up" | "thumbs_down") => {
    if (isSubmitting) return;

    setIsSubmitting(true);
    setSelectedScore(score);

    try {
      const response = await fetch("/api/feedback", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ runId, score }),
      });

      if (!response.ok) {
        throw new Error("Failed to submit feedback");
      }
    } catch (error) {
      console.error("Error submitting feedback:", error);
      setSelectedScore(null);
    } finally {
      setIsSubmitting(false);
    }
  };

  return (
    <div className="flex gap-2 mt-2">
      <button
        onClick={() => handleFeedback("thumbs_up")}
        disabled={isSubmitting}
        className={selectedScore === "thumbs_up" ? "text-green-600" : ""}
      >
        👍
      </button>
      <button
        onClick={() => handleFeedback("thumbs_down")}
        disabled={isSubmitting}
        className={selectedScore === "thumbs_down" ? "text-red-600" : ""}
      >
        👎
      </button>
    </div>
  );
}

Usage in Chat Interface

app/page.tsx
import { FeedbackButtons } from "@/components/feedback-buttons";

export default function ChatPage() {
  // `messages` comes from your chat state or data layer
  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <p>{message.content}</p>
          {message.role === "assistant" && message.runId && (
            <FeedbackButtons runId={message.runId} />
          )}
        </div>
      ))}
    </div>
  );
}

Getting the Run ID

The runId is automatically added to spans by Observatory. Here’s how to access it:

From AI SDK (Vercel)

The AI SDK includes telemetry metadata in responses:
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4"),
  prompt: "Hello!",
});

// Access telemetry metadata
const runId = result.experimental_telemetry?.metadata?.runId;

From OpenTelemetry Context

You can also extract it from the active span:
import { trace } from "@opentelemetry/api";

const span = trace.getActiveSpan();
const runId = span?.attributes["tcc.runId"];

Store in Your Database

For persistent feedback collection, store the runId with your conversation:
// When saving the agent response
await db.messages.create({
  conversationId: "conv-123",
  content: result.text,
  runId: result.experimental_telemetry?.metadata?.runId,
});

// Later, when user submits feedback
const message = await db.messages.findById(messageId);
await submitFeedback({
  runId: message.runId,
  score: userScore,
});

Configuration

Environment Variables

TCC_API_KEY (string, required)
Your Observatory API key. The feedback function requires this to authenticate requests.
TCC_API_KEY=tcc_your_api_key_here

TCC_FEEDBACK_URL (string, optional)
Custom feedback endpoint URL. Defaults to https://api.thecontext.company/v1/feedback.
TCC_FEEDBACK_URL=https://api.example.com/feedback

Automatic Environment Detection

The feedback function automatically detects your environment:
  • API keys starting with dev_ use the development endpoint
  • Production API keys use the production endpoint
  • Custom endpoints override automatic detection
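The resolution order above can be sketched roughly as follows. This is an illustrative guess at the logic, not the SDK's actual internals; the function name and the development endpoint URL are assumptions.

```typescript
// Hypothetical endpoint values -- only the production default is documented.
const DEV_ENDPOINT = "https://dev.api.thecontext.company/v1/feedback";
const PROD_ENDPOINT = "https://api.thecontext.company/v1/feedback";

function resolveFeedbackUrl(apiKey: string, customUrl?: string): string {
  // A custom endpoint (e.g. TCC_FEEDBACK_URL) overrides automatic detection
  if (customUrl) return customUrl;
  // Keys prefixed with "dev_" route to the development endpoint
  return apiKey.startsWith("dev_") ? DEV_ENDPOINT : PROD_ENDPOINT;
}
```

In practice you rarely need this logic yourself; it only matters when debugging which environment a submission went to.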

Validation

The submitFeedback function validates input before sending:

Required Fields

// ❌ Error: at least one of score or text required
await submitFeedback({ runId: "run-123" });

// ✅ Valid: score provided
await submitFeedback({ runId: "run-123", score: "thumbs_up" });

// ✅ Valid: text provided
await submitFeedback({ runId: "run-123", text: "Good!" });

Text Length Limit

// ❌ Error: text exceeds 2000 characters
await submitFeedback({
  runId: "run-123",
  text: "A".repeat(2001),
});

// ✅ Valid: text within limit
await submitFeedback({
  runId: "run-123",
  text: "This is a reasonable length comment.",
});

Error Messages

Validation errors are logged to console:
[TCC] Cannot submit feedback: at least one of 'score' or 'text' must be provided
[TCC] Cannot submit feedback: text length (2500) exceeds maximum of 2000 characters
[TCC] Cannot submit feedback: TCC_API_KEY environment variable is not set

Error Handling

Network Errors

try {
  const response = await submitFeedback({
    runId: "run-123",
    score: "thumbs_up",
  });

  if (response?.ok) {
    console.log("Feedback submitted successfully");
  } else {
    console.error("Feedback submission failed");
  }
} catch (error) {
  console.error("Network error:", error);
}

Graceful Degradation

// Don't block your UI on feedback submission
const handleFeedback = async (score: "thumbs_up" | "thumbs_down") => {
  // Submit in background, don't await
  submitFeedback({ runId, score }).catch((error) => {
    // Log error but don't show to user
    console.error("Failed to submit feedback:", error);
  });

  // Update UI immediately
  setSelectedScore(score);
};

Advanced Patterns

Feedback with Comments Modal

Allow users to add detailed comments:
const [showCommentModal, setShowCommentModal] = useState(false);
const [comment, setComment] = useState("");

const handleCommentSubmit = async () => {
  await submitFeedback({
    runId,
    score: selectedScore || undefined,
    text: comment,
  });
  setShowCommentModal(false);
};

return (
  <>
    <button onClick={() => handleFeedback("thumbs_up")}>👍</button>
    <button onClick={() => handleFeedback("thumbs_down")}>👎</button>
    <button onClick={() => setShowCommentModal(true)}>💬 Comment</button>
    
    {showCommentModal && (
      <Modal>
        <textarea
          value={comment}
          onChange={(e) => setComment(e.target.value)}
          placeholder="Share your feedback..."
        />
        <button onClick={handleCommentSubmit}>Submit</button>
      </Modal>
    )}
  </>
);

Batch Feedback Collection

Collect feedback for multiple messages:
const submitBatchFeedback = async (
  feedbackItems: Array<{
    runId: string;
    score?: "thumbs_up" | "thumbs_down";
    text?: string;
  }>
) => {
  await Promise.all(
    feedbackItems.map((item) => submitFeedback(item))
  );
};

Analytics Integration

Track feedback in your analytics:
const handleFeedback = async (score: "thumbs_up" | "thumbs_down") => {
  // Submit to Observatory
  await submitFeedback({ runId, score });

  // Also track in your analytics
  analytics.track("feedback_submitted", {
    runId,
    score,
    timestamp: Date.now(),
  });
};

Best Practices

Store the runId when the agent response is generated:

const result = await generateText({ /* ... */ });
const runId = result.experimental_telemetry?.metadata?.runId;
// Store runId with the message for later feedback submission

Don't let feedback submission errors break your UI:

submitFeedback({ runId, score }).catch(console.error);

Add validation in both your UI and API route:

// Client: disable invalid states
<button disabled={!runId || isSubmitting}>

// Server: validate parameters
if (!runId || !score) return error();

Show users their feedback was received:

setSelectedScore(score);
toast.success("Thanks for your feedback!");

Troubleshooting

Feedback not appearing in dashboard

  1. Verify TCC_API_KEY is set
  2. Check the runId is valid (not null/undefined)
  3. Ensure you’re sending to the correct environment (dev vs prod)
  4. Look for error messages in server logs
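Checks 1 and 2 above are easy to automate with a small guard before calling submitFeedback. The helper below is illustrative and not part of the SDK:

```typescript
// Sanity-check the API key and runId before submitting feedback.
// Returns false (and logs why) when submission would be rejected.
function canSubmitFeedback(runId: unknown): runId is string {
  if (!process.env.TCC_API_KEY) {
    console.warn("[debug] TCC_API_KEY is not set");
    return false;
  }
  if (typeof runId !== "string" || runId.length === 0) {
    console.warn("[debug] runId is missing or invalid:", runId);
    return false;
  }
  return true;
}
```

Call it right before submitFeedback during debugging to see which precondition is failing.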

Invalid runId errors

  • Make sure you’re extracting the runId from the correct location
  • Verify the agent run completed successfully
  • Check that instrumentation is enabled

Rate limiting

Observatory may rate limit excessive feedback submissions:
  • Implement client-side debouncing
  • Cache feedback submissions and retry on failure
  • Contact support if you need higher limits
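One simple way to implement the client-side throttling suggested above is a per-run guard that drops repeat submissions inside a short window. This sketch is illustrative (makeFeedbackLimiter is not part of the SDK); the injectable clock exists only to make it testable:

```typescript
// Returns a function that answers: is it OK to submit feedback
// for this runId right now? Repeats within minIntervalMs are dropped.
function makeFeedbackLimiter(minIntervalMs = 1000, now = () => Date.now()) {
  const lastSent = new Map<string, number>();
  return (runId: string): boolean => {
    const t = now();
    const prev = lastSent.get(runId);
    if (prev !== undefined && t - prev < minIntervalMs) return false; // too soon
    lastSent.set(runId, t);
    return true;
  };
}

// Usage: gate submitFeedback behind the limiter
const shouldSubmit = makeFeedbackLimiter(1000);
```

A retry queue for failed submissions can layer on top of the same guard, re-enqueueing items whose fetch failed.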

Next Steps

OpenTelemetry

Learn how traces and spans work

TypeScript API

Full TypeScript API documentation
