Overview
Metadata allows you to attach custom key-value pairs to your runs for filtering, grouping, and analysis in the Observatory dashboard. This is useful for tracking:
- Agent versions and configurations
- User or tenant identifiers
- Feature flags
- Environment information
- Business context
- Performance tags
Metadata can be added to runs with the .metadata() builder method, or via the metadata field in the factory pattern. All metadata values must be strings.
Builder Pattern
import { run } from "@contextcompany/custom";
const r = run();
r.prompt("What's the weather?");
r.metadata({
agent: "weather-bot",
version: "1.2.0",
userId: "user_123",
environment: "production"
});
r.response("It's 72°F and sunny.");
await r.end();
Factory Pattern
import { sendRun } from "@contextcompany/custom";
await sendRun({
prompt: { user_prompt: "What's the weather?" },
response: "72°F and sunny",
startTime: new Date(),
endTime: new Date(),
metadata: {
agent: "weather-bot",
version: "1.2.0",
userId: "user_123"
}
});
Multiple calls to .metadata() are merged together:
const r = run();
r.prompt("test");
// Add initial metadata
r.metadata({ agent: "bot", version: "1.0" });
// Add more metadata later
r.metadata({ userId: "user_123" });
// Result: { agent: "bot", version: "1.0", userId: "user_123" }
await r.end();
Later calls to .metadata() will override values from earlier calls if the same keys are used.
r.metadata({ version: "1.0" });
r.metadata({ version: "1.1" }); // Overrides to "1.1"
Agent Identification
Track which agent or model configuration was used:
r.metadata({
agent: "customer-support-bot",
version: "2.1.0",
model: "gpt-4o",
temperature: "0.7"
});
User Context
Associate runs with specific users or tenants:
r.metadata({
userId: "user_abc123",
tenantId: "acme-corp",
userTier: "enterprise",
userRegion: "us-west"
});
Environment Information
Capture deployment and environment details:
r.metadata({
environment: process.env.NODE_ENV || "development",
deployment: process.env.VERCEL_ENV || "local",
region: process.env.AWS_REGION || "unknown"
});
Feature Flags
Track which features are enabled:
r.metadata({
feature_rag: "enabled",
feature_streaming: "disabled",
feature_multimodal: "enabled"
});
Business Context
Capture domain-specific information:
// E-commerce
r.metadata({
orderId: "order_456",
productCategory: "electronics",
orderValue: "299.99"
});
// Customer support
r.metadata({
ticketId: "TICKET-789",
priority: "high",
category: "billing"
});
// Healthcare
r.metadata({
patientId: "anon_123", // Use anonymized IDs
appointmentType: "consultation",
specialty: "cardiology"
});
Performance Tags
Tag runs with performance-related attributes:
r.metadata({
cacheHit: "true",
latencyBucket: "fast",
retryAttempt: "0"
});
Use Consistent Keys
Establish a consistent naming convention across your organization to make filtering and analysis easier.
// Good: Consistent naming
r.metadata({ agent_name: "bot", agent_version: "1.0" });
// Less ideal: Inconsistent naming
r.metadata({ agentName: "bot", agent_v: "1.0" });
Keep Values as Strings
Metadata values must be strings. Convert other types explicitly:
// Good
r.metadata({
count: String(10),
enabled: String(true),
price: String(99.99)
});
// Bad: Will cause errors
r.metadata({
count: 10, // TypeError: number
enabled: true, // TypeError: boolean
price: 99.99 // TypeError: number
});
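Since every value must be a string, a small helper can centralize the conversion instead of calling String() at each site. This is a sketch; `stringifyMetadata` is an illustrative name, not part of the SDK:

```typescript
// Convert an arbitrary record of primitives into the string-only
// shape that .metadata() expects.
function stringifyMetadata(
  values: Record<string, string | number | boolean>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(values)) {
    out[key] = String(value);
  }
  return out;
}

// Usage: r.metadata(stringifyMetadata({ count: 10, enabled: true, price: 99.99 }));
```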
Avoid Sensitive Data
Never include sensitive data in metadata, such as passwords, API keys, credit card numbers, or personally identifiable information (PII).
// Bad: Exposes sensitive data
r.metadata({
password: "secret123",
creditCard: "4111-1111-1111-1111",
ssn: "123-45-6789"
});
// Good: Use anonymized or hashed identifiers
r.metadata({
userIdHash: "a1b2c3d4",
transactionRef: "TXN-456"
});
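One way to produce a stable anonymized identifier is to hash the raw ID and truncate the digest, so the same user always maps to the same short value without being reversible in practice. This sketch uses Node's built-in crypto module; `hashUserId` is a hypothetical helper, not an SDK function:

```typescript
import { createHash } from "crypto";

// Derive a stable, non-reversible identifier from a raw user ID.
// Truncating the hex digest keeps the metadata value short while
// remaining consistent for the same input.
function hashUserId(rawId: string): string {
  return createHash("sha256").update(rawId).digest("hex").slice(0, 8);
}

// Usage: r.metadata({ userIdHash: hashUserId("user_abc123") });
```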
Design for Filtering
Design metadata keys with dashboard filtering in mind:
// These keys work well for filtering
r.metadata({
environment: "production", // Filter: production vs staging
status: "success", // Filter: success vs error
category: "customer-support", // Group by category
priority: "high" // Filter by priority
});
Keep Metadata Focused
Include only the metadata you’ll actually use for filtering or analysis:
// Good: Focused, useful metadata
r.metadata({
agent: "bot-v2",
userId: "user_123",
environment: "prod"
});
// Bad: Too much metadata
r.metadata({
agent: "bot-v2",
userId: "user_123",
userAgent: "Mozilla/5.0...",
ipAddress: "192.168.1.1",
requestHeaders: "{...}",
// ... 20 more fields
});
Dynamic Metadata
You can add metadata dynamically based on runtime conditions:
import { run } from "@contextcompany/custom";
const r = run();
r.prompt(userMessage);
// Base metadata
r.metadata({
agent: "assistant",
version: "2.0"
});
try {
const result = await agent.execute(userMessage);
// Add success metadata
r.metadata({
status: "success",
resultType: result.type
});
r.response(result.text);
await r.end();
} catch (e) {
// Add error metadata (narrow the unknown catch value first)
const err = e instanceof Error ? e : new Error(String(e));
r.metadata({
status: "error",
errorType: err.constructor.name,
errorCode: (err as { code?: string }).code || "unknown"
});
await r.error(String(e));
}
Limitations
Currently, metadata can be attached only to runs, not to individual steps or tool calls. If you need step-level or tool-level metadata, encode it in the prompt/response or use structured naming.
Workaround for step-level context:
const s = r.step();
// Include metadata in the step prompt
const stepMetadata = { stepType: "planning", phase: "1" };
s.prompt(JSON.stringify({
metadata: stepMetadata,
messages: [...]
}));
s.response(content);
s.end();
Workaround for tool call context:
const tc = r.toolCall("search");
// Include context in tool args
tc.args({
query: "...",
_context: {
caller: "research-agent",
phase: "discovery"
}
});
tc.result(searchResults);
tc.end();
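If several tool calls share the same context, a thin wrapper can merge it into each args payload under a reserved key. `withContext` here is an illustrative helper, not part of the SDK:

```typescript
// Merge shared context into a tool call's args under a reserved "_context" key,
// leaving the original arguments untouched.
function withContext<T extends Record<string, unknown>>(
  args: T,
  context: Record<string, string>
): T & { _context: Record<string, string> } {
  return { ...args, _context: context };
}

// Usage: tc.args(withContext({ query: "..." }, { caller: "research-agent", phase: "discovery" }));
```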
Examples
Multi-tenant Application
import { run } from "@contextcompany/custom";
interface TenantContext {
tenantId: string;
plan: string;
region: string;
}
class MultiTenantAgent {
private context: TenantContext;
constructor(context: TenantContext) {
this.context = context;
}
async process(userMessage: string) {
const r = run();
r.prompt(userMessage);
// Add tenant metadata to every run
r.metadata({
tenantId: this.context.tenantId,
plan: this.context.plan,
region: this.context.region,
agent: "multi-tenant-bot",
version: "3.0"
});
try {
const response = await this.executeAgent(userMessage);
r.response(response);
await r.end();
return response;
} catch (e) {
r.metadata({ errorCaught: "true" });
await r.error(String(e));
throw e;
}
}
private async executeAgent(message: string): Promise<string> {
// Your agent logic
return "Response";
}
}
A/B Testing
import { run } from "@contextcompany/custom";
function getExperimentVariant(userId: string): string {
// Hash the user ID so each user gets a stable variant assignment
let hash = 0;
for (const ch of userId) {
hash = (hash * 31 + ch.charCodeAt(0)) | 0;
}
return Math.abs(hash) % 2 === 0 ? "control" : "treatment";
}
async function processWithExperiment(userId: string, message: string) {
const variant = getExperimentVariant(userId);
const r = run();
r.prompt(message);
r.metadata({
experiment: "prompt-v2-test",
variant: variant,
userId: userId
});
// Use different logic based on variant
const response = variant === "treatment"
? await newPromptStrategy(message)
: await oldPromptStrategy(message);
r.response(response);
await r.end();
return response;
}
Request Tracing
import { run } from "@contextcompany/custom";
import { randomUUID } from "crypto";
class TracedAgent {
async processRequest(req: Request) {
const traceId = req.headers.get("x-trace-id") || randomUUID();
const spanId = randomUUID();
const r = run();
r.prompt(await req.text());
// Add distributed tracing metadata
r.metadata({
traceId: traceId,
spanId: spanId,
parentSpanId: req.headers.get("x-parent-span-id") || "none",
service: "agent-api",
endpoint: req.url
});
try {
const response = await this.execute();
r.response(response);
await r.end();
return new Response(response, {
headers: {
"x-trace-id": traceId,
"x-span-id": spanId
}
});
} catch (e) {
await r.error(String(e));
throw e;
}
}
private async execute(): Promise<string> {
return "Result";
}
}