
Common Issues

Data Not Appearing in Dashboard

Symptom: No data appears in the dashboard, and you see:
[TCC] No API key found. Set TCC_API_KEY or call configure({ apiKey }).
Solution: Set your API key using an environment variable:
TCC_API_KEY=tcc_abc123
Or programmatically:
import { configure } from "@contextcompany/custom";

configure({ apiKey: "tcc_abc123" });
Symptom: Code executes without errors, but no data appears in the dashboard.
Solution: Ensure you call .end() or .error() on every run:
const r = run();
r.prompt("test");
await r.end(); // Required!
Enable debug mode to verify:
configure({ debug: true });
You should see: [TCC] Payload sent successfully
Symptom: Error when ending run:
[TCC] 2 step(s) not ended. Call .end() on all steps before ending the run.
Solution: All steps and tool calls must be ended before ending the run:
const r = run();
r.prompt("test");

const s = r.step();
s.prompt("...");
s.response("...");
s.end(); // Required!

const tc = r.toolCall("search");
tc.args({ query: "..." });
tc.result({ ... });
tc.end(); // Required!

await r.end();
Symptom: Error when ending run:
[TCC] Run requires a prompt. Call .prompt() before .end()
Solution: Always call .prompt() before ending:
const r = run();
r.prompt("What's the weather?"); // Required!
await r.end();
Symptom: Error in console:
[TCC] Ingestion failed (401): Unauthorized
[TCC] Ingestion failed (403): Invalid API key
Solution:
  1. Verify your API key is correct
  2. Check that the key matches your environment (dev vs production)
  3. Ensure the API key hasn’t been revoked
// Development key (routes to dev environment)
configure({ apiKey: "dev_abc123" });

// Production key (routes to production)
configure({ apiKey: "tcc_abc123" });
Symptom: Data appears in the development dashboard but not production (or vice versa).
Solution: API keys automatically route to environments:
  • Keys starting with dev_ → Development
  • All other keys → Production
Ensure you’re using the correct API key for your environment.
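The routing rule above can be expressed as a one-line helper to log at startup (a sketch; `environmentForKey` is our name, not an SDK export):

```typescript
// Mirror of the documented routing rule: keys starting with "dev_"
// go to Development, everything else goes to Production.
type Environment = "development" | "production";

function environmentForKey(apiKey: string): Environment {
  return apiKey.startsWith("dev_") ? "development" : "production";
}
```

Logging `environmentForKey(process.env.TCC_API_KEY ?? "")` when your app boots makes mixed-up keys obvious before any data goes missing.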

Debugging

Enable Debug Mode

Debug mode outputs detailed logs to help troubleshoot issues:
import { configure } from "@contextcompany/custom";

configure({ debug: true });
Or via environment variable:
TCC_DEBUG=true
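If you toggle debug via the environment, a defensive parse avoids surprises from spellings like "TRUE" or "1" (our sketch; the SDK's actual parsing may differ):

```typescript
// Treat common truthy spellings as enabled; anything else as disabled.
function parseDebugFlag(value: string | undefined): boolean {
  if (!value) return false;
  return ["true", "1", "yes"].includes(value.trim().toLowerCase());
}

// e.g. configure({ debug: parseDebugFlag(process.env.TCC_DEBUG) });
```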

Understanding Debug Output

[TCC] Run created { "runId": "550e8400-..." }
[TCC] Step created { "stepId": "...", "runId": "..." }
[TCC] Run ended { "runId": "..." }
[TCC] Sending payload to https://api.thecontext.company/v1/custom
[TCC] Payload: { "type": "batch", "items": [...] }
[TCC] Payload sent successfully
Work through the debug output in order:
  1. Verify creation: look for Run created, Step created, and ToolCall created messages.
  2. Verify ending: look for Run ended, Step ended, and ToolCall ended messages.
  3. Verify sending: look for Sending payload and Payload sent successfully.
  4. Check for errors: look for Ingestion failed or network errors.
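The four checks above can be automated against captured console output (a sketch; the checkpoint strings come from the debug output shown earlier):

```typescript
// Given captured console lines, report which lifecycle checkpoints
// from the debug output are missing.
const CHECKPOINTS = [
  "Run created",
  "Run ended",
  "Sending payload",
  "Payload sent successfully",
] as const;

function missingCheckpoints(logLines: string[]): string[] {
  return CHECKPOINTS.filter(
    (checkpoint) => !logLines.some((line) => line.includes(checkpoint)),
  );
}
```

An empty result means the run made it all the way to ingestion; anything else tells you which stage to investigate first.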

Network Inspection

Inspect the actual HTTP request:
// Enable debug mode
configure({ debug: true });

const r = run();
r.prompt("test");
await r.end();

// Check console for:
// - Request URL
// - Payload structure
// - Response status

Error Messages

Run Errors

Run already ended
You called .end() or .error() multiple times on the same run.
const r = run();
r.prompt("test");
await r.end();
await r.end(); // Error: Run already ended
Solution: Ensure you only end each run once.
Run requires a prompt
You called .end() without first calling .prompt().
const r = run();
await r.end(); // Error: Run requires a prompt
Solution: Always set a prompt before ending.
X step(s) not ended
You created steps but didn’t call .end() on them.
const r = run();
r.prompt("test");
const s = r.step();
s.prompt("...");
s.response("...");
// Missing s.end()!
await r.end(); // Error: 1 step(s) not ended
Solution: Call .end() on all steps before ending the run.
X tool call(s) not ended
You created tool calls but didn’t call .end() on them.
Solution: Call .end() on all tool calls before ending the run.
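A try/finally wrapper makes it hard to forget .end() on steps and tool calls (a sketch; `withEnded` is our helper, not an SDK export, and it assumes anything exposing a synchronous .end() method):

```typescript
// Run `fn` against any object that exposes .end(), guaranteeing the
// object is ended even if `fn` throws.
function withEnded<T extends { end(): void }, R>(obj: T, fn: (o: T) => R): R {
  try {
    return fn(obj);
  } finally {
    obj.end();
  }
}
```

For example: `withEnded(r.step(), (s) => { s.prompt("..."); s.response("..."); });` ends the step on both the success and error paths.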

Step Errors

Step already ended
You called .end() or .error() multiple times on the same step.
Solution: Ensure you only end each step once.
Step requires a prompt
You called .end() without first calling .prompt().
const s = r.step();
s.response("...");
s.end(); // Error: Step requires a prompt
Solution: Set both prompt and response before ending.
Step requires a response
You called .end() without first calling .response().
Solution: Set both prompt and response before ending.

Tool Call Errors

ToolCall already ended
You called .end() or .error() multiple times on the same tool call.
Solution: Ensure you only end each tool call once.
ToolCall requires a name
You called .end() without setting a tool name.
const tc = r.toolCall();
tc.args({ ... });
tc.end(); // Error: ToolCall requires a name
Solution: Set the name in the constructor or via .name():
// Option 1: Constructor
const tc = r.toolCall("search");

// Option 2: Method
const tc = r.toolCall();
tc.name("search");

Performance Issues

High Latency

Observatory uses async requests with automatic retries, so it should not block your application.
If you’re experiencing high latency:
  1. Check your network connection to api.thecontext.company
  2. Verify you’re not in a region with high latency to the API
  3. Consider using a custom TCC_URL endpoint closer to your infrastructure
Very large prompts or responses can slow down transmission.
Solution: Truncate extremely large content:
const MAX_LENGTH = 50000; // 50KB

const s = r.step();
s.prompt(prompt.slice(0, MAX_LENGTH));
s.response(response.slice(0, MAX_LENGTH));
s.end();
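If you truncate, a helper that appends a marker keeps the cut visible in the dashboard (a sketch; `truncate` and the marker text are ours, not part of the SDK):

```typescript
// Cap content at maxLength characters, marking truncation explicitly
// so dashboard readers know the content is incomplete.
function truncate(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  const marker = " …[truncated]";
  return text.slice(0, Math.max(0, maxLength - marker.length)) + marker;
}
```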

Memory Leaks

If you create runs but never end them, they’ll accumulate in memory.
Solution: Always end or error runs:
async function safeRun(prompt: string) {
  const r = run();
  r.prompt(prompt);
  
  try {
    const result = await agent.execute(prompt);
    r.response(result);
    await r.end();
    return result;
  } catch (e) {
    await r.error(String(e));
    throw e;
  }
}
Enable auto-flush timeout as a safety net:
configure({ runTimeout: 600000 }); // 10 min

Integration Issues

Next.js

Symptom: No telemetry data captured in your Next.js app.
Solution:
  1. Ensure instrumentation.ts is in the root of your project (same level as app/ or pages/)
  2. Enable instrumentation in next.config.js:
next.config.js
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
  3. Restart your dev server
Symptom: Errors when using Observatory in React Server Components.
Solution: Observatory’s OpenTelemetry integration works automatically in Server Components. For client components, use the widget package.

Vercel

Symptom: TCC_API_KEY is not picked up in your Vercel deployment.
Solution: Add TCC_API_KEY to your Vercel project settings:
  1. Go to Project Settings → Environment Variables
  2. Add TCC_API_KEY with your API key
  3. Redeploy

TypeScript

Symptom: TypeScript compilation errors.
Solution:
  1. Ensure you’re using TypeScript 5.0 or later
  2. Check that types are installed:
pnpm add -D @types/node
  3. Verify your tsconfig.json includes:
{
  "compilerOptions": {
    "moduleResolution": "bundler",
    "module": "ESNext",
    "target": "ES2022"
  }
}

FAQ

How do I test locally without sending data to production?
Use a local ingestion endpoint:
configure({ 
  apiKey: "dev_test123",
  url: "http://localhost:8787/custom",
  debug: true 
});
Or just enable debug mode to see payloads in console:
configure({ debug: true });
Is Observatory safe to use in production?
Yes! Observatory is designed for production use. The SDK:
  • Uses async requests that don’t block your application
  • Includes automatic retries for transient failures
  • Gracefully handles missing API keys (logs warning and continues)
  • Has minimal performance overhead
What happens if the Observatory API is down or unreachable?
The SDK will:
  1. Retry the request up to 2 times with exponential backoff
  2. Log an error to console after retries are exhausted
  3. Continue running your application normally
Your application will not crash or hang if Observatory is unavailable.
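The retry behavior described above (up to 2 retries with exponential backoff) can be sketched as follows (our illustration only; `retryWithBackoff` and the delay values are assumptions, not the SDK's internals):

```typescript
// Attempt `fn` once plus up to `retries` retries, doubling the delay
// between attempts. Resolves with the first success, rejects with the
// last error once retries are exhausted.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  retries = 2,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (attempt < retries) {
        await new Promise((res) => setTimeout(res, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```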
What happens if I forget to end a run?
Runs have an auto-flush timeout that defaults to 20 minutes:
// Per-run timeout
const r = run({ timeout: 600000 }); // 10 min

// Global timeout
configure({ runTimeout: 600000 });

// Disable timeout
const r = run({ timeout: 0 });
If a run exceeds the timeout, it’s automatically sent with error status.
Can I modify or delete data after it has been sent?
Once data is sent to Observatory, it cannot be modified via the SDK. Contact support if you need to delete or modify historical data.
How do I instrument streaming responses?
For streaming responses, collect the full response before ending the step:
const r = run();
r.prompt("Generate a story");

const s = r.step();
s.prompt(JSON.stringify(messages));

let fullResponse = "";
for await (const chunk of stream) {
  fullResponse += chunk;
  // Stream to user
  yield chunk;
}

s.response(fullResponse);
s.end();

r.response(fullResponse);
await r.end();
Can I track multiple runs concurrently?
Each run is independent and can be processed concurrently:
const runs = await Promise.all(
  prompts.map(async (prompt) => {
    const r = run();
    r.prompt(prompt);
    const response = await agent.execute(prompt);
    r.response(response);
    await r.end();
    return response;
  })
);
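Promise.all starts every run at once; if you need to cap concurrency (to respect a model provider's rate limits, say), a small limiter works (a sketch; `mapWithConcurrency` is our helper, not an SDK export):

```typescript
// Map `items` through async `fn`, with at most `limit` calls in flight.
// Results are returned in input order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim the next index before awaiting
      results[i] = await fn(items[i]);
    }
  }
  const workerCount = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: workerCount }, worker));
  return results;
}
```

Each worker loops over shared indices, so at most `limit` runs are ever in flight while order is preserved.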

Getting Help

If you’re still experiencing issues:

GitHub Issues

Report bugs or request features

Documentation

Browse the full documentation

Discord Community

Get help from the community

Email Support

Contact the support team
When reporting issues, include:
  1. Debug logs (enable with debug: true)
  2. SDK version
  3. Runtime environment (Node.js version, framework, etc.)
  4. Minimal code reproduction
