Overview
The run() function creates a new run to track the execution of your AI agent. A run represents a single interaction or task execution, containing the prompt, response, metadata, and child steps or tool calls.
Function Signature
from contextcompany import run
run(
    run_id: Optional[str] = None,
    session_id: Optional[str] = None,
    conversational: Optional[bool] = None,
    api_key: Optional[str] = None,
    tcc_url: Optional[str] = None,
) -> Run
Parameters
run_id: Unique identifier for this run. If not provided, a UUID is generated automatically.
session_id: Session identifier to group multiple runs together (e.g., a conversation thread).
conversational: Whether this run is part of a conversational flow. Set to True for chat-based interactions.
api_key: Observatory API key. If not provided, uses the TCC_API_KEY environment variable.
tcc_url: Custom Observatory endpoint URL. If not provided, uses the TCC_URL environment variable or defaults to production.
Returns
Returns a Run object with the following methods:
prompt()
Set the prompt for the run:
r.prompt(
    user_prompt: str,
    system_prompt: Optional[str] = None
) -> Run
user_prompt: The user's input or query.
system_prompt: System instructions or context.
response()
Set the final response:
r.response(text: str) -> Run
text: The agent's final response text.
status()
Set the status code and optional message:
r.status(
    code: int,
    message: Optional[str] = None
) -> Run
code: Status code: 0 = success, 1 = partial success, 2 = error.
message: Human-readable status message.
metadata()
Attach custom metadata:
r.metadata(
    data: Optional[Dict[str, str]] = None,
    **kwargs: str
) -> Run
data: Dictionary of metadata key-value pairs.
**kwargs: Additional metadata as keyword arguments.
step()
Create a child step:
r.step(step_id: Optional[str] = None) -> Step
See step() for full documentation.
tool_call()
Create a child tool call:
r.tool_call(
    tool_name: Optional[str] = None,
    tool_call_id: Optional[str] = None
) -> ToolCall
See tool_call() for full documentation.
feedback()
Submit user feedback for this run:
r.feedback(
    score: Optional[Literal["thumbs_up", "thumbs_down"]] = None,
    text: Optional[str] = None
) -> bool
score: Thumbs up or thumbs down rating.
text: Textual feedback (max 2000 characters).
end()
Finalize and send the run data:
You must call r.prompt() before calling r.end(), or a ValueError will be raised.
error()
Mark the run as failed and send immediately:
r.error(status_message: str = "") -> None
status_message: Error message describing what went wrong.
Properties
run_id
Get the run identifier:
Usage Examples
Basic Usage
from contextcompany import run
r = run()
r.prompt(
user_prompt="Summarize this article",
system_prompt="You are a helpful assistant."
)
r.response("Here is a summary of the article...")
r.end()
Conversational Agent
from contextcompany import run
import uuid
session_id = str(uuid.uuid4())
# First message
r1 = run(session_id=session_id, conversational=True)
r1.prompt(user_prompt="Hello!")
r1.response("Hi! How can I help you?")
r1.end()
# Follow-up message
r2 = run(session_id=session_id, conversational=True)
r2.prompt(user_prompt="What's the weather?")
r2.response("Let me check the weather for you.")
r2.end()
Adding Metadata
from contextcompany import run
r = run()
r.prompt(user_prompt="Analyze sales data")
r.metadata(
user_id="user_123",
department="sales",
region="us-west"
)
r.response("Sales analysis complete.")
r.end()
Tracking Steps and Tool Calls
from contextcompany import run
r = run()
# Track an LLM call
s = r.step()
s.prompt("What's the weather in SF?")
s.response("I'll check the weather for you.")
s.model(requested="gpt-4", used="gpt-4")
s.tokens(prompt_uncached=20, completion=15)
s.end()
# Track a tool call
tc = r.tool_call(tool_name="get_weather")
tc.args({"location": "San Francisco"})
tc.result({"temperature": 65, "condition": "sunny"})
tc.end()
# Finalize the run
r.prompt(user_prompt="What's the weather in SF?")
r.response("It's 65°F and sunny in San Francisco.")
r.end()
Error Handling
from contextcompany import run
r = run()
r.prompt(user_prompt="Process this request")
try:
    # Your agent logic here
    result = process_request()
    r.response(result)
    r.end()
except Exception as e:
    r.error(f"Failed to process request: {e}")
Custom Status Codes
from contextcompany import run
r = run()
r.prompt(user_prompt="Generate report")
try:
    report = generate_report()
    if report.is_complete:
        r.status(0, "Success")  # Complete success
    else:
        r.status(1, "Partial success")  # Partial success
    r.response(report.content)
    r.end()
except Exception as e:
    r.error(f"Report generation failed: {e}")
Best Practices
- Always call end(): Ensure r.end() or r.error() is called to send the run data to Observatory.
- Set prompt before ending: The prompt is required before calling r.end().
- Use session_id for conversations: Group related runs together using the same session_id.
- Add meaningful metadata: Attach user IDs, request IDs, or other context to help with debugging and analysis.
- Handle errors gracefully: Use r.error() to capture and report failures.
See Also