Here’s a basic example of tracking an AI agent run:
```python
from contextcompany import run

# Create a new run
r = run()

# Set prompt and response
r.prompt(user_prompt="What is the weather in San Francisco?")
r.response("The weather in San Francisco is 65°F and sunny.")

# End the run
r.end()
```
A step represents an individual LLM call within a run. Steps track prompts, responses, token usage, and costs.
```python
from contextcompany import run

r = run()

# Create a step for an individual LLM call
s = r.step()
s.prompt("Analyze this data...")
s.response("Based on the analysis...")
s.tokens(prompt_uncached=100, completion=50)
s.end()

# Set the run-level prompt and response, then end the run
r.prompt(user_prompt="User query")
r.response("Final response")
r.end()
```