What is Tracing?
Tracing captures the complete execution flow of your AI agent as a hierarchical tree of operations. Each node in the tree represents a “run”—an LLM call, a tool execution, or a custom function you want to observe.
Think of tracing like a detailed flight recorder for your agent. It doesn’t just log what happened; it captures:
Inputs and outputs at every step
Timing and latency for performance analysis
Parent-child relationships showing how operations nest
Metadata and tags for filtering and organization
Errors and exceptions with full context
Setting Up LangSmith Tracing
1. Get Your API Key
First, create a LangSmith account at smith.langchain.com and generate an API key.
2. Configure Environment Variables
Add these to your .env file:
LANGSMITH_API_KEY="your_api_key_here"
LANGSMITH_TRACING=true
LANGSMITH_PROJECT="your-project-name"
Use different project names for different environments (e.g., my-agent-dev, my-agent-staging, my-agent-prod) to keep traces organized.
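Before wiring up tracing, it can help to confirm the settings actually load. Here's a small sketch; the helper name is ours, not part of langsmith:

```python
import os

# Hypothetical helper (not part of langsmith): report which required
# LangSmith settings are missing from the environment.
REQUIRED = ["LANGSMITH_API_KEY", "LANGSMITH_TRACING", "LANGSMITH_PROJECT"]

def missing_langsmith_settings(env=os.environ) -> list[str]:
    """Return the names of required settings that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]
```

Calling this right after `load_dotenv()` gives an early, explicit failure instead of silently dropping traces.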
3. Install Required Packages
pip install langsmith openai python-dotenv
Instrumenting Your Agent
Let’s see how to add tracing to a real agent. Here’s the evolution from agent_v0.py (no tracing) to agent_v1.py (with tracing):
Before: No Tracing
import os
from dotenv import load_dotenv
from openai import AsyncOpenAI

load_dotenv()

client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

async def chat(question: str) -> str:
    """Process a user question and return assistant response."""
    messages = [{"role": "user", "content": question}]
    response = await client.chat.completions.create(
        model="gpt-5-nano",
        messages=messages
    )
    return response.choices[0].message.content
After: With LangSmith Tracing
import os
from dotenv import load_dotenv
from openai import AsyncOpenAI
from langsmith import traceable
from langsmith.wrappers import wrap_openai

load_dotenv()

# Wrap the OpenAI client to automatically trace all LLM calls
client = wrap_openai(AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY")))

# Add @traceable decorator to create a trace for the entire chat function
@traceable(name="chat", run_type="chain")
async def chat(question: str) -> str:
    """Process a user question and return assistant response."""
    messages = [{"role": "user", "content": question}]
    response = await client.chat.completions.create(
        model="gpt-5-nano",
        messages=messages
    )
    return response.choices[0].message.content
Two Simple Changes
Wrap your OpenAI client with wrap_openai() to auto-trace LLM calls
Add @traceable decorator to functions you want to trace
That’s it! Now every call to chat() creates a trace in LangSmith showing:
The user’s question
The messages sent to the model
The model’s response
Token usage and latency
Any errors that occurred
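To build intuition for what each trace node contains, here is a toy sketch of a tracing decorator. This is not how langsmith is implemented; it only illustrates the kind of record (inputs, output, latency, errors) that @traceable captures per call:

```python
import functools
import time

# Toy in-memory "trace log"; the real library sends runs to LangSmith.
TRACE_LOG: list[dict] = []

def toy_traceable(name: str):
    """Illustrative stand-in for @traceable: record one run per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            run = {"name": name, "inputs": {"args": args, "kwargs": kwargs}}
            start = time.perf_counter()
            try:
                run["output"] = fn(*args, **kwargs)
                return run["output"]
            except Exception as exc:
                run["error"] = repr(exc)  # errors are captured with context
                raise
            finally:
                run["latency_s"] = time.perf_counter() - start
                TRACE_LOG.append(run)
        return wrapper
    return decorator
```

The real decorator also nests runs into the parent-child tree and streams them to the LangSmith backend; this sketch only shows the per-call record.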
Tracing Tool Calls
When your agent uses tools, you want to trace those too. Here's how the OfficeFlow agent traces its database queries:
from langsmith import traceable
import sqlite3

@traceable(name="query_database", run_type="tool")
def query_database(query: str, db_path: str) -> str:
    """Execute SQL query against the inventory database."""
    try:
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()
        cursor.execute(query)
        results = cursor.fetchall()
        conn.close()
        return str(results)
    except Exception as e:
        return f"Error: {e}"
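You can exercise the same query logic against a throwaway database before pointing it at real inventory data. The snippet below repeats the tool body without the decorator so it runs standalone; the table schema here is a stand-in, not the real OfficeFlow schema:

```python
import sqlite3

# Same logic as the traced tool above, minus the decorator, so this
# snippet runs without any LangSmith setup.
def query_database(query: str, db_path: str) -> str:
    try:
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()
        cursor.execute(query)
        results = cursor.fetchall()
        conn.close()
        return str(results)
    except Exception as e:
        return f"Error: {e}"

# Build a throwaway products table (schema is an assumption for the demo).
conn = sqlite3.connect("demo_inventory.db")
conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, stock INTEGER, price REAL)")
conn.execute("DELETE FROM products")
conn.execute("INSERT INTO products VALUES ('Premium Copy Paper', 450, 24.99)")
conn.commit()
conn.close()

result = query_database("SELECT * FROM products WHERE name LIKE '%Paper%'", "demo_inventory.db")
```

Note that the tool returns errors as strings rather than raising; that keeps the agent loop alive, and the error text still shows up in the trace output.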
Tracing Knowledge Base Search
@traceable(name="search_knowledge_base", run_type="tool")
async def search_knowledge_base(query: str, top_k: int = 2) -> str:
    """Search knowledge base using semantic similarity."""
    # Generate embedding for query
    response = await client.embeddings.create(
        model="text-embedding-3-small",
        input=query
    )
    query_embedding = response.data[0].embedding
    # Calculate similarity and return top results
    # ... (similarity calculation code)
    return "\n".join(results)
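The elided similarity step is typically cosine similarity against precomputed document embeddings. A minimal sketch follows; the document-store layout and helper names are assumptions for illustration, not the OfficeFlow code:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_documents(query_embedding, documents, top_k=2):
    """documents: list of (text, embedding) pairs with precomputed embeddings."""
    scored = sorted(
        documents,
        key=lambda doc: cosine_similarity(query_embedding, doc[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]
```

In practice you would use a vector library or database for this step, but the logic the trace records is the same: one embedding call, then a ranked lookup.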
Now when you look at a trace in LangSmith, you’ll see:
📊 chat (chain) - 2.3s
├── 🤖 ChatOpenAI (llm) - 0.8s
│   ├── Input: "Do you have printer paper?"
│   └── Output: [tool_call: query_database]
├── 🔧 query_database (tool) - 0.1s
│   ├── Input: "SELECT * FROM products WHERE name LIKE '%paper%'"
│   └── Output: [("Premium Copy Paper", 450, 24.99), ...]
└── 🤖 ChatOpenAI (llm) - 1.2s
    ├── Input: [previous messages + tool result]
    └── Output: "Yes, we have several printer paper options..."
Run Types Explained
The run_type parameter categorizes your traces:
llm: Direct calls to language models (usually auto-traced by wrap_openai)
chain: Sequences of operations, orchestration logic, or main entry points
tool: Tool executions like database queries, API calls, or retrieval operations
retriever: Specialized retrieval operations like vector database searches
embedding: Embedding generation (also auto-traced when using wrapped clients)
agent: High-level agent execution (alternative to "chain" for agent entry points)
Use consistent run types across your codebase. This makes filtering and analysis much easier in the LangSmith UI.
Adding Metadata and Tags
Enrich your traces with context:
from langsmith import traceable, uuid7

thread_id = str(uuid7())

@traceable(
    name="Emma",
    run_type="chain",
    metadata={"thread_id": thread_id, "version": "v1"}
)
async def chat(question: str) -> str:
    # Your agent logic here
    pass
You can also add tags to help filter traces in the UI. Static tags go on the decorator; per-call tags can be passed at call time through the langsmith_extra keyword that traceable-wrapped functions accept:
from langsmith import traceable

@traceable(tags=["production", "customer-support"])
async def chat(question: str) -> str:
    # Your logic here
    pass

# Add context-dependent tags at the call site, e.g. for urgent questions:
# await chat(question, langsmith_extra={"tags": ["urgent"]})
Here’s a simplified version of the OfficeFlow agent showing complete instrumentation:
from openai import AsyncOpenAI
from langsmith import traceable, uuid7
from langsmith.wrappers import wrap_openai
import json

client = wrap_openai(AsyncOpenAI())
thread_id = str(uuid7())

@traceable(name="query_database", run_type="tool")
def query_database(query: str, db_path: str) -> str:
    """Execute SQL query against the inventory database."""
    # Database logic here
    pass

@traceable(name="Emma", metadata={"thread_id": thread_id})
async def chat(question: str) -> str:
    """Process a user question and return assistant response."""
    tools = [QUERY_DATABASE_TOOL]  # Tool schema definition
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question}
    ]
    # First LLM call - auto-traced by wrap_openai
    response = await client.chat.completions.create(
        model="gpt-5-nano",
        messages=messages,
        tools=tools
    )
    response_message = response.choices[0].message
    # Handle tool calls
    if response_message.tool_calls:
        # The assistant's tool-call turn must precede the tool results
        messages.append(response_message)
        for tool_call in response_message.tool_calls:
            function_args = json.loads(tool_call.function.arguments)
            # This call is traced because of the @traceable decorator
            result = query_database(
                query=function_args.get("query"),
                db_path=db_path
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result
            })
    # Second LLM call with tool results - also auto-traced
    response = await client.chat.completions.create(
        model="gpt-5-nano",
        messages=messages
    )
    return response.choices[0].message.content
This creates a beautiful hierarchical trace showing:
The main chat function execution
First LLM call to decide on tool usage
Tool execution (query_database)
Second LLM call with tool results
Final response to the user
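The example leaves QUERY_DATABASE_TOOL undefined. A plausible definition in the OpenAI function-calling format is shown below; the name matches the traced tool above, but the description text and parameter wording are our assumptions:

```python
# Hypothetical tool schema for query_database, in the OpenAI
# function-calling format that chat.completions.create(tools=...) expects.
QUERY_DATABASE_TOOL = {
    "type": "function",
    "function": {
        "name": "query_database",
        "description": "Execute a read-only SQL query against the inventory database.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A SQL SELECT statement to run.",
                },
            },
            "required": ["query"],
        },
    },
}
```

The "name" field is what the model emits in tool_calls, which is why it must match the key you dispatch on when handling the call.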
Viewing Traces in LangSmith
Once your agent is instrumented:
Run your agent normally
Visit smith.langchain.com
Navigate to your project
Click on any trace to see the detailed execution tree
You can:
Expand/collapse nodes to focus on specific parts
View inputs and outputs at each level
See timing information for performance analysis
Share trace URLs with teammates for debugging
Filter by metadata, tags, or run type
Pro tip : When debugging, add the LangSmith trace URL to your issue tracker. This gives you and your team the exact context needed to investigate problems.
Common Patterns
Pattern 1: Tracing Simple Tool-Calling Agents
Even simple agents benefit from tracing:
from openai import OpenAI
from langsmith.wrappers import wrap_openai
from langsmith import traceable

client = wrap_openai(OpenAI())

@traceable(run_type="tool")
def weather_retriever():
    """Retrieve current weather information."""
    return "It is sunny today"

@traceable
def agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    response = client.chat.completions.create(
        model="gpt-5-nano",
        messages=messages,
        tools=[WEATHER_TOOL]
    )
    # Handle tool calling logic...
    return final_response
Pattern 2: Tracing Conversational Agents
For agents with conversation history:
thread_store: dict[str, list] = {}

@traceable
async def chat(question: str, thread_id: str) -> str:
    # Fetch conversation history
    history = thread_store.get(thread_id, [])
    messages = [
        {"role": "system", "content": system_prompt}
    ] + history + [
        {"role": "user", "content": question}
    ]
    # Process with full context...

# The decorator can't see the thread_id argument, so attach it as
# metadata at call time via langsmith_extra:
# await chat(question, thread_id, langsmith_extra={"metadata": {"thread_id": thread_id}})
The thread_id metadata lets you filter traces by conversation in the UI.
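The history bookkeeping in this pattern can be exercised without any LLM call. A minimal sketch follows; the helper names append_turn and build_messages are ours, for illustration:

```python
# In-memory conversation store, keyed by thread_id.
thread_store: dict[str, list] = {}

def append_turn(thread_id: str, question: str, answer: str) -> None:
    """Persist one user/assistant exchange for a conversation thread."""
    history = thread_store.setdefault(thread_id, [])
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})

def build_messages(thread_id: str, question: str, system_prompt: str) -> list:
    """Assemble the message list the same way the chat() pattern above does."""
    history = thread_store.get(thread_id, [])
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": question}]
    )
```

Keeping this assembly in a helper also makes the trace inputs easy to read: the full message list the model saw is right there on the chat run.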
Best Practices
Trace Entry Points: Always trace your main agent function; this creates the root node that contains all other operations.
Trace Tools: Every tool should be traced so you can see exactly what arguments were passed and what was returned.
Use Descriptive Names: Name traces after their purpose: "query_database", "search_knowledge", not "function_1", "helper_2".
Add Context with Metadata: Include user IDs, session IDs, version numbers, or any context that helps you filter and analyze traces later.
Troubleshooting
Traces Not Appearing?
Check these common issues:
Environment variables : Verify LANGSMITH_TRACING=true and LANGSMITH_API_KEY are set
Project name : Ensure LANGSMITH_PROJECT is configured
Network access : LangSmith needs to send traces to the API (check firewalls)
Client wrapping : Make sure you’re using wrap_openai() or the @traceable decorator
Traces Too Verbose?
You can control granularity:
# Trace only the main agent, not every internal helper
@traceable
async def chat(question: str) -> str:
    # This is traced
    result = await process_question(question)
    return result

# Don't trace internal helpers unless needed
async def process_question(question: str):
    # Not traced (no decorator)
    pass
Next Steps
Now that you can trace your agents:
Evaluation Strategies Learn how to use traces to systematically evaluate agent performance
Analyzing Traces Discover techniques for debugging and improving agents using trace data