The `@traceable` decorator automatically creates traces for function calls, capturing inputs, outputs, errors, and timing information. It works with both sync and async functions. The `trace` function is an alias for `traceable` and can be used interchangeably.
Basic usage

```python
from langsmith import traceable

@traceable
def my_function(input_text: str) -> str:
    # Your logic here
    return f"Processed: {input_text}"

result = my_function("hello")  # Automatically traced to LangSmith
```
Parameters

name
Custom name for the traced operation. Defaults to the function name.

```python
@traceable(name="custom-operation")
def my_function():
    pass
```

run_type
Type of run to create. Options: "llm", "chain", "tool", "retriever", "prompt". Default is "chain".

```python
@traceable(run_type="tool")
def search_database(query: str):
    return results
```

project_name
LangSmith project to log traces to. Overrides the default project.

```python
@traceable(project_name="my-project")
def my_function():
    pass
```

tags
Tags to attach to the trace for filtering and organization.

```python
@traceable(tags=["production", "v2"])
def my_function():
    pass
```

metadata
Additional metadata to attach to the trace.

```python
@traceable(metadata={"version": "1.0", "team": "ml"})
def my_function():
    pass
```

client
Custom LangSmith client to use for tracing.

```python
from langsmith import Client, traceable

custom_client = Client(api_key="...")

@traceable(client=custom_client)
def my_function():
    pass
```
reduce_fn
Callable[[Sequence], dict | str] | None
Function to reduce/aggregate outputs from generators or iterators.

```python
@traceable(reduce_fn=lambda outputs: {"combined": "".join(outputs)})
def generate_text(chunks: list[str]):
    for chunk in chunks:
        yield chunk
```
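A reduce function is plain Python, so it can be checked on its own before wiring it into the decorator. The sketch below (`combine_chunks` is a hypothetical name for the lambda above) simulates what the tracer does conceptually: collect every yielded value, then reduce once at the end.

```python
def combine_chunks(outputs):
    """Join string chunks into one dict, mirroring the reduce_fn example."""
    return {"combined": "".join(outputs)}

def generate_text(chunks):
    for chunk in chunks:
        yield chunk

# Simulate the decorator's behavior: gather all yields, then reduce.
collected = list(generate_text(["Hello", ", ", "world"]))
result = combine_chunks(collected)
print(result)  # {'combined': 'Hello, world'}
```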
process_inputs
Callable[[dict], dict] | None
Function to process/filter inputs before tracing. Useful for hiding sensitive data.

```python
def hide_sensitive(inputs: dict) -> dict:
    return {k: v for k, v in inputs.items() if k != "api_key"}

@traceable(process_inputs=hide_sensitive)
def my_function(data: str, api_key: str):
    pass
```
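Because the redaction hook is an ordinary function over the inputs dict, it is easy to unit-test in isolation, independent of any tracing:

```python
def hide_sensitive(inputs: dict) -> dict:
    """Drop the api_key field before inputs are logged to the trace."""
    return {k: v for k, v in inputs.items() if k != "api_key"}

sanitized = hide_sensitive({"data": "payload", "api_key": "dummy-key"})
print(sanitized)  # {'data': 'payload'}
```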
process_outputs
Callable[..., dict] | None
Function to process/filter outputs before tracing.

```python
@traceable(process_outputs=lambda x: {"result": x[:100]})
def my_function():
    return "very long output..."
```
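Like `process_inputs`, an output hook can be exercised standalone; `truncate_output` below is a hypothetical named version of the lambda in the example above:

```python
def truncate_output(output: str) -> dict:
    """Keep only the first 100 characters of the output in the trace."""
    return {"result": output[:100]}

summary = truncate_output("x" * 500)
print(len(summary["result"]))  # 100
```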
process_chunk
Function to process individual chunks from streaming responses.
Async functions
The decorator works seamlessly with async functions:

```python
import asyncio

from langsmith import traceable

@traceable
async def async_function(input_text: str) -> str:
    await asyncio.sleep(1)
    return f"Processed: {input_text}"

result = await async_function("hello")
```
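To see why one decorator can serve both sync and async functions, here is a minimal illustrative sketch (not LangSmith's actual implementation): the decorator inspects the function and returns a matching wrapper.

```python
import asyncio
import functools
import inspect

def toy_traceable(func):
    """Illustrative only: dispatch to an async or sync wrapper."""
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            # A real tracer would record timing and inputs/outputs here.
            return await func(*args, **kwargs)
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return sync_wrapper

@toy_traceable
async def async_function(input_text: str) -> str:
    await asyncio.sleep(0)
    return f"Processed: {input_text}"

result = asyncio.run(async_function("hello"))
print(result)  # Processed: hello
```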
Nested tracing
Traces are automatically nested when calling traced functions from other traced functions:

```python
@traceable(name="helper")
def helper_function(x: int) -> int:
    return x * 2

@traceable(name="main")
def main_function(x: int) -> int:
    # This creates a nested trace under main_function
    result = helper_function(x)
    return result + 1

main_function(5)  # Creates a parent trace with a child trace
```
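The parent/child relationship can be sketched with a context variable that tracks the currently running trace; this is an illustration of the mechanism, not the library's implementation:

```python
import contextvars
import functools

_current_run = contextvars.ContextVar("current_run", default=None)
runs = []  # flat log of (name, parent_name) pairs

def toy_traceable(name):
    """Illustrative nesting via a context variable."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            parent = _current_run.get()
            runs.append((name, parent))
            token = _current_run.set(name)
            try:
                return func(*args, **kwargs)
            finally:
                _current_run.reset(token)
        return wrapper
    return decorator

@toy_traceable("helper")
def helper_function(x):
    return x * 2

@toy_traceable("main")
def main_function(x):
    return helper_function(x) + 1

value = main_function(5)
print(runs)  # [('main', None), ('helper', 'main')]
```

A context variable (rather than a global) keeps nesting correct even when traced functions run concurrently in async code.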
Generators and streaming
The decorator supports generator functions for streaming scenarios:

```python
@traceable
def generate_tokens(text: str):
    for token in text.split():
        yield token

# All yielded values are collected and traced
for token in generate_tokens("hello world"):
    print(token)
```

Use reduce_fn to aggregate streaming outputs:

```python
@traceable(reduce_fn=lambda chunks: {"full_text": "".join(chunks)})
def stream_response():
    for chunk in ["Hello", " ", "world"]:
        yield chunk
```
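A key property of the streaming support is that the caller still receives chunks incrementally; conceptually the tracer buffers a copy of each chunk on the side and reduces the buffer only when the stream is exhausted. A sketch of that idea:

```python
def stream_response():
    for chunk in ["Hello", " ", "world"]:
        yield chunk

collected = []  # tracer-side buffer
streamed = []   # what the caller sees, chunk by chunk

for chunk in stream_response():
    collected.append(chunk)
    streamed.append(chunk)

# Reduce only once, after the generator is done.
reduced = {"full_text": "".join(collected)}
print(streamed)  # ['Hello', ' ', 'world']
print(reduced)   # {'full_text': 'Hello world'}
```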
Override tracing parameters at runtime using the langsmith_extra parameter:

```python
@traceable
def my_function(input_text: str, langsmith_extra: dict | None = None) -> str:
    return f"Processed: {input_text}"

# Override name and tags at runtime
my_function(
    "hello",
    langsmith_extra={
        "name": "custom-name",
        "tags": ["special"],
        "metadata": {"user_id": "123"},
    },
)
```
Available fields in langsmith_extra:
- project_name: Override the project name.
- metadata: Override or add metadata.
Conditional tracing
Control whether tracing is enabled using environment variables or the tracing_context manager:

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "false"

from langsmith import traceable

@traceable
def my_function():
    pass

my_function()  # Not traced
```

Or use tracing_context:

```python
from langsmith import traceable, tracing_context

@traceable
def my_function():
    pass

with tracing_context(enabled=False):
    my_function()  # Not traced
```
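The context-manager pattern can be illustrated with a toy enabled flag stored in a context variable; again this is a sketch of the mechanism, not langsmith's implementation:

```python
import contextvars
from contextlib import contextmanager

_tracing_enabled = contextvars.ContextVar("tracing_enabled", default=True)
traced_calls = []

@contextmanager
def toy_tracing_context(enabled: bool):
    """Illustrative stand-in for langsmith.tracing_context."""
    token = _tracing_enabled.set(enabled)
    try:
        yield
    finally:
        _tracing_enabled.reset(token)

def toy_traceable(func):
    def wrapper(*args, **kwargs):
        if _tracing_enabled.get():
            traced_calls.append(func.__name__)  # "record" the trace
        return func(*args, **kwargs)
    return wrapper

@toy_traceable
def my_function():
    return "ok"

my_function()                        # traced
with toy_tracing_context(enabled=False):
    my_function()                    # not traced

print(traced_calls)  # ['my_function']
```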
Error handling
Errors are automatically captured and included in the trace:

```python
@traceable
def failing_function():
    raise ValueError("Something went wrong")

try:
    failing_function()
except ValueError:
    pass  # Error is logged in the trace
```
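The important detail is that the decorator records the error and then re-raises it, so callers see the original exception unchanged. A minimal sketch of that behavior (illustrative only):

```python
import functools

trace_log = []

def toy_traceable(func):
    """Illustrative: record the error on the trace, then re-raise."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
        except Exception as exc:
            trace_log.append({"name": func.__name__, "error": repr(exc)})
            raise  # the caller still sees the original exception
        trace_log.append({"name": func.__name__, "error": None})
        return result
    return wrapper

@toy_traceable
def failing_function():
    raise ValueError("Something went wrong")

try:
    failing_function()
except ValueError:
    pass

print(trace_log)  # one entry, with the error captured
```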
Best practices

- Use descriptive names: Help identify operations in the UI

  ```python
  @traceable(name="fetch-user-data")
  def get_user(user_id: str):
      pass
  ```

- Set appropriate run types: Makes filtering easier

  ```python
  @traceable(run_type="retriever")
  def search_docs(query: str):
      pass
  ```

- Redact sensitive data: Use process_inputs and process_outputs

  ```python
  @traceable(process_inputs=lambda x: {"query": x.get("query")})
  def query_with_key(query: str, api_key: str):
      pass
  ```

- Add context with metadata: Include version and environment info

  ```python
  @traceable(metadata={"version": "2.0", "env": "prod"})
  def my_function():
      pass
  ```