The Client class is the primary interface for interacting with the LangSmith API. Use it to configure API keys, workspace connections, and SSL certificates, and to manage LangSmith resources such as runs, datasets, examples, and feedback.

Constructor

from langsmith import Client

client = Client(
    api_url="https://api.smith.langchain.com",
    api_key="your-api-key"
)
api_url
str | None
URL for the LangSmith API. Defaults to the LANGCHAIN_ENDPOINT or LANGSMITH_ENDPOINT environment variable, or https://api.smith.langchain.com if not set.
api_key
str | None
API key for authentication. Defaults to the LANGCHAIN_API_KEY or LANGSMITH_API_KEY environment variable.
timeout_ms
int | tuple[int, int, int, int] | None
Timeout for requests in milliseconds. Can be a single value or a tuple of (connect, read, write, pool) timeouts.
web_url
str | None
URL for the LangSmith web application. Used for generating links to traces and datasets.
session
requests.Session | None
Custom requests session to use for HTTP calls.
auto_batch_tracing
bool
Whether to automatically batch tracing data. Default is True.
hide_inputs
Callable[[dict], dict] | None
Function to hide or redact sensitive input data before sending to LangSmith.
hide_outputs
Callable[[dict], dict] | None
Function to hide or redact sensitive output data before sending to LangSmith.
info
ls_schemas.LangSmithInfo | None
Additional information about the LangSmith instance.

Core methods

create_run

Create a new run (trace span) in LangSmith.
client.create_run(
    name="my-chain",
    inputs={"query": "What is LangSmith?"},
    run_type="chain",
    project_name="my-project"
)
name
str
required
Name of the run.
inputs
dict[str, Any]
required
Input data for the run.
run_type
str
required
Type of run: "llm", "chain", "tool", "retriever", or "prompt".
project_name
str | None
Project to log the run to. Defaults to the LANGCHAIN_PROJECT environment variable.
run_id
UUID | str | None
Unique identifier for the run. Auto-generated if not provided.
parent_run_id
UUID | str | None
ID of the parent run for nested traces.
start_time
datetime | None
Start time of the run. Defaults to current time.
tags
list[str] | None
Tags to attach to the run for filtering and organization.
metadata
dict[str, Any] | None
Additional metadata to attach to the run.
return
None
This method queues the run for background upload and returns immediately.

update_run

Update an existing run with outputs and end time.
from datetime import datetime

client.update_run(
    run_id=run_id,
    outputs={"answer": "LangSmith is a platform for..."},
    end_time=datetime.now()
)
run_id
UUID | str
required
ID of the run to update.
outputs
dict[str, Any] | None
Output data from the run.
error
str | None
Error message if the run failed.
end_time
datetime | None
End time of the run.
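
Putting create_run and update_run together, a common pattern is to open a run before calling a function and close it with either outputs or an error. The sketch below uses a stub client so it runs without a LangSmith connection; in real code you would pass a langsmith.Client, and the helper name traced_call is invented for illustration:

```python
from datetime import datetime, timezone
from uuid import uuid4

def traced_call(client, fn, name: str, inputs: dict) -> dict:
    """Sketch of a create_run/update_run lifecycle around a function call.
    `client` is anything exposing create_run/update_run (e.g. langsmith.Client)."""
    run_id = uuid4()
    client.create_run(
        name=name,
        inputs=inputs,
        run_type="chain",
        run_id=run_id,
        start_time=datetime.now(timezone.utc),
    )
    try:
        outputs = fn(inputs)
        client.update_run(run_id=run_id, outputs=outputs,
                          end_time=datetime.now(timezone.utc))
        return outputs
    except Exception as exc:
        # Record the failure on the run before re-raising.
        client.update_run(run_id=run_id, error=str(exc),
                          end_time=datetime.now(timezone.utc))
        raise

# Minimal stub standing in for langsmith.Client, so the sketch runs offline.
class StubClient:
    def __init__(self):
        self.calls = []
    def create_run(self, **kwargs):
        self.calls.append(("create_run", kwargs))
    def update_run(self, **kwargs):
        self.calls.append(("update_run", kwargs))

stub = StubClient()
result = traced_call(stub, lambda i: {"answer": i["query"].upper()},
                     "my-chain", {"query": "hi"})
```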

create_feedback

Create feedback (metric/score) for a run.
client.create_feedback(
    run_id=run_id,
    key="accuracy",
    score=0.95,
    comment="Excellent response"
)
run_id
UUID | str
required
ID of the run to attach feedback to.
key
str
required
Name of the metric or feedback type.
score
float | int | bool | None
Numeric score for the feedback.
value
str | dict | None
Non-numeric value for the feedback.
comment
str | None
Explanation or context for the feedback.
correction
dict | None
Suggested correction if the output was incorrect.
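
The score and value parameters are alternatives: numeric judgments go in score, non-numeric ones in value. One way to keep that straight is a small dispatch helper (hypothetical, not part of the SDK):

```python
def feedback_kwargs(key, judgment):
    """Hypothetical helper: numeric judgments (including bools) go in
    `score`, everything else in `value`, per create_feedback's parameters."""
    if isinstance(judgment, (int, float)):  # bool is a subclass of int
        return {"key": key, "score": judgment}
    return {"key": key, "value": judgment}

# client.create_feedback(run_id=run_id, **feedback_kwargs("accuracy", 0.95))
# client.create_feedback(run_id=run_id, **feedback_kwargs("sentiment", "positive"))
```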

Dataset methods

create_dataset

Create a new dataset for evaluation.
dataset = client.create_dataset(
    dataset_name="my-dataset",
    description="QA pairs for evaluation"
)
dataset_name
str
required
Name of the dataset.
description
str | None
Description of the dataset’s purpose.
data_type
DataType | None
Type of data: "kv", "llm", or "chat".
return
Dataset
The created dataset object.

create_example

Add an example (record) to a dataset.
example = client.create_example(
    inputs={"question": "What is LangSmith?"},
    outputs={"answer": "A platform for LLM development"},
    dataset_id=dataset.id
)
inputs
dict[str, Any]
required
Input data for the example.
outputs
dict[str, Any] | None
Expected output data (ground truth).
dataset_id
UUID | str
required
ID of the dataset to add the example to.
metadata
dict[str, Any] | None
Additional metadata for the example.
return
Example
The created example object.
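
To seed a dataset in bulk, you can map raw records onto create_example keyword arguments and loop. The QA pairs below are invented sample data, and the client/dataset wiring is shown in the trailing comment since it requires a live connection:

```python
# Invented sample data for illustration.
qa_pairs = [
    ("What is LangSmith?", "A platform for LLM development"),
    ("What is a run?", "A single traced span"),
]

def to_example_kwargs(question: str, answer: str) -> dict:
    # Shape each record as inputs (the question) plus outputs (ground truth).
    return {
        "inputs": {"question": question},
        "outputs": {"answer": answer},
    }

batch = [to_example_kwargs(q, a) for q, a in qa_pairs]
# for kwargs in batch:
#     client.create_example(dataset_id=dataset.id, **kwargs)
```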

list_datasets

List all datasets in your workspace.
for dataset in client.list_datasets():
    print(dataset.name, dataset.example_count)
return
Iterator[Dataset]
Iterator of dataset objects.

Prompt methods

pull_prompt

Pull a prompt from the LangSmith prompt hub.
prompt = client.pull_prompt("my-prompt")
# or with version
prompt = client.pull_prompt("username/my-prompt:abc123")
prompt_identifier
str
required
Prompt identifier in format "name", "owner/name", "name:hash", or "owner/name:hash".
include_model
bool | None
Whether to include model configuration in the prompt.
return
Any
The prompt object (typically a LangChain prompt template).
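
The four identifier formats above can be read as optional owner and commit-hash parts around a required name. A purely illustrative parser (not part of the SDK) makes the grammar concrete:

```python
def parse_prompt_identifier(identifier):
    """Illustrative helper, not part of the SDK: split a prompt
    identifier into (owner, name, commit_hash), accepting "name",
    "owner/name", "name:hash", and "owner/name:hash"."""
    owner = commit = None
    if ":" in identifier:
        identifier, commit = identifier.rsplit(":", 1)
    if "/" in identifier:
        owner, identifier = identifier.split("/", 1)
    return owner, identifier, commit

print(parse_prompt_identifier("username/my-prompt:abc123"))
# → ('username', 'my-prompt', 'abc123')
```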

push_prompt

Push a prompt to the LangSmith prompt hub.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}")
])

client.push_prompt("my-prompt", object=prompt)
prompt_identifier
str
required
Name or identifier for the prompt.
object
Any
required
The prompt object to push (typically a LangChain prompt).
is_public
bool
Whether to make the prompt publicly accessible. Default is False.
return
str
URL of the pushed prompt.

Evaluation methods

evaluate

Evaluate a target system on a dataset.
def my_function(inputs):
    # Replace `process` with your application logic.
    return {"output": process(inputs["input"])}

results = client.evaluate(
    my_function,
    data="my-dataset",
    evaluators=[accuracy_evaluator],
    experiment_prefix="my-experiment"
)
See the evaluate() documentation for full details.
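
A sketch of what an evaluator like the accuracy_evaluator above might look like. Recent SDK versions accept evaluator functions with dict-based arguments such as (outputs, reference_outputs); check the evaluate() documentation for the signatures your installed version supports. The "output"/"answer" keys are assumptions matching the examples in this page:

```python
def accuracy_evaluator(outputs: dict, reference_outputs: dict) -> dict:
    """Exact-match evaluator sketch: compares the target function's
    output against the dataset example's ground-truth answer."""
    match = outputs.get("output") == reference_outputs.get("answer")
    return {"key": "accuracy", "score": 1.0 if match else 0.0}

print(accuracy_evaluator({"output": "42"}, {"answer": "42"}))
# → {'key': 'accuracy', 'score': 1.0}
```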

Context manager

The client can be used as a context manager to ensure proper cleanup:
with Client() as client:
    client.create_run(...)
# Background threads are stopped and pending runs are flushed