LangSmith helps you debug, evaluate, and monitor your language models and intelligent agents. It works with any LLM application and includes native integrations with LangChain.

What is LangSmith SDK?

LangSmith SDK is a client library that connects your LLM applications to the LangSmith platform for observability, evaluation, and monitoring. Available for both Python and TypeScript, the SDK provides a lightweight way to trace your application’s execution, evaluate model performance, and manage datasets. Whether you’re building with LangChain, using OpenAI directly, or working with any other LLM framework, LangSmith SDK helps you understand what’s happening inside your application.

Key features

Tracing and observability

Automatically capture detailed traces of your LLM calls, chains, and agents with support for nested runs and streaming.

Evaluation framework

Evaluate model performance with custom evaluators, built-in metrics, and dataset-based testing.
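A custom evaluator is essentially a plain function that compares a model's output to a reference and returns a score. Here is a minimal, self-contained sketch of that shape; the names and exact signature are illustrative, not the SDK's precise API:

```python
# Illustrative custom evaluator: compare a model's answer to a
# reference answer and return a score dict. The exact signature
# LangSmith expects may differ; this shows the idea.
def exact_match(outputs: dict, reference_outputs: dict) -> dict:
    matched = outputs.get("answer") == reference_outputs.get("answer")
    return {"key": "exact_match", "score": int(matched)}

result = exact_match({"answer": "Paris"}, {"answer": "Paris"})
```

Evaluators like this can then be run across a whole dataset to produce aggregate metrics.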

Dataset management

Create, version, and manage test datasets from production runs or custom examples.

Native SDK wrappers

Drop-in wrappers for OpenAI, Anthropic, and Gemini SDKs that add tracing with zero code changes.

Test integrations

Integrate with Jest, Vitest, and Pytest to trace and evaluate your test suites.

Prompt caching

Cache and optimize prompt executions for faster iterations and reduced costs.

Data privacy

Anonymize sensitive data in traces with built-in PII detection and redaction.
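Redaction can also happen before any data leaves your process. The following is a self-contained regex sketch standing in for the SDK's built-in PII handling (it is not the SDK's implementation): mask email addresses in a payload before it would be attached to a trace.

```python
import re

# Illustrative redaction: replace email addresses with a placeholder
# in any string values of a payload dict. A stand-in for the SDK's
# built-in PII detection, not its actual implementation.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(payload: dict) -> dict:
    return {
        k: EMAIL_RE.sub("[REDACTED]", v) if isinstance(v, str) else v
        for k, v in payload.items()
    }

clean = redact({"question": "Email me at jane@example.com"})
```

The same pattern extends to phone numbers, names, or any field-level masking your application needs.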

OpenTelemetry support

Export traces using OpenTelemetry for integration with existing observability stacks.

How it works

LangSmith SDK captures execution traces by wrapping your code with decorators (Python) or wrapper functions (TypeScript). Each trace represents a "run" with inputs, outputs, timing, and metadata.

Python:

from langsmith import traceable

@traceable
def my_llm_function(user_input: str) -> str:
    # Your LLM call here
    return response

TypeScript:

import { traceable } from "langsmith/traceable";

const myLlmFunction = traceable(
  async (userInput: string) => {
    // Your LLM call here
    return response;
  },
  { name: "my_llm_function" }
);
Traces are automatically sent to LangSmith where you can view them in the web UI, analyze performance, and create datasets for evaluation.
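Conceptually, a tracing decorator records the inputs, output, and elapsed time around each wrapped call. The following simplified stand-in (not the real @traceable implementation, which also handles nesting, streaming, and uploading runs) shows the mechanism:

```python
import functools
import time

# Simplified stand-in for a tracing decorator: capture the name,
# inputs, output, and latency of each call into an in-memory list.
# The real @traceable sends these runs to the LangSmith platform.
RUNS = []

def traceable_sketch(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        RUNS.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "outputs": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traceable_sketch
def echo(text: str) -> str:
    return text.upper()

echo("hello")
```

Because the wrapper returns the function's result unchanged, tracing is transparent to callers, which is why decorating existing code requires no other changes.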

Use cases

See exactly what’s happening in your application with detailed traces showing inputs, outputs, latency, and token usage for every LLM call.
Run evaluations on datasets to measure accuracy, hallucination rates, and custom metrics across different prompts and models.
Track usage, latency, and errors in production with automatic tracing and real-time monitoring.
Compare different prompt variations side-by-side and measure their impact on output quality.
Create regression tests from production examples and integrate with your existing test framework.

Quick example

Here’s how simple it is to start tracing your OpenAI calls:
import openai
from langsmith.wrappers import wrap_openai

# Wrap your OpenAI client
client = wrap_openai(openai.Client())

# Use it exactly as before - tracing happens automatically
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
That’s it! Your traces are now visible in the LangSmith web UI.

Get started

Installation

Install the SDK for Python or TypeScript

Quickstart

Get tracing in 5 minutes

Core concepts

Learn about tracing, evaluation, and datasets

API reference

Explore the complete API

Community and support

LangSmith SDK is developed and maintained by LangChain, the company behind the LangChain framework.
