This integration connects Perplexity’s search-augmented models to LangChain.

Installation

pip install -U langchain-perplexity

Setup

Set your Perplexity API key as an environment variable:
export PPLX_API_KEY="your-api-key"
Get your API key from perplexity.ai.
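Alternatively, the key can be set from within Python before the model is constructed (a minimal sketch; `"your-api-key"` is a placeholder):

```python
import os

# Set the key for this process only if it isn't already in the environment.
os.environ.setdefault("PPLX_API_KEY", "your-api-key")
```

`setdefault` keeps an exported shell value intact, so the snippet is safe to leave in scripts.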

Usage

from langchain_perplexity import ChatPerplexity

model = ChatPerplexity(
    model="sonar",
    temperature=0.7,
    max_tokens=None,
)

messages = [
    ("system", "You are a helpful assistant."),
    ("human", "What are the latest developments in AI?"),
]

response = model.invoke(messages)
print(response.content)

Streaming

for chunk in model.stream(messages):
    print(chunk.content, end="")

API Reference

ChatPerplexity

model
str
default:"sonar"
Model name to use (e.g., sonar, sonar-pro, sonar-reasoning).
temperature
float
default:"0.7"
Sampling temperature. Controls randomness in generation.
max_tokens
int | None
default:"None"
Maximum number of tokens to generate.
timeout
float | tuple[float, float] | None
default:"None"
Timeout for requests to the Perplexity completion API.
max_retries
int
default:"6"
Maximum number of retries for failed requests.
api_key
SecretStr | None
default:"None"
Perplexity API key. Automatically read from PPLX_API_KEY environment variable if not provided.
streaming
bool
default:"False"
Whether to stream the results or not.
search_mode
'academic' | 'sec' | 'web' | None
default:"None"
Search mode for specialized content:
  • academic: Search academic papers and research
  • sec: Search SEC filings
  • web: General web search
reasoning_effort
'low' | 'medium' | 'high' | None
default:"None"
Reasoning effort level: low, medium, or high.
language_preference
str | None
default:"None"
Preferred language for responses.
search_domain_filter
list[str] | None
default:"None"
List of domains to filter search results (max 20 domains).
model_kwargs
dict
default:"{}"
Additional parameters for the underlying create call that are not explicitly specified above.

Supported Models

  • Sonar: Fast, search-augmented model for general queries
  • Sonar Pro: More capable model with enhanced search
  • Sonar Reasoning: Model with advanced reasoning capabilities
See docs.perplexity.ai/docs/model-cards for the latest models.

Features

  • Search-augmented generation (retrieves real-time information)
  • Specialized search modes (academic, SEC filings, web)
  • Citations and sources
  • Reasoning capabilities
  • Streaming
  • Async support
  • Domain filtering
Perplexity models automatically search the web to provide up-to-date, cited answers. This makes them ideal for queries requiring current information or factual verification.

Search Mode Example

Use specialized search modes for domain-specific queries:
# Search academic papers
model = ChatPerplexity(
    model="sonar-pro",
    search_mode="academic"
)

response = model.invoke([
    ("human", "What are recent breakthroughs in quantum computing?")
])

# Response will include citations from academic sources
print(response.content)

Domain Filtering

Filter search results to specific domains:
model = ChatPerplexity(
    model="sonar",
    search_domain_filter=["arxiv.org", "nature.com", "science.org"]
)

response = model.invoke([
    ("human", "Explain the latest findings on dark matter")
])
