Overview

Output parsers transform raw LLM text into structured data. They support both complete outputs and streaming partial outputs.

BaseOutputParser

Base class for parsing LLM outputs. Source: langchain_core.output_parsers.base:136 Inherits: BaseLLMOutputParser, RunnableSerializable[LanguageModelOutput, T]

Type Parameters

T
TypeVar
The output type after parsing

Properties

InputType
type
Input type for the parser (str | BaseMessage)
OutputType
type[T]
Output type after parsing (inferred from generic)

Core Methods

parse

def parse(self, text: str) -> T
Parse a string output from an LLM.
text
str
required
Text output from the LLM
return
T
Parsed output

parse_result

def parse_result(
    self,
    result: list[Generation],
    *,
    partial: bool = False
) -> T
Parse a list of LLM generation candidates.
result
list[Generation]
required
List of generation candidates from the LLM
partial
bool
default:"False"
Whether to parse partial results (useful for streaming)
return
T
Parsed output

aparse

async def aparse(self, text: str) -> T
Async version of parse.

aparse_result

async def aparse_result(
    self,
    result: list[Generation],
    *,
    partial: bool = False
) -> T
Async version of parse_result.

invoke

def invoke(
    self,
    input: str | BaseMessage,
    config: RunnableConfig | None = None,
    **kwargs: Any
) -> T
Parse output via the Runnable interface.

get_format_instructions

def get_format_instructions(self) -> str
Get instructions for how the LLM should format its output.
return
str
Format instructions to include in the prompt

JsonOutputParser

Parse LLM output as JSON. Source: langchain_core.output_parsers.json:31 Inherits: BaseCumulativeTransformOutputParser[Any]

Properties

pydantic_object
type[BaseModel] | None
default:"None"
Optional Pydantic model for validation

Methods

parse_result

def parse_result(
    self,
    result: list[Generation],
    *,
    partial: bool = False
) -> Any
Parse result as JSON.
partial
bool
default:"False"
If True, parse partial JSON (useful for streaming)
return
Any
Parsed JSON object (dict or list)

parse

def parse(self, text: str) -> Any
Parse text as JSON.

Example

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline")

parser = JsonOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Tell me a joke.\n{format_instructions}",
    input_variables=[],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

chain = prompt | model | parser
result = chain.invoke({})  # {"setup": "...", "punchline": "..."}

PydanticOutputParser

Parse LLM output into a Pydantic model. Source: langchain_core.output_parsers.pydantic Inherits: BaseOutputParser[TBaseModel]

Type Parameters

TBaseModel
TypeVar
Pydantic model type to parse into

Properties

pydantic_object
type[TBaseModel]
required
Pydantic model class to parse into

Constructor

def __init__(self, pydantic_object: type[TBaseModel])
pydantic_object
type[TBaseModel]
required
Pydantic model class

Methods

parse

def parse(self, text: str) -> TBaseModel
Parse text into a Pydantic model instance.
return
TBaseModel
Validated Pydantic model instance

Example

from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(description="Person's name")
    age: int = Field(description="Person's age")

parser = PydanticOutputParser(pydantic_object=Person)

chain = prompt | model | parser
result = chain.invoke({...})  # Person(name="...", age=...)

StrOutputParser

Simple parser that returns the text output as a string. Source: langchain_core.output_parsers.string Inherits: BaseOutputParser[str]

Methods

parse

def parse(self, text: str) -> str
Return the text unchanged.

Example

from langchain_core.output_parsers import StrOutputParser

chain = prompt | model | StrOutputParser()
result = chain.invoke({...})  # "string output"

CommaSeparatedListOutputParser

Parse output into a comma-separated list. Source: langchain_core.output_parsers.list Inherits: BaseOutputParser[list[str]]

Methods

parse

def parse(self, text: str) -> list[str]
Split the text on commas and return a list of whitespace-stripped strings.

Example

from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
parser.parse("apple, banana, orange")  # ["apple", "banana", "orange"]

XMLOutputParser

Parse XML output into a dictionary. Source: langchain_core.output_parsers.xml Inherits: BaseOutputParser[dict]

Properties

tags
list[str] | None
default:"None"
Optional list of tags to parse. If None, parses all tags.

Methods

parse

def parse(self, text: str) -> dict
Parse XML text into a nested dictionary. Child elements are returned as a list of single-key dictionaries.

Example

from langchain_core.output_parsers import XMLOutputParser

parser = XMLOutputParser()
parser.parse(
    "<person><name>John</name><age>30</age></person>"
)
# {"person": [{"name": "John"}, {"age": "30"}]}

JsonOutputToolsParser

Parse all tool calls from an OpenAI-format message as JSON. Source: langchain_core.output_parsers.openai_tools Inherits: BaseGenerationOutputParser

Methods

parse_result

def parse_result(
    self,
    result: list[Generation],
    *,
    partial: bool = False
) -> list[dict[str, Any]]
Extract tool calls from generation result.
return
list[dict[str, Any]]
List of tool call dictionaries

JsonOutputKeyToolsParser

Extract specific tools from OpenAI tool calls and parse as JSON. Source: langchain_core.output_parsers.openai_tools Inherits: BaseGenerationOutputParser

Properties

key_name
str
required
Name of the tool to extract
first_tool_only
bool
default:"False"
Whether to return only the first matching tool call

Example

from langchain_core.output_parsers.openai_tools import JsonOutputKeyToolsParser

parser = JsonOutputKeyToolsParser(key_name="search", first_tool_only=True)

chain = model.bind_tools([search_tool]) | parser
result = chain.invoke("search for cats")  # {"query": "cats"}

PydanticToolsParser

Parse tool calls into Pydantic models. Source: langchain_core.output_parsers.openai_tools Inherits: BaseGenerationOutputParser

Properties

tools
list[type[BaseModel]]
required
List of Pydantic model classes corresponding to tools
first_tool_only
bool
default:"False"
Whether to return only the first tool call

Example

from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from pydantic import BaseModel, Field

class SearchArgs(BaseModel):
    query: str = Field(description="Search query")

parser = PydanticToolsParser(tools=[SearchArgs], first_tool_only=True)

chain = model.bind_tools([SearchArgs]) | parser
result = chain.invoke("search for cats")  # SearchArgs(query="cats")

BaseCumulativeTransformOutputParser

Base class for parsers that support streaming with partial outputs. Source: langchain_core.output_parsers.transform Inherits: BaseOutputParser[T]

Methods

transform

def transform(
    self,
    input: Iterator[str | BaseMessage],
    config: RunnableConfig | None = None,
    **kwargs: Any
) -> Iterator[T]
Transform an iterator of inputs into an iterator of outputs.

atransform

async def atransform(
    self,
    input: AsyncIterator[str | BaseMessage],
    config: RunnableConfig | None = None,
    **kwargs: Any
) -> AsyncIterator[T]
Async version of transform.

OutputParserException

Exception raised when output parsing fails. Source: langchain_core.exceptions Inherits: ValueError

Properties

llm_output
str | None
The output from the LLM that failed to parse
observation
str | None
Optional observation about why parsing failed
send_to_llm
bool
Whether to send the error back to the LLM for correction