Overview
Messages are the inputs and outputs of chat models. LangChain provides several message types representing different roles in a conversation.
BaseMessage
Base abstract class for all messages.
Source: langchain_core.messages.base:93
Inherits: Serializable
Properties
content
str | list[str | dict]
required
The contents of the message. Can be:
- A string for simple text
- A list of content blocks (text, images, audio, etc.)
type
str
required
The type of message (e.g., "human", "ai", "system"). Used for deserialization.
additional_kwargs
dict
default:"{}"
Additional payload data. Used for provider-specific information like tool calls.
response_metadata
dict
default:"{}"
Response metadata such as headers, token counts, model name.
name
str | None
default:"None"
Optional name for the message.
id
str | None
default:"None"
Optional unique identifier for the message.
Computed Properties
content_blocks
Normalized list of content blocks. Lazily parses content into standard format.
text
Text content extracted from all text blocks in the message.
Constructor
def __init__(
self,
content: str | list[str | dict] | None = None,
content_blocks: list[ContentBlock] | None = None,
**kwargs: Any
)
content
str | list[str | dict] | None
Message content
content_blocks
list[ContentBlock] | None
Typed standard content blocks (alternative to content)
**kwargs
Any
Additional fields like name, id, additional_kwargs
HumanMessage
Message from a human user.
Source: langchain_core.messages.human
Inherits: BaseMessage
Properties
type
Literal['human']
default:"'human'"
Message type identifier
Example
from langchain_core.messages import HumanMessage
msg = HumanMessage(content="Hello, AI!")
# or with content blocks
msg = HumanMessage(
content=[
{"type": "text", "text": "What's in this image?"},
{"type": "image", "source": {"type": "url", "url": "https://..."}}
]
)
AIMessage
Message from an AI assistant.
Source: langchain_core.messages.ai
Inherits: BaseMessage
Properties
type
Literal['ai']
default:"'ai'"
Message type identifier
tool_calls
list[ToolCall]
default:"[]"
Tool calls made by the AI
invalid_tool_calls
list[InvalidToolCall]
default:"[]"
Tool calls that failed to parse
usage_metadata
UsageMetadata | None
default:"None"
Token usage information
Example
from langchain_core.messages import AIMessage
msg = AIMessage(content="Hello! How can I help you?")
# With tool calls
msg = AIMessage(
content="",
tool_calls=[
{
"name": "search",
"args": {"query": "weather"},
"id": "call_123"
}
]
)
SystemMessage
System message providing instructions or context.
Source: langchain_core.messages.system
Inherits: BaseMessage
Properties
type
Literal['system']
default:"'system'"
Message type identifier
Example
from langchain_core.messages import SystemMessage
msg = SystemMessage(content="You are a helpful assistant.")
ChatMessage
Generic message with arbitrary role.
Source: langchain_core.messages.chat
Inherits: BaseMessage
Properties
type
Literal['chat']
default:"'chat'"
Message type identifier
role
str
required
The role of the message sender
Example
from langchain_core.messages import ChatMessage
msg = ChatMessage(content="Custom message", role="moderator")
FunctionMessage
Message representing a function response (deprecated in favor of ToolMessage).
Source: langchain_core.messages.function
Inherits: BaseMessage
Properties
type
Literal['function']
default:"'function'"
Message type identifier
name
str
required
Name of the function that was called
ToolMessage
Message representing the result of a tool call.
Source: langchain_core.messages.tool
Inherits: BaseMessage
Properties
type
Literal['tool']
default:"'tool'"
Message type identifier
tool_call_id
str
required
ID of the tool call this message is responding to
status
Literal['success', 'error'] | None
default:"'success'"
Status of the tool execution
Example
from langchain_core.messages import ToolMessage
msg = ToolMessage(
content="Search results: ...",
tool_call_id="call_123"
)
Message Chunks
Chunk versions of messages used for streaming.
AIMessageChunk
Source: langchain_core.messages.ai
Streaming chunk of an AI message.
from langchain_core.messages import AIMessageChunk
chunk = AIMessageChunk(content="Hello")
chunk2 = AIMessageChunk(content=" world")
combined = chunk + chunk2 # AIMessageChunk(content="Hello world")
HumanMessageChunk
Source: langchain_core.messages.human
Streaming chunk of a human message.
SystemMessageChunk
Source: langchain_core.messages.system
Streaming chunk of a system message.
Content Blocks
Standard content block types for multimodal messages.
TextContentBlock
class TextContentBlock(TypedDict):
type: Literal["text"]
text: str
Plain text content.
ImageContentBlock
class ImageContentBlock(TypedDict, total=False):
type: Literal["image"]
source: dict # url, base64, or file_id
detail: str # "auto", "low", "high"
Image content with various source types.
AudioContentBlock
class AudioContentBlock(TypedDict):
type: Literal["audio"]
source: dict # base64 or url
Audio content.
VideoContentBlock
class VideoContentBlock(TypedDict):
type: Literal["video"]
source: dict # url or base64
Video content.
FileContentBlock
class FileContentBlock(TypedDict):
type: Literal["file"]
source: dict # url, base64, or file_id
Generic file content.
ToolCall
class ToolCall(TypedDict):
name: str
args: dict[str, Any]
id: str | None
type: Literal["tool_call"] # optional
Represents a tool call made by the AI.
name
Name of the tool to call
args
Arguments to pass to the tool
id
Unique identifier for this tool call
InvalidToolCall
class InvalidToolCall(TypedDict):
name: str | None
args: str | None
id: str | None
error: str | None
type: Literal["invalid_tool_call"] # optional
Represents a tool call that failed to parse.
UsageMetadata
class UsageMetadata(TypedDict):
input_tokens: int
output_tokens: int
total_tokens: int
input_token_details: InputTokenDetails # optional
output_token_details: OutputTokenDetails # optional
Token usage information for a message.
total_tokens
Total token count (input + output)
InputTokenDetails
class InputTokenDetails(TypedDict, total=False):
audio: int
cache_creation: int
cache_read: int
Breakdown of input token counts.
OutputTokenDetails
class OutputTokenDetails(TypedDict, total=False):
audio: int
reasoning: int
Breakdown of output token counts.
Utility Functions
convert_to_messages
def convert_to_messages(
messages: Sequence[MessageLikeRepresentation]
) -> list[BaseMessage]
Convert message-like objects to BaseMessage instances.
Messages to convert. Can be:
- BaseMessage instances
- Tuples like ("human", "hello")
- Dicts with role and content keys
message_chunk_to_message
def message_chunk_to_message(chunk: BaseMessageChunk) -> BaseMessage
Convert a message chunk to a complete message.
trim_messages
def trim_messages(
messages: Sequence[BaseMessage],
*,
max_tokens: int | None = None,
token_counter: Callable[[list[BaseMessage]], int] | None = None,
strategy: Literal["first", "last"] = "last",
allow_partial: bool = False
) -> list[BaseMessage]
Trim messages to fit within token limits.
filter_messages
def filter_messages(
messages: Sequence[BaseMessage],
*,
include_types: Sequence[type[BaseMessage] | str] | None = None,
exclude_types: Sequence[type[BaseMessage] | str] | None = None,
include_names: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
include_ids: Sequence[str] | None = None,
exclude_ids: Sequence[str] | None = None
) -> list[BaseMessage]
Filter messages by type, name, or ID.
merge_message_runs
def merge_message_runs(
messages: Sequence[BaseMessage],
chunk_separator: str = "\n\n"
) -> list[BaseMessage]
Merge consecutive messages of the same type.