Get started with LangChain by building a simple chat application that responds to user queries.

## Prerequisites

- Python 3.10 or higher
- An API key from a model provider (OpenAI, Anthropic, etc.)
## Installation

Install LangChain together with the integration package for your model provider. For OpenAI:

```bash
pip install langchain langchain-openai
```

For Anthropic, install `langchain-anthropic` instead; for Ollama (local), install `langchain-ollama`.
## Set up your API key

Set your API key as an environment variable:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

For Ollama, you don't need an API key. Just make sure Ollama is running locally:

```bash
ollama serve
```
## Your first chat application

Create a simple chat application that responds to user messages.

**OpenAI:**

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize the model
model = ChatOpenAI(model="gpt-4o-mini")

# Create messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]

# Get response
response = model.invoke(messages)
print(response.content)
```

**Anthropic:**

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize the model
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Create messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]

# Get response
response = model.invoke(messages)
print(response.content)
```

**Ollama (local):**

```python
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize the model (requires Ollama running locally)
model = ChatOllama(model="llama3")

# Create messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
]

# Get response
response = model.invoke(messages)
print(response.content)
```
Save the code to a file and run it with the `python` command.
## Using prompt templates

Make your prompts reusable with templates:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}"),
])

# Initialize model
model = ChatOpenAI(model="gpt-4o-mini")

# Chain prompt and model
chain = prompt | model

# Invoke with inputs
response = chain.invoke({
    "input_language": "English",
    "output_language": "French",
    "text": "Hello, how are you?",
})
print(response.content)
```
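Template variables like `{text}` are filled in much as Python's own `str.format` fills placeholders. A minimal sketch of the idea in plain Python (the `render_messages` helper below is hypothetical, not part of LangChain):

```python
# A tiny stand-in for prompt templating: substitute named
# placeholders into (role, text) message templates.
def render_messages(templates, values):
    return [(role, text.format(**values)) for role, text in templates]

templates = [
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}"),
]

messages = render_messages(templates, {
    "input_language": "English",
    "output_language": "French",
    "text": "Hello, how are you?",
})
print(messages[1])  # ('human', 'Hello, how are you?')
```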
## Chaining components

LangChain uses the `|` operator to chain components together:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Define components
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])
model = ChatOpenAI(model="gpt-4o-mini")
output_parser = StrOutputParser()

# Chain them together
chain = prompt | model | output_parser

# Invoke returns a string directly
result = chain.invoke({"input": "Tell me a joke about programming"})
print(result)
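Under the hood, `|` builds a pipeline in which each component's output becomes the next component's input. A rough sketch of the idea in plain Python (this simplified `Runnable` is illustrative, not LangChain's actual implementation):

```python
class Runnable:
    """A step in a pipeline that can be composed with |."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # Compose: run self first, then feed its output to other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

to_upper = Runnable(str.upper)
exclaim = Runnable(lambda s: s + "!")

chain = to_upper | exclaim
print(chain.invoke("hello"))  # HELLO!
```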
## Streaming responses

Stream responses token by token for a better user experience:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model

# Stream the response
for chunk in chain.stream({"input": "Write a short poem about coding"}):
    print(chunk.content, end="", flush=True)
```
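The streaming pattern is ordinary Python iteration over a generator of chunks; printing each chunk with `end=""` as it arrives is what makes output appear incrementally. A self-contained sketch (the `fake_stream` generator is a hypothetical stand-in for `chain.stream()`):

```python
def fake_stream(text, chunk_size=4):
    # Yield the response a few characters at a time,
    # mimicking how chain.stream() yields chunks.
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

collected = []
for chunk in fake_stream("Streaming keeps the UI responsive."):
    collected.append(chunk)
    print(chunk, end="", flush=True)
print()
```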
## Agents

Create an agent that can use tools to answer questions:

```python
from langchain.agents import create_agent
from langchain_core.tools import tool

# Define a custom tool
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Create agent with tools
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[multiply, add],
)

# Invoke the agent
response = agent.invoke({
    "messages": [{"role": "user", "content": "What is 25 * 4 + 10?"}]
})
print(response["messages"][-1].content)
```

Agents use tools to answer questions that require computation or external data access.
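Conceptually, the agent loop asks the model which tool to call, dispatches to that tool, and feeds the result back until the model can answer. A stripped-down sketch of the dispatch step in plain Python (the registry and call format below are illustrative, not LangChain's API):

```python
# Map tool names to plain Python callables, a simplified
# version of what the @tool decorator sets up.
tools = {
    "multiply": lambda a, b: a * b,
    "add": lambda a, b: a + b,
}

def dispatch(tool_call):
    # A tool call names a tool and supplies keyword arguments.
    return tools[tool_call["name"]](**tool_call["args"])

# For "What is 25 * 4 + 10?", the model might emit these calls:
product = dispatch({"name": "multiply", "args": {"a": 25, "b": 4}})
total = dispatch({"name": "add", "args": {"a": product, "b": 10}})
print(total)  # 110
```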
## Retrieval augmented generation (RAG)

Build a RAG system that answers questions based on your documents:

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.documents import Document
from langchain_chroma import Chroma

# Sample documents
docs = [
    Document(page_content="LangChain is a framework for building LLM applications."),
    Document(page_content="LangChain provides integrations with 100+ model providers."),
    Document(page_content="Agents can use tools to perform actions and gather information."),
]

# Create vector store
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
)

# Create retriever
retriever = vectorstore.as_retriever()

# Create RAG chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer based on this context: {context}"),
    ("human", "{question}"),
])
model = ChatOpenAI(model="gpt-4o-mini")

# Simple RAG function
def rag_chain(question: str) -> str:
    # Retrieve relevant docs
    docs = retriever.invoke(question)
    context = "\n".join(doc.page_content for doc in docs)
    # Generate answer
    chain = prompt | model
    response = chain.invoke({"context": context, "question": question})
    return response.content

# Ask a question
answer = rag_chain("What is LangChain?")
print(answer)
```
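At its core, the retriever embeds the query and returns the documents whose embeddings are most similar, typically by cosine similarity. A toy sketch with hand-made vectors (the vectors are invented for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Pretend embeddings for three documents and a query.
doc_vectors = {
    "LangChain is a framework for building LLM applications.": [0.9, 0.1, 0.0],
    "LangChain provides integrations with 100+ model providers.": [0.7, 0.3, 0.1],
    "Agents can use tools to perform actions.": [0.1, 0.2, 0.9],
}
query_vector = [0.8, 0.2, 0.0]  # pretend embedding of "What is LangChain?"

# Rank documents by similarity to the query, highest first.
ranked = sorted(doc_vectors, key=lambda d: cosine(query_vector, doc_vectors[d]), reverse=True)
print(ranked[0])
```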
## Next steps

- **Core concepts**: learn about the framework architecture and key abstractions
- **Building agents**: build more sophisticated agents with custom tools
- **Chat models**: explore different chat model providers and features
- **Retrieval (RAG)**: build advanced RAG applications
## Common patterns

Handle API errors gracefully:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

model = ChatOpenAI(model="gpt-4o-mini")

try:
    response = model.invoke([HumanMessage(content="Hello")])
    print(response.content)
except Exception as e:
    print(f"Error: {e}")
```
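For transient failures such as rate limits, retrying with exponential backoff is a common companion to the try/except above. A minimal generic sketch (`flaky_call` is a hypothetical stand-in for a model invocation):

```python
import time

def with_retries(func, attempts=3, base_delay=0.01):
    # Retry func with exponential backoff; re-raise after the last attempt.
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

calls = {"count": 0}

def flaky_call():
    # Fails twice, then succeeds, simulating transient API errors.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retries(flaky_call))  # ok
```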
Use async for concurrent operations:

```python
import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

model = ChatOpenAI(model="gpt-4o-mini")

async def get_response(question: str):
    response = await model.ainvoke([HumanMessage(content=question)])
    return response.content

# Run multiple queries concurrently
async def main():
    questions = ["What is AI?", "What is ML?", "What is an LLM?"]
    responses = await asyncio.gather(*[get_response(q) for q in questions])
    for q, r in zip(questions, responses):
        print(f"Q: {q}\nA: {r}\n")

asyncio.run(main())
```
Configure model parameters:

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0.7,  # Sampling temperature; higher values are more random
    max_tokens=500,   # Maximum response length in tokens
    timeout=30,       # Request timeout in seconds
)
```
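Temperature rescales the model's token probabilities before sampling: low values sharpen the distribution toward the most likely token, high values flatten it. A small numerical illustration of that effect:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature, then normalize with softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform

# The top token's probability shrinks as temperature rises.
print(round(cold[0], 3), round(hot[0], 3))
```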