
Overview

LangChain is a comprehensive framework for building LLM-powered applications, while LangGraph extends it with stateful, graph-based agent orchestration. Together, they enable complex reasoning patterns like ReAct agents with tool use and conditional workflows.

When to Use LangChain/LangGraph

  • Graph-Based Workflows: Need conditional branching and complex control flow
  • ReAct Agents: Reasoning and acting agents with iterative tool use
  • Stateful Applications: Maintain conversation state across interactions
  • RAG Applications: Building retrieval-augmented generation systems
  • Multi-Agent Systems: Orchestrating multiple agents with complex interactions

Installation

pip install langchain
pip install langgraph
pip install langchain-community
pip install langchain-openai  # Or your model provider

Core Concepts

State Management

LangGraph uses typed state to manage data flow through the graph:
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    """State maintains conversation history and step counter."""
    messages: Annotated[list, add_messages]  # Automatically merges messages
    steps: int  # Track iteration count
The add_messages reducer automatically handles message list updates, merging new messages with existing ones.
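To build intuition for what a reducer does, here is a toy, stdlib-only sketch. It is not LangGraph's actual implementation of add_messages, but it illustrates the contract: instead of overwriting the channel, new values are merged with the existing ones (appending new messages, replacing messages that share an id).

```python
def add_messages_toy(existing: list, update: list) -> list:
    """Toy reducer: append new messages, replace any with a matching id."""
    by_id = {m.get("id"): i for i, m in enumerate(existing) if m.get("id")}
    merged = list(existing)
    for msg in update:
        mid = msg.get("id")
        if mid is not None and mid in by_id:
            merged[by_id[mid]] = msg  # replace in place
        else:
            merged.append(msg)
    return merged

history = [{"id": "1", "role": "user", "content": "Hi"}]
new = [{"id": "2", "role": "assistant", "content": "Hello!"}]
print(len(add_messages_toy(history, new)))  # 2
```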

Nodes

Nodes encapsulate agent logic and modify the state:
def agent_node(state: AgentState) -> AgentState:
    """Node that calls LLM and may use tools."""
    messages = state["messages"]
    steps = state["steps"] + 1
    
    # Call LLM
    response = llm.invoke(messages)
    
    # Update state
    return {
        "messages": [response],
        "steps": steps
    }
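The example above depends on a live `llm`, so here is a self-contained sketch with a stubbed model call (the stub is an assumption, not part of the original). It demonstrates the key contract: a node reads state and returns a dict of updated keys rather than mutating state in place.

```python
def fake_llm_invoke(messages):
    """Stand-in for llm.invoke() so the sketch runs without a model."""
    return {"role": "assistant", "content": "Echo: " + messages[-1]["content"]}

def agent_node(state: dict) -> dict:
    """Node contract: read state, return a dict of state updates."""
    response = fake_llm_invoke(state["messages"])
    # With the add_messages reducer you would return just [response];
    # this sketch has no reducer, so it merges the list manually.
    return {"messages": state["messages"] + [response],
            "steps": state["steps"] + 1}

state = {"messages": [{"role": "user", "content": "hi"}], "steps": 0}
update = agent_node(state)
print(update["steps"])  # 1
```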

Edges & Conditional Routing

Edges define the flow between nodes, with support for conditional logic:
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

# Entry point: execution starts at the agent node
workflow.set_entry_point("agent")

# Fixed edges: always go to tools after agent, then finish
workflow.add_edge("agent", "tools")
workflow.add_edge("tools", END)

Models

LangChain supports multiple model providers:
from langchain_openai import ChatOpenAI
from pydantic import SecretStr
import os

llm = ChatOpenAI(
    model="NousResearch/Hermes-4-70B",
    api_key=SecretStr(os.getenv("NEBIUS_API_KEY")),
    base_url="https://api.tokenfactory.nebius.com/v1",
    temperature=0.7,
)
Source: advance_ai_agents/smart_gtm_agent/app/agents.py:9-25

Common Patterns

Pattern 1: ReAct Agent with LangGraph

ReAct (Reasoning and Acting) agents iteratively think, use tools, and act on observations:
Step 1: Define State

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    steps: int
Step 2: Create Tools

from langchain.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # API call here
    return f"Weather in {location}: 72°F, Sunny"

tools = [get_weather]
llm_with_tools = llm.bind_tools(tools)
Step 3: Build Graph Nodes

from langgraph.prebuilt import ToolNode

def agent_node(state: AgentState):
    """Reasoning node: LLM decides next action."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {
        "messages": [response],
        "steps": state["steps"] + 1
    }

# ToolNode automatically handles tool execution
tool_node = ToolNode(tools)
Step 4: Assemble Graph

from langgraph.graph import StateGraph, END

def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return "end"

workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", "end": END}
)
workflow.add_edge("tools", "agent")  # Loop back

app = workflow.compile()
Step 5: Run the Agent

initial_state = {
    "messages": [{"role": "user", "content": "What's the weather in NYC?"}],
    "steps": 0
}

result = app.invoke(initial_state)
print(result["messages"][-1].content)
Based on: starter_ai_agents/langchain_langgraph_starter/README.md
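Under the hood, the compiled graph executes a loop equivalent to this plain-Python sketch. The node and routing functions here are simplified stand-ins (no real model or tools), but the control flow matches the graph above: reason, check for tool calls, act, and loop until the model answers without requesting a tool.

```python
def run_react_loop(state, agent_node, tool_node, should_continue, max_steps=10):
    """Simplified stand-in for app.invoke(): agent -> (tools -> agent)* -> end."""
    while state["steps"] < max_steps:
        state = {**state, **agent_node(state)}   # reasoning step
        if should_continue(state) == "end":
            return state
        state = {**state, **tool_node(state)}    # act on the tool call
    return state

def agent_node(state):
    wants_tool = state["steps"] == 0  # pretend: tool call first, then answer
    msg = {"role": "assistant",
           "tool_calls": ["get_weather"] if wants_tool else []}
    return {"messages": state["messages"] + [msg], "steps": state["steps"] + 1}

def tool_node(state):
    obs = {"role": "tool", "content": "72F, Sunny"}
    return {"messages": state["messages"] + [obs]}

def should_continue(state):
    return "tools" if state["messages"][-1].get("tool_calls") else "end"

result = run_react_loop(
    {"messages": [{"role": "user", "content": "Weather in NYC?"}], "steps": 0},
    agent_node, tool_node, should_continue)
print(result["steps"])          # 2
print(len(result["messages"]))  # 4: user, tool call, observation, answer
```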

Pattern 2: Prebuilt ReAct Agent

For simpler cases, use the prebuilt ReAct agent:
from langgraph.prebuilt import create_react_agent
from langchain_nebius import ChatNebius
from langchain.tools import tool
import os

llm = ChatNebius(
    model="NousResearch/Hermes-4-70B",
    api_key=os.getenv("NEBIUS_API_KEY")
)

@tool
def company_research_tool(url: str) -> str:
    """Research a company from their website URL."""
    # Scraping logic here
    return "Company data..."

# Create agent in one line
agent = create_react_agent(
    model=llm,
    tools=[company_research_tool],
    prompt="You are a professional research assistant."
)

# Use the agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "Research company.com"}]
})
Source: advance_ai_agents/smart_gtm_agent/app/agents.py:343-364

Pattern 3: Multiple Specialized Agents

Create different agents for different tasks:
from langgraph.prebuilt import create_react_agent

# Research agent with web scraping tools
research_agent = create_react_agent(
    model=llm,
    tools=[web_scraper_tool],
    prompt="""
    You are a research assistant.
    Gather comprehensive company insights.
    Include: overview, funding, industry, competitors.
    """
)

# GTM Strategy agent
gtm_agent = create_react_agent(
    model=llm,
    tools=[market_data_tool],
    prompt="""
    You are a Go-To-Market strategist.
    Create actionable GTM playbooks with:
    - Target market analysis
    - ICP definition
    - Pricing strategy
    - Distribution channels
    """
)

# Channel Strategy agent
channel_agent = create_react_agent(
    model=llm,
    tools=[competitor_analysis_tool],
    prompt="""
    You are a distribution channel expert.
    Recommend optimal channels:
    - Digital: SEO, paid ads, marketplaces
    - Partnerships: distributors, affiliates
    - Emerging: communities, co-marketing
    """
)

# Use agents for different queries, passing each result's final message forward
research = research_agent.invoke({"messages": [{"role": "user", "content": url}]})
research_text = research["messages"][-1].content

gtm = gtm_agent.invoke({"messages": [{"role": "user", "content": research_text}]})
gtm_text = gtm["messages"][-1].content

channels = channel_agent.invoke({"messages": [{"role": "user", "content": gtm_text}]})
Based on: advance_ai_agents/smart_gtm_agent/app/agents.py:343-406

Pattern 4: RAG with LangChain + Qdrant

Build retrieval-augmented generation systems:
from langchain_qdrant import QdrantVectorStore
from langchain_community.embeddings import HuggingFaceEmbeddings
from qdrant_client import QdrantClient
import os

# Initialize vector store
qdrant_client = QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY")
)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

vector_store = QdrantVectorStore(
    client=qdrant_client,
    collection_name="documents",
    embedding=embeddings
)

# Create retrieval tool
from langchain.tools import tool

@tool
def search_documents(query: str) -> str:
    """Search the knowledge base for relevant information."""
    docs = vector_store.similarity_search(query, k=3)
    return "\n\n".join([doc.page_content for doc in docs])

# Use with ReAct agent
agent = create_react_agent(
    model=llm,
    tools=[search_documents],
    prompt="Use the knowledge base to answer questions accurately."
)
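Conceptually, similarity_search embeds the query and returns the k documents whose vectors are closest to it. This stdlib-only sketch (not the Qdrant implementation; vectors and texts are made up) shows the idea using cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_search(query_vec, docs, k=3):
    """docs: list of (vector, text). Returns top-k texts by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

docs = [([1.0, 0.0], "pricing page"),
        ([0.9, 0.1], "pricing FAQ"),
        ([0.0, 1.0], "careers page")]
print(similarity_search([1.0, 0.05], docs, k=2))  # pricing page, pricing FAQ
```

A real vector store does the same ranking over millions of vectors with an approximate-nearest-neighbor index instead of a full sort.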

Real Examples from Repository

  • LangGraph Starter: ReAct agent implementation from scratch with LangGraph
  • Agentic RAG: RAG system with CrewAI agents, Qdrant, and web search
  • Job Search Agent: Memory-enabled job search with LangChain workflow
  • Advanced Examples: More advanced LangChain examples

Configuration

Environment Variables

# .env file
NEBIUS_API_KEY=your_nebius_api_key
OPENAI_API_KEY=your_openai_api_key  # If using OpenAI

# For RAG applications
QDRANT_URL=your_qdrant_url
QDRANT_API_KEY=your_qdrant_api_key

State Configuration

  • messages (Annotated[list, add_messages]): Conversation history with automatic merging
  • steps (int): Iteration counter for tracking agent steps
  • custom_fields (any): Add any custom state fields your workflow needs
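Putting those fields together, a custom state schema might look like the following. The field names and the append_list reducer are illustrative (append_list stands in for add_messages so the snippet runs without langgraph):

```python
from typing import Annotated, TypedDict

def append_list(existing: list, update: list) -> list:
    """Illustrative reducer: concatenate updates onto the existing list."""
    return existing + update

class WorkflowState(TypedDict):
    messages: Annotated[list, append_list]  # stand-in for add_messages
    steps: int
    retrieved_docs: list   # hypothetical custom field
    final_answer: str      # hypothetical custom field

state: WorkflowState = {"messages": [], "steps": 0,
                        "retrieved_docs": [], "final_answer": ""}
print(state["steps"])  # 0
```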

Best Practices

Always use the add_messages reducer for message lists:
from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]  # ✓ Correct
    # messages: list  # ✗ Don't do this
This automatically handles message merging and updates.

LLMs read tool docstrings to understand when to use them:
@tool
def search_database(query: str, limit: int = 10) -> list:
    """
    Search the product database.
    
    Args:
        query: Search keywords
        limit: Max results to return (default 10)
    
    Returns:
        List of matching products
    """
    ...

Prevent infinite loops with max iterations:
def should_continue(state: AgentState):
    if state["steps"] > 10:  # Max 10 iterations
        return "end"
    if state["messages"][-1].tool_calls:
        return "tools"
    return "end"

Start with create_react_agent for standard workflows:
# Prebuilt is simpler
agent = create_react_agent(model=llm, tools=tools)

# Only build custom graphs when you need:
# - Complex conditional routing
# - Custom state management
# - Multiple interconnected agents

Troubleshooting

If the agent keeps calling tools repeatedly:
  • Add a step counter to state
  • Check the should_continue logic
  • Add a max iteration limit
  • Verify tool outputs are useful

def should_continue(state):
    if state["steps"] > MAX_STEPS:
        return "end"
    ...

If state isn't updating, ensure nodes return a dict of updates:
def node(state):
    # ✓ Correct: return dict with updates
    return {"messages": [new_message], "steps": state["steps"] + 1}
    
    # ✗ Wrong: modifying state directly
    state["messages"].append(new_message)  # Don't do this

If tools aren't being called:
  • Verify tools are bound: llm_with_tools = llm.bind_tools(tools)
  • Check tool docstrings are clear
  • Use ToolNode for automatic execution
  • Review conditional edge logic

If imports fail, note that LangChain has many optional dependencies:
# Install what you need
pip install langchain-openai  # For OpenAI/compatible APIs
pip install langchain-community  # For HuggingFace, etc.
pip install langchain-qdrant  # For Qdrant vector store

Next Steps

Build ReAct Agents

Start with the LangGraph starter example to understand reasoning and acting patterns

Add RAG Capabilities

Integrate vector stores like Qdrant for knowledge retrieval

Multi-Agent Orchestration

Create specialized agents for research, analysis, and synthesis

Custom Workflows

Build complex graphs with conditional routing and custom state
