LangChain is a comprehensive framework for building LLM-powered applications, while LangGraph extends it with stateful, graph-based agent orchestration. Together, they enable complex reasoning patterns like ReAct agents with tool use and conditional workflows.
Edges define the flow between nodes, with support for conditional logic:
Fixed Edges

```python
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)

# Fixed edge: always go to tools after agent
workflow.add_edge("agent", "tools")
workflow.add_edge("tools", END)
```

Conditional Edges

```python
def should_continue(state: AgentState) -> str:
    """Decide next node based on state."""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM called tools, route to the tools node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, end
    return "end"

workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "tools": "tools",
        "end": END,
    },
)
```
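The control flow above can be exercised without any LLM: treat nodes as plain functions that return state updates, and edges as either a fixed next-node name or a router function. A minimal stdlib-only sketch (the `run_graph` interpreter and the fake messages are hypothetical illustrations, not LangGraph APIs):

```python
from types import SimpleNamespace

END = "__end__"

def run_graph(nodes, edges, state, entry="agent", max_steps=10):
    """Tiny interpreter: run a node, merge its updates, then follow its edge.
    edges[name] is a fixed next-node name, END, or a router function of state."""
    current = entry
    for _ in range(max_steps):
        state = {**state, **nodes[current](state)}
        nxt = edges[current]
        if callable(nxt):
            nxt = nxt(state)
        if nxt == END:
            break
        current = nxt
    return state

# Fake agent: requests a tool on the first turn, then answers
def agent_node(state):
    calls = [{"name": "get_weather"}] if state["steps"] == 0 else []
    msg = SimpleNamespace(tool_calls=calls, content="" if calls else "It is sunny.")
    return {"messages": state["messages"] + [msg], "steps": state["steps"] + 1}

def tool_node(state):
    msg = SimpleNamespace(tool_calls=[], content="72°F, Sunny")
    return {"messages": state["messages"] + [msg]}

def should_continue(state):
    return "tools" if state["messages"][-1].tool_calls else END

final = run_graph(
    nodes={"agent": agent_node, "tools": tool_node},
    edges={"agent": should_continue, "tools": "agent"},  # conditional vs. fixed edge
    state={"messages": [], "steps": 0},
)
print(final["messages"][-1].content)  # It is sunny.
```

The shape mirrors the real graph: `tools → agent` is a fixed edge, while the edge out of `agent` is decided by `should_continue` at runtime.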
ReAct (Reasoning and Acting) agents iteratively think, use tools, and act on observations:
Step 1: Define State

```python
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    steps: int
```
Step 2: Create Tools

```python
from langchain.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # API call here
    return f"Weather in {location}: 72°F, Sunny"

tools = [get_weather]
llm_with_tools = llm.bind_tools(tools)
```
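Under the hood, `bind_tools` advertises each tool's name, docstring, and parameters to the model as a schema, which is why clear docstrings matter. A rough stdlib-only illustration of that extraction (`tool_schema` is a hypothetical sketch, not the LangChain implementation):

```python
import inspect

def tool_schema(fn):
    """Derive a model-facing schema from a plain Python function."""
    sig = inspect.signature(fn)
    params = {
        name: getattr(p.annotation, "__name__", "any")
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }

def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: 72°F, Sunny"

print(tool_schema(get_weather))
# {'name': 'get_weather', 'description': 'Get current weather for a location.', 'parameters': {'location': 'str'}}
```

The model only ever sees this schema, not your code, so the docstring is effectively the tool's prompt.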
For simple cases, LangGraph's prebuilt `create_react_agent` builds the whole loop for you:

```python
from langgraph.prebuilt import create_react_agent
from langchain_nebius import ChatNebius
from langchain.tools import tool
import os

llm = ChatNebius(
    model="NousResearch/Hermes-4-70B",
    api_key=os.getenv("NEBIUS_API_KEY")
)

@tool
def company_research_tool(url: str) -> str:
    """Research a company from their website URL."""
    # Scraping logic here
    return "Company data..."

# Create agent in one line
agent = create_react_agent(
    model=llm,
    tools=[company_research_tool],
    prompt="You are a professional research assistant."
)

# Use the agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "Research company.com"}]
})
```
```bash
# .env file
NEBIUS_API_KEY=your_nebius_api_key
OPENAI_API_KEY=your_openai_api_key  # If using OpenAI

# For RAG applications
QDRANT_URL=your_qdrant_url
QDRANT_API_KEY=your_qdrant_api_key
```
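In application code these variables are usually loaded with `python-dotenv` (`load_dotenv()`). If you'd rather avoid the dependency, a minimal loader looks like this (`load_env` is a hypothetical helper, not a library function):

```python
import os

def load_env(path=".env"):
    """Read KEY=value lines into os.environ, skipping comments and blanks."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over the file
            os.environ.setdefault(key.strip(), value.strip())

load_env()
api_key = os.getenv("NEBIUS_API_KEY")
```

Note the `setdefault`: values already set in the shell environment take precedence over the file, matching `load_dotenv()`'s default behavior.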
Agent Loops Forever

Cap the number of iterations in your routing function:

```python
MAX_STEPS = 10  # pick a sensible cap for your use case

def should_continue(state):
    if state["steps"] > MAX_STEPS:
        return "end"
    ...
```
State Not Updating
Ensure nodes return state updates:
```python
def node(state):
    # ✓ Correct: return a dict with the updates
    return {"messages": [new_message], "steps": state["steps"] + 1}

def bad_node(state):
    # ✗ Wrong: mutating state directly
    state["messages"].append(new_message)  # Don't do this
```
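Returning a dict works because LangGraph merges each node's return value into the graph state, applying a reducer (such as `add_messages`) for annotated keys and overwriting the rest. Conceptually (the `apply_update` helper is a hypothetical sketch of that merge):

```python
def apply_update(state, update, reducers=None):
    """Merge a node's returned dict into state; reducers combine old and new values."""
    reducers = reducers or {}
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](state.get(key), value)
        else:
            merged[key] = value  # plain keys are overwritten
    return merged

# "messages" accumulates (like add_messages); "steps" is overwritten
state = {"messages": ["hi"], "steps": 1}
state = apply_update(
    state,
    {"messages": ["hello!"], "steps": 2},
    reducers={"messages": lambda old, new: (old or []) + new},
)
print(state)  # {'messages': ['hi', 'hello!'], 'steps': 2}
```

Mutating state in place bypasses this merge step entirely, which is why direct `append` calls appear to do nothing.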
Tools Not Being Called
- Verify tools are bound: `llm_with_tools = llm.bind_tools(tools)`
- Check that tool docstrings clearly describe what each tool does
- Use `ToolNode` for automatic tool execution
- Review your conditional edge logic
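`ToolNode`'s job, roughly, is to look up each tool the model requested, call it with the model's arguments, and append the result as a tool message. A stdlib-only sketch of that dispatch (`run_tool_calls` is hypothetical, not the LangGraph implementation):

```python
def run_tool_calls(tool_calls, registry):
    """Execute each requested tool and collect the results as tool messages."""
    messages = []
    for call in tool_calls:
        fn = registry[call["name"]]          # look up the tool by name
        result = fn(**call["args"])          # call it with the model's arguments
        messages.append({"role": "tool", "name": call["name"], "content": result})
    return messages

def get_weather(location: str) -> str:
    return f"Weather in {location}: 72°F, Sunny"

out = run_tool_calls(
    [{"name": "get_weather", "args": {"location": "Paris"}}],
    registry={"get_weather": get_weather},
)
print(out[0]["content"])  # Weather in Paris: 72°F, Sunny
```

If the model emits a tool name that isn't in the registry, or arguments that don't match the signature, this dispatch fails, which is another reason to keep tool names and docstrings unambiguous.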
Import Errors
LangChain has many optional dependencies:
```bash
# Install what you need
pip install langchain-openai     # For OpenAI/compatible APIs
pip install langchain-community  # For HuggingFace, etc.
pip install langchain-qdrant     # For the Qdrant vector store
```