A weather forecasting ReAct agent built from scratch using LangChain and LangGraph. Demonstrates stateful agent patterns with custom nodes, edges, and tool integration.

Overview

LangGraph is a framework for building stateful LLM applications, making it ideal for constructing ReAct (Reasoning and Acting) agents. This starter shows how to build a custom ReAct agent with full control over state management, tool execution, and decision flow.

Features

  • Custom ReAct agent implementation
  • Stateful graph-based architecture
  • Weather forecasting with Open-Meteo API
  • Conditional edge logic
  • Message history management
  • Full control over agent behavior

Prerequisites

  • Python 3.9+ with pip
  • A Nebius AI account and API key

Installation

1. Clone the repository

git clone https://github.com/Arindam200/awesome-ai-apps.git
cd starter_ai_agents/langchain_langgraph_starter

2. Install dependencies

pip install langgraph langchain litellm langchain-community geopy requests

3. Configure environment

Set your Nebius API key:

export NEBIUS_API_KEY=your_api_key_here

Implementation

State Definition

Define the agent’s state structure:
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    """The state of the agent."""
    messages: Annotated[Sequence[BaseMessage], add_messages]
    number_of_steps: int

Tool Implementation

Create a weather forecasting tool:
from langchain_core.tools import tool
from geopy.geocoders import Nominatim
from pydantic import BaseModel, Field
import requests

geolocator = Nominatim(user_agent="weather-app")

class SearchInput(BaseModel):
    location: str = Field(description="The city and state, e.g., San Francisco")
    date: str = Field(description="The forecast date in yyyy-mm-dd format")

@tool("get_weather_forecast", args_schema=SearchInput, return_direct=True)
def get_weather_forecast(location: str, date: str):
    """Retrieves the weather using Open-Meteo API for a given location and date."""
    location_obj = geolocator.geocode(location)
    if location_obj:
        try:
            response = requests.get(
                f"https://api.open-meteo.com/v1/forecast?"
                f"latitude={location_obj.latitude}&longitude={location_obj.longitude}&"
                f"hourly=temperature_2m&start_date={date}&end_date={date}",
                timeout=10,  # avoid hanging indefinitely on network issues
            )
            response.raise_for_status()
            data = response.json()
            return {
                time: temp 
                for time, temp in zip(
                    data["hourly"]["time"], 
                    data["hourly"]["temperature_2m"]
                )
            }
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": "Location not found"}

tools = [get_weather_forecast]
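The dict comprehension in the tool flattens Open-Meteo's parallel `time` and `temperature_2m` arrays into a single time → temperature mapping. A minimal, offline sketch of that step, using a hand-made sample payload (the values are invented for illustration):

```python
# Sample of the JSON shape returned by Open-Meteo's /v1/forecast endpoint
# (values invented for illustration).
sample = {
    "hourly": {
        "time": ["2024-01-01T00:00", "2024-01-01T01:00", "2024-01-01T02:00"],
        "temperature_2m": [1.4, 1.1, 0.9],
    }
}

# Same zip-based flattening used inside get_weather_forecast
forecast = {
    time: temp
    for time, temp in zip(sample["hourly"]["time"], sample["hourly"]["temperature_2m"])
}

print(forecast["2024-01-01T01:00"])  # → 1.1
```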

Model Setup

Initialize the LLM with Nebius AI:
from langchain_community.chat_models import ChatLiteLLM

# Create LLM instance
llm = ChatLiteLLM(model="nebius/Qwen/Qwen3-235B-A22B")

# Bind tools to the model
model = llm.bind_tools([get_weather_forecast])

Node Implementation

Define agent nodes for tool execution and model calls:
import json

from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig

tools_by_name = {tool.name: tool for tool in tools}

# Tool execution node
def call_tool(state: AgentState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),  # ToolMessage content must be a string
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}

# Model call node
def call_model(state: AgentState, config: RunnableConfig):
    response = model.invoke(state["messages"], config)
    return {"messages": [response]}

# Conditional edge
def should_continue(state: AgentState):
    messages = state["messages"]
    if not messages[-1].tool_calls:
        return "end"
    return "continue"
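The `should_continue` router only inspects the last message's `tool_calls` attribute. A dependency-free sketch of the two routes, using a plain dataclass as a stand-in for a real `AIMessage`:

```python
from dataclasses import dataclass, field

@dataclass
class FakeAIMessage:
    """Stand-in for an AIMessage; only the tool_calls attribute matters here."""
    tool_calls: list = field(default_factory=list)

def should_continue(state):
    # Same logic as the conditional edge above: keep routing to tools while
    # the model requests tool calls, otherwise finish.
    if not state["messages"][-1].tool_calls:
        return "end"
    return "continue"

print(should_continue({"messages": [FakeAIMessage()]}))  # → end
print(should_continue({"messages": [FakeAIMessage(tool_calls=[{"name": "get_weather_forecast"}])]}))  # → continue
```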

Graph Construction

Build the agent graph:
from langgraph.graph import StateGraph, END

# Define a new graph with our state
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("llm", call_model)
workflow.add_node("tools", call_tool)

# Set entrypoint
workflow.set_entry_point("llm")

# Add conditional edge
workflow.add_conditional_edges(
    "llm",
    should_continue,
    {
        "continue": "tools",
        "end": END,
    },
)

# Add edge from tools back to llm
workflow.add_edge("tools", "llm")

# Compile the graph
graph = workflow.compile()
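Conceptually, the compiled graph encodes a plain loop: call the model, execute any requested tools, feed the results back, and stop once the model answers without requesting a tool. A dependency-free sketch of that control flow (the stub model, stub tool, and message tuples are invented for illustration):

```python
def stub_model(messages):
    """Pretend model: requests the tool once, then answers directly."""
    if any(role == "tool" for role, _ in messages):
        return ("ai", "It is 1.1 °C in Berlin.", [])  # no tool calls -> end
    return ("ai", "", [{"name": "get_weather", "args": {"location": "Berlin"}}])

def stub_tool(args):
    """Pretend weather tool."""
    return {"temperature": 1.1}

def run_react_loop(messages):
    # Mirrors llm -> (tools -> llm)* -> END from the graph above.
    while True:
        role, content, tool_calls = stub_model(messages)
        messages.append((role, content))
        if not tool_calls:              # conditional edge: "end"
            return content
        for call in tool_calls:         # tools node, then back to llm
            messages.append(("tool", str(stub_tool(call["args"]))))

answer = run_react_loop([("user", "Weather in Berlin?")])
print(answer)  # → It is 1.1 °C in Berlin.
```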

Running the Agent

from datetime import datetime

# Create initial message (the tool expects a yyyy-mm-dd date)
today = datetime.today().strftime("%Y-%m-%d")
inputs = {
    "messages": [("user", f"What is the weather in Berlin on {today}?")]
}

# Stream the agent's execution
for state in graph.stream(inputs, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()

Usage

You can run this code in a Jupyter notebook or Python script:
# Initial query
inputs = {"messages": [("user", "What is the weather in Berlin today?")]}

for state in graph.stream(inputs, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()

# Follow-up query (maintains conversation state)
state["messages"].append(("user", "Would it be warmer in Reykjavik?"))

for state in graph.stream(state, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()

Technical Details

LangGraph Architecture

  • State: shared data structure tracking the conversation and step count
  • Nodes: logic units for LLM calls and tool execution
  • Edges: control flow between nodes (fixed or conditional)

Key Components

State Management
  • Uses TypedDict for state structure
  • add_messages reducer for message list management
  • Tracks conversation history and metadata
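The `add_messages` reducer appends incoming messages to the existing list instead of overwriting it, replacing any message with a matching id. A simplified, dependency-free sketch of that merge behavior (the real reducer also handles type coercion and deletions):

```python
def add_messages_sketch(existing, updates):
    """Simplified reducer: append updates, replacing messages whose id
    already exists (dicts stand in for real message objects)."""
    by_id = {m["id"]: m for m in existing}
    for m in updates:
        by_id[m["id"]] = m
    return list(by_id.values())

history = [{"id": "1", "role": "user", "content": "Hi"}]
update = [{"id": "2", "role": "ai", "content": "Hello!"}]
merged = add_messages_sketch(history, update)
print(len(merged))  # → 2
```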
Tool Integration
  • LangChain tool decorator for easy definition
  • Pydantic models for input validation
  • Direct return option for immediate responses
Graph Flow
  • Execution starts at the llm node
  • A conditional edge routes to tools when the model requests a tool call, or to END otherwise
  • tools loops back to llm until the model produces a final answer

Extending the Agent

Add More Tools

@tool
def get_historical_weather(location: str, date: str):
    """Get historical weather data for a location and date."""
    # Implementation goes here (e.g., query a historical weather API)
    ...

tools = [get_weather_forecast, get_historical_weather]
model = llm.bind_tools(tools)

Add State Tracking

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    number_of_steps: int
    user_preferences: dict  # Track user preferences
    error_count: int        # Track errors

def call_model(state: AgentState, config: RunnableConfig):
    response = model.invoke(state["messages"], config)
    return {
        "messages": [response],
        "number_of_steps": state["number_of_steps"] + 1
    }

Add More Nodes

def validate_output(state: AgentState):
    """Validate the agent's output before returning."""
    last_message = state["messages"][-1]
    # Validation logic goes here
    return {}  # return only the updates you want applied to state

workflow.add_node("validator", validate_output)
workflow.add_edge("llm", "validator")

Use Checkpointing

from langgraph.checkpoint.memory import MemorySaver

# Add memory for conversation persistence
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)

# Use with thread_id for conversation tracking
config = {"configurable": {"thread_id": "conversation-1"}}
for state in graph.stream(inputs, config, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()

Best Practices

  • Keep state minimal and focused
  • Use type hints for clarity
  • Leverage reducers like add_messages
  • Document state fields clearly
  • Keep nodes focused on single responsibilities
  • Handle errors gracefully within nodes
  • Return state updates, not full state
  • Use async nodes for I/O operations
  • Start simple, add complexity as needed
  • Use conditional edges for branching logic
  • Test each node independently
  • Visualize with draw_mermaid_png()
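"Handle errors gracefully within nodes" can be as simple as a wrapper that converts exceptions into state updates instead of crashing the run. A dependency-free sketch, using an `error_count` field like the one in the extended `AgentState` (the field names and message format are illustrative):

```python
def safe_node(node_fn):
    """Wrap a node so failures become an error entry in state instead of
    aborting the whole graph run (sketch; adapt the update shape as needed)."""
    def wrapped(state):
        try:
            return node_fn(state)
        except Exception as exc:
            return {
                "messages": [("system", f"Node failed: {exc}")],
                "error_count": state.get("error_count", 0) + 1,
            }
    return wrapped

def flaky_node(state):
    raise RuntimeError("upstream API timeout")

update = safe_node(flaky_node)({"error_count": 0})
print(update["error_count"])  # → 1
```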

Visualization

Visualize your graph structure:
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))

Comparison with Pre-built Agents

LangGraph offers create_react_agent for quick setup, but building from scratch gives you:
  • Full control over state structure
  • Custom node logic
  • Flexible edge conditions
  • Better debugging capabilities
  • Easier customization

Next Steps

  • Advanced LangGraph: build complex graph-based workflows
  • Multi-Agent Systems: create agent teams with LangGraph
