Graph API
The Graph API is LangGraph's low-level, imperative API for building stateful, multi-actor agent workflows. It gives you fine-grained control over the topology of your application through a graph-based mental model.
Core Concepts: State, Nodes, Edges
Every Graph API application is built from three fundamental components:
| Component | What It Is | Role |
|---|---|---|
| State | A structured schema defining the data shape flowing through the graph | The memory of your application |
| Nodes | Python functions that read and produce state | The computational units |
| Edges | Connections defining how execution flows between nodes | The routing logic |
Message Passing and Super-Steps (Pregel-Inspired)
LangGraph's execution model is inspired by Google's Pregel framework for large-scale graph processing. It operates in super-steps:
- Plan — The runtime determines which nodes to execute based on incoming channel updates
- Execute — Selected nodes run in parallel, each producing updates to their output channels
- Update — Updates are applied to the state via reducers
This cycle repeats until no nodes are scheduled for execution. Each iteration is called a super-step. All nodes within a super-step that have pending input execute concurrently.
Message passing happens through channels — named buffers associated with each state key. Nodes read from their subscribed channels, compute, and write results back.
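The plan/execute/update cycle can be sketched in plain Python. This is an illustrative toy, not langgraph's implementation: `run_supersteps`, the node table, and the reducer table are all invented here to show the shape of the loop.

```python
# Minimal Pregel-style loop: nodes subscribe to channels, run when their
# subscribed channels were updated in the previous super-step, and write
# updates that are merged by a per-channel reducer.

def run_supersteps(nodes, reducers, state, updated, max_steps=10):
    """nodes: {name: (subscribed_channels, fn)}; fn(state) -> {channel: value}."""
    for _ in range(max_steps):
        # Plan: select nodes whose subscribed channels saw updates
        active = [fn for subs, fn in nodes.values() if updated & set(subs)]
        if not active:
            break
        # Execute: run all active nodes on the same snapshot (conceptually parallel)
        writes = [fn(dict(state)) for fn in active]
        # Update: merge each write through the channel's reducer
        updated = set()
        for write in writes:
            for channel, value in write.items():
                state[channel] = reducers[channel](state.get(channel), value)
                updated.add(channel)
    return state

nodes = {
    "double": (["x"], lambda s: {"y": s["x"] * 2}),
    "report": (["y"], lambda s: {"log": [f"y={s['y']}"]}),
}
reducers = {
    "x": lambda old, new: new,                  # overwrite
    "y": lambda old, new: new,                  # overwrite
    "log": lambda old, new: (old or []) + new,  # append
}
state = run_supersteps(nodes, reducers, {"x": 3}, updated={"x"})
print(state)  # {'x': 3, 'y': 6, 'log': ['y=6']}
```

Note how `report` only runs in the second super-step, after `double` has written to the `y` channel, and the loop stops once no channel that any node subscribes to was updated.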
StateGraph Class
StateGraph is the primary class for defining graphs:
```python
from langgraph.graph import StateGraph

graph = StateGraph(MyState)
```
The constructor accepts:
- state_schema — A TypedDict, dataclass, or Pydantic BaseModel defining the state shape
- context_schema — Optional schema for runtime context passed at invocation (named config_schema in older releases)
Compiling Your Graph
Once your graph is defined, call .compile() to produce a CompiledGraph:
```python
app = graph.compile()
app.invoke({"messages": [HumanMessage("hello")]})
```
Options for compile():
- checkpointer — Persist state between invocations (e.g., SqliteSaver, PostgresSaver)
- interrupt_before / interrupt_after — Nodes to pause at for human-in-the-loop
- store — Long-term memory store (BaseStore implementation)
- name — Name for the compiled graph
State
Defining State Shape (Schema)
Use one of three approaches:
TypedDict (most common):
```python
from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    score: int
    name: str
```
Dataclass:
```python
from dataclasses import dataclass, field

@dataclass
class State:
    messages: list = field(default_factory=list)
    score: int = 0
```
Pydantic BaseModel (for validation):
```python
from pydantic import BaseModel

class State(BaseModel):
    messages: list = []
    score: int = 0
```
Reducers
Reducers control how channel updates are merged with existing state. By default, the overwrite reducer is used — new values replace old values.
Default reducer (overwrite):
```python
class State(TypedDict):
    score: int  # no annotation = overwrite
```
Custom reducers with Annotated:
```python
class State(TypedDict):
    messages: Annotated[list, add_messages]  # append + message merging
    score: Annotated[int, operator.add]      # accumulated sum
    urls: Annotated[list, custom_reducer]    # custom merge logic
```
Key built-in reducers:
- add_messages — Appends messages, merges by ID, handles HumanMessage/AIMessage/ToolMessage
- operator.add — Numeric accumulation
- Custom functions of form (existing_value, new_value) -> merged_value
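Since every reducer is just a function of `(existing_value, new_value)`, the semantics can be demonstrated without langgraph at all. The `overwrite` and `dedup_extend` helpers below are invented for illustration:

```python
import operator

# Applying the reducer styles by hand (illustrative, not langgraph internals)
def overwrite(existing, new):
    # Default behaviour for unannotated keys: new value replaces the old
    return new

def dedup_extend(existing, new):
    # Custom reducer: append only URLs not already present
    return existing + [u for u in new if u not in existing]

state = {"score": 1, "urls": ["a.com"]}
update = {"score": 5, "urls": ["a.com", "b.com"]}

state["score"] = overwrite(state["score"], update["score"])  # -> 5
total = operator.add(3, 4)                                   # operator.add as a sum reducer -> 7
state["urls"] = dedup_extend(state["urls"], update["urls"])  # -> ['a.com', 'b.com']
print(state)  # {'score': 5, 'urls': ['a.com', 'b.com']}
```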
MessagesState
A pre-built state for chatbot/agent use:
```python
from langgraph.graph import MessagesState

# Equivalent to:
class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
```
Includes the add_messages reducer which:
- Appends new messages by default
- Merges messages with matching IDs (updating in place)
- For HumanMessage: appends if id not present, merges if id matches
- For AIMessage: merges by id, updates tool calls
- For ToolMessage: matches by tool_call_id against existing AIMessage.tool_calls
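The core merge-by-ID behaviour can be sketched as a plain reducer over message dicts. This is a simplification of what add_messages does, not its actual implementation:

```python
def merge_by_id(existing, new):
    """Append messages with unseen ids; replace messages whose id matches."""
    by_id = {m["id"]: i for i, m in enumerate(existing)}
    merged = list(existing)
    for msg in new:
        if msg["id"] in by_id:
            merged[by_id[msg["id"]]] = msg  # matching id: update in place
        else:
            merged.append(msg)              # unseen id: append
    return merged

history = [{"id": "1", "content": "hello"}]
out = merge_by_id(history, [
    {"id": "1", "content": "hello, edited"},  # replaces the existing message
    {"id": "2", "content": "world"},          # appended as new
])
print([m["content"] for m in out])  # ['hello, edited', 'world']
```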
Working with Messages
```python
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage

# Adding messages
state = {"messages": [HumanMessage(content="hello")]}
result = app.invoke(state)
# result["messages"] contains the full message history

# Accessing latest
latest = result["messages"][-1]

# Filtering by type
ai_messages = [m for m in result["messages"] if isinstance(m, AIMessage)]
```
Serialization with add_messages
The add_messages reducer handles serialization gracefully:
- Messages are automatically serialized/deserialized with checkpointer
- Tool calls in AIMessages preserve their IDs and arguments
- Message IDs ensure idempotent updates across checkpoints
Nodes
Nodes are Python functions that receive the current state and return updates:
```python
def my_node(state: State, config: RunnableConfig, *, runtime: Runtime) -> dict:
    # Return a partial update; reducers merge it into the existing state
    return {"score": state["score"] + 1}
```
Node Signature
A node function receives:
- state — The current state (typed per your state schema)
- config — RunnableConfig with configurable fields, recursion_limit, thread_id, etc.
- runtime — Keyword-only Runtime object (see Runtime Context below)
Returns:
- dict — Partial state update (merged via reducers)
- None — No updates
Adding Nodes
```python
graph.add_node("process", my_node)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
```
START and END Nodes
Special sentinel nodes:
```python
from langgraph.graph import START, END

# Entry point
graph.add_edge(START, "agent")

# Terminal condition
graph.add_conditional_edges("agent", should_continue, {
    "continue": "tools",
    "end": END,
})
```
Node Caching
Cache node results to skip re-execution on repeated calls:
```python
from langgraph.cache.memory import InMemoryCache
from langgraph.types import CachePolicy

graph = StateGraph(State)
graph.add_node("expensive_node", my_fn, cache_policy=CachePolicy(ttl=3600))
app = graph.compile(cache=InMemoryCache())
```
Options:
- InMemoryCache — Ephemeral, per-process cache
- CachePolicy — Per-node policy with an optional ttl (seconds) and a key_func that derives the cache key from the node's input
- Pass cache to compile() to enable caching, and attach a CachePolicy to each node that should use it via add_node
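The behaviour amounts to TTL-keyed memoisation of the node function. A pure-Python sketch of the idea (not langgraph's cache implementation; `cached_node` is invented here):

```python
import time

def cached_node(fn, ttl_seconds):
    """Memoise fn keyed on its (hashable) input, with per-entry expiry."""
    cache = {}
    def wrapper(key):
        now = time.monotonic()
        if key in cache:
            value, stored_at = cache[key]
            if now - stored_at < ttl_seconds:
                return value  # cache hit: skip re-execution
        value = fn(key)
        cache[key] = (value, now)  # cache miss or expired: recompute and store
        return value
    return wrapper

calls = []
def expensive(x):
    calls.append(x)
    return x * 10

node = cached_node(expensive, ttl_seconds=60)
print(node(3), node(3), len(calls))  # 30 30 1  (second call served from cache)
```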
Edges
Normal Edges
Direct, unconditional transitions:
```python
graph.add_edge("node_a", "node_b")  # A always goes to B
graph.add_edge(START, "agent")      # entry point
graph.add_edge("finish", END)       # terminal
```
Conditional Edges
Dynamic routing based on node output:
```python
def route_decision(state: State) -> str:
    if state["score"] > 10:
        return "finish"
    return "continue"

graph.add_conditional_edges("agent", route_decision, {
    "continue": "tools",
    "finish": END,
})
```
The routing function receives the state and returns a key that maps to the next node. You can also return a list[Send] for fan-out routing (see Send API below).
Entry Point
```python
# Standard entry
graph.add_edge(START, "agent")

# Conditional entry point
def select_entry(state: State) -> str:
    return "new_user" if state.get("is_new") else "returning"

graph.add_conditional_edges(START, select_entry, {
    "new_user": "onboarding",
    "returning": "main",
})
```
Send API (Map-Reduce)
Execute a node multiple times in parallel with different arguments, then aggregate results. Powered by a special Send type:
```python
from langgraph.types import Send

def continue_to_jokes(state):
    return [Send("generate_joke", {"topic": t}) for t in state["topics"]]

graph.add_node("generate_joke", joke_node)
graph.add_conditional_edges("fanout", continue_to_jokes, ["generate_joke"])
```
Each Send produces an independent parallel execution of the target node. Results are merged back into state via reducers after all parallel branches complete.
Pattern:
1. Fan-out node returns list[Send] via conditional edge
2. Target node runs in parallel for each Send
3. Results accumulate in a shared state key (requires a reducer like operator.add)
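Conceptually this is ordinary map-reduce. The sketch below uses a thread pool to stand in for the parallel super-step; it is illustrative only, since Send itself carries each branch's private input within the graph runtime:

```python
from concurrent.futures import ThreadPoolExecutor

def tell_joke(payload):
    # Target node: runs once per fan-out branch with its own private input
    return {"jokes": [f"Why did the {payload['topic']} cross the road?"]}

state = {"topics": ["cat", "dog"], "jokes": []}

# Fan-out: one payload per topic (what a list[Send] expresses)
payloads = [{"topic": t} for t in state["topics"]]

# Parallel execution of the target node
with ThreadPoolExecutor() as pool:
    results = list(pool.map(tell_joke, payloads))

# Reduce: merge branch results through the jokes reducer (list concatenation)
for update in results:
    state["jokes"] = state["jokes"] + update["jokes"]
print(state["jokes"])
```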
Command
Command provides simultaneous state updates and routing in a single return value:
```python
from langgraph.types import Command

def my_node(state, config, *, runtime):
    return Command(
        update={"score": 10},  # state updates
        goto="next_node",      # route to a specific node
    )
```
Command Parameters
| Parameter | Type | Description |
|---|---|---|
| update | dict | State updates (merged via reducers) |
| goto | str or list[Send] | Route to a node or fan out via Send |
| graph | str | Graph to send the command to (e.g. Command.PARENT to target the parent graph) |
| resume | any | Resume from an interrupt() with a value |
Command Return Types
```python
def my_node(state, config, *, runtime) -> Command[Literal["node_a", "node_b"]]:
    return Command(goto="node_a")
```
The Literal type annotation constrains the allowed routing targets, providing type safety.
Runtime Context
Pass arbitrary runtime configuration via context_schema:
```python
class MyConfig(TypedDict):
    user_id: str
    api_key: str

graph = StateGraph(State, context_schema=MyConfig)
```
Access invocation configuration (thread_id and other configurable fields) via config:
```python
def my_node(state, config: RunnableConfig):
    thread_id = config["configurable"]["thread_id"]
```
Access the typed context via the runtime parameter:
```python
def my_node(state, config, *, runtime: Runtime):
    context = runtime.context  # typed context (MyConfig)
    user_id = context["user_id"]
    store = runtime.store      # long-term memory store, if one was configured
```
Recursion Limit and RemainingSteps
```python
app.invoke(state, {"recursion_limit": 25})  # default: 25
```
In a node, read the RemainingSteps managed value from state:
```python
from langgraph.managed import RemainingSteps

class State(TypedDict):
    remaining_steps: RemainingSteps  # populated by the runtime each super-step

def my_node(state: State) -> Command:
    if state["remaining_steps"] < 5:
        return Command(goto="finish")  # exit early
    ...
```
remaining_steps tracks how many super-steps remain before the recursion limit is hit. Use it to exit long-running graphs gracefully or to implement budget-aware agents.
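The budgeting idea itself is simple: decrement a step counter each iteration and bail out before it reaches zero. A plain-Python sketch of the pattern, independent of langgraph (`run_agent` and its parameters are invented for illustration):

```python
def run_agent(task_steps, recursion_limit=25, safety_margin=5):
    """Loop over work items, but exit early once the step budget runs low."""
    remaining = recursion_limit
    done = []
    for step in task_steps:
        if remaining < safety_margin:
            return done, "exited early: step budget low"
        done.append(step)
        remaining -= 1  # each super-step consumes one unit of budget
    return done, "completed"

# 30 steps of work against a budget of 25: the agent stops before exhaustion
done, status = run_agent(list(range(30)), recursion_limit=25, safety_margin=5)
print(len(done), status)  # 21 exited early: step budget low
```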
Graph Migrations
When state schema changes between versions, use migrations to handle older checkpoints:
```python
from langgraph.checkpoint.base import BaseCheckpointMigration

class V1ToV2Migration(BaseCheckpointMigration):
    def migrate(self, checkpoint):
        if "old_field" in checkpoint["channel_values"]:
            checkpoint["channel_values"]["new_field"] = checkpoint["channel_values"].pop("old_field")
        return checkpoint
```
Pass to compile():
```python
app = graph.compile(checkpointer=saver, migrations=[V1ToV2Migration()])
```
Migrations run transparently when loading older checkpoints from the checkpointer.
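Independent of any particular API, a migration is just a function from the old checkpoint shape to the new one. A pure-Python sketch of the channel rename above (the checkpoint dict shape here is simplified for illustration):

```python
def migrate_v1_to_v2(checkpoint):
    """Rename the old_field channel to new_field, leaving other channels intact."""
    values = checkpoint["channel_values"]
    if "old_field" in values:
        values["new_field"] = values.pop("old_field")
    return checkpoint

old = {"channel_values": {"old_field": 42, "messages": []}}
new = migrate_v1_to_v2(old)
print(new["channel_values"])  # {'messages': [], 'new_field': 42}
```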
Visualization
Generate Mermaid diagrams or PNG images:
```python
# Mermaid markdown
print(app.get_graph().draw_mermaid())

# PNG (draw_mermaid_png renders via the mermaid.ink web service by default;
# draw_png requires pygraphviz)
with open("graph.png", "wb") as f:
    f.write(app.get_graph().draw_mermaid_png())

# Direct display in notebooks
from IPython.display import Image
Image(app.get_graph().draw_mermaid_png())
```
Full Example: Chatbot with Map-Reduce Fan-Out
```python
import operator
from typing import TypedDict, Annotated, Literal

from langchain_core.messages import HumanMessage, AIMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.types import Send

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    topics: Annotated[list, operator.add]
    jokes: Annotated[list, operator.add]

def chatbot(state: AgentState) -> dict:
    # Calls an LLM in a real app; stubbed here
    response = AIMessage(content="I'll generate jokes on those topics")
    return {"messages": [response]}

def should_continue(state: AgentState) -> Literal["fanout", "end"]:
    return "fanout" if state.get("topics") else "end"

def fanout(state: AgentState) -> None:
    # Pass-through node; the Send fan-out happens on its outgoing conditional edge
    return None

def fan_out(state: AgentState) -> list[Send]:
    # Each Send invokes tell_joke with its own private input dict
    return [Send("tell_joke", {"topic": t}) for t in state["topics"]]

def tell_joke(state: dict) -> dict:
    # The Send payload becomes this node's state
    topic = state["topic"]
    return {"jokes": [f"Why did the {topic} cross the road?"]}

# Build graph
builder = StateGraph(AgentState)
builder.add_node("chatbot", chatbot)
builder.add_node("fanout", fanout)
builder.add_node("tell_joke", tell_joke)
builder.add_edge(START, "chatbot")
builder.add_conditional_edges("chatbot", should_continue, {
    "fanout": "fanout",
    "end": END,
})
builder.add_conditional_edges("fanout", fan_out, ["tell_joke"])
builder.add_edge("tell_joke", END)
app = builder.compile()

result = app.invoke({
    "messages": [HumanMessage(content="Tell me jokes about cats and dogs")],
    "topics": ["cat", "dog"],
})
print(result["jokes"])
# ['Why did the cat cross the road?', 'Why did the dog cross the road?']
```
Summary
| Concept | What |
|---|---|
| StateGraph | Primary class — define state shape, add nodes/edges, compile |
| State | TypedDict/dataclass/Pydantic model with Annotated reducers |
| Nodes | Functions receiving (state, config, *, runtime), returning dict or Command |
| Edges | Normal (unconditional), Conditional (function-based routing), or Send (parallel fan-out) |
| Command | Combined state update + routing in one return value |
| Compile | Produces runnable CompiledGraph with optional checkpointer, store, migrations |
| Runtime | Access to typed context and the long-term store within nodes |
For the underlying execution model, see the Pregel guide. For the functional alternative, see the Functional API guide. For practical how-to patterns, see the Use Graph API guide.