Workflows and Agents

Status: ACTIVE (pulled from docs.langchain.com)
Source: https://docs.langchain.com/oss/python/langgraph/workflows-agents
Timestamp: 2026-05-11

Key Distinction

Workflows follow predefined code paths orchestrated in a fixed order; agents let the LLM dynamically direct its own tool use and control flow. The patterns below move from fully scripted workflows toward fully agentic loops.

LLM Augmentations

Before building workflows, augment your LLM with structured output and tool calling:

from pydantic import BaseModel, Field
from langchain.chat_models import init_chat_model

llm = init_chat_model("openai:gpt-4o")  # any chat model supported by LangChain works here

# Structured output
class SearchQuery(BaseModel):
    search_query: str = Field(description="Query optimized for web search")
    justification: str = Field(description="Why this query answers the user's request")

structured_llm = llm.with_structured_output(SearchQuery)

# Tool calling
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

llm_with_tools = llm.bind_tools([multiply])
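
When a function is bound as a tool, its signature and docstring are turned into a schema the model can read. A stdlib-only sketch of that derivation, without calling LangChain (the helper tool_schema is hypothetical, not a LangChain API):

```python
import inspect
from typing import get_type_hints

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

def tool_schema(fn):
    # Derive a minimal tool description from the function itself.
    hints = get_type_hints(fn)
    hints.pop("return", None)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {name: t.__name__ for name, t in hints.items()},
    }

schema = tool_schema(multiply)
```

This is why descriptive names, type hints, and docstrings matter: they are all the model sees when deciding whether and how to call the tool.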

6 Common Patterns

1. Prompt Chaining

Each LLM call processes the output of the previous one. Best for tasks that decompose cleanly into fixed, verifiable steps.

Generate joke -> Check punchline -> (Improve joke -> Polish joke) -> END
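
The control flow above can be sketched without real LLM calls; the stub functions below stand in for model invocations, and all names are illustrative:

```python
def generate_joke(topic: str) -> str:
    # Stub for the first LLM call.
    return f"Why did the {topic} cross the road? To get to the other side."

def has_punchline(joke: str) -> bool:
    # Gate: programmatic check between steps, e.g. "does it end in a punchline?"
    return "?" in joke and joke.rstrip().endswith(".")

def improve_joke(joke: str) -> str:
    return joke + " (Now with a twist.)"

def polish_joke(joke: str) -> str:
    return joke.upper()

def prompt_chain(topic: str) -> str:
    joke = generate_joke(topic)
    if not has_punchline(joke):
        return joke  # failed the check: stop early instead of polishing a bad draft
    return polish_joke(improve_joke(joke))
```

The gate between steps is the point of the pattern: each stage's output is verifiable before the next LLM call spends tokens on it.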

2. Parallelization

Multiple LLM calls run simultaneously, either on independent subtasks or on the same task to vote or build confidence.

START -> (LLM 1 | LLM 2 | LLM 3) -> Aggregator -> END
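
A minimal sketch of the fan-out/aggregate shape using only the standard library; fake_llm stands in for a model call:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_llm(prompt: str) -> str:
    # Stub standing in for one LLM call.
    return f"answer to: {prompt}"

def fan_out(prompts):
    # Run all calls concurrently; map preserves input order.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(fake_llm, prompts))
    return " | ".join(results)  # aggregator step
```

In LangGraph the same shape is expressed as parallel edges from START into several nodes that all feed an aggregator node.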

3. Routing

Classify input, route to specialized handler.

START -> Router -> (Story handler | Joke handler | Poem handler) -> END
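
The routing step reduces to classify-then-dispatch. A sketch with a keyword classifier standing in for a structured-output LLM (all handler text is illustrative):

```python
def classify(user_input: str) -> str:
    # Stub router: a real system would use an LLM with structured output here.
    for kind in ("story", "joke", "poem"):
        if kind in user_input.lower():
            return kind
    return "joke"  # default route

handlers = {
    "story": lambda x: "Once upon a time...",
    "joke": lambda x: "Why did...",
    "poem": lambda x: "Roses are red...",
}

def route(user_input: str) -> str:
    return handlers[classify(user_input)](user_input)
```

In LangGraph, classify becomes the function passed to add_conditional_edges and each handler becomes a specialized node.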

4. Orchestrator-Worker

Orchestrator breaks task into subtasks, delegates to workers, synthesizes results.

Uses Send API to dynamically spawn workers:

from langgraph.types import Send

def assign_workers(state: State):
    # Used as a conditional edge: spawn one llm_call worker per planned section
    return [Send("llm_call", {"section": s}) for s in state["sections"]]

Worker results accumulate in shared state through an operator.add reducer (declared with Annotated), so parallel writes concatenate into one list instead of overwriting each other.
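
The plan/delegate/synthesize cycle and the reducer semantics can be sketched without LangGraph; operator.add on lists is plain concatenation, which is exactly why parallel worker writes merge cleanly (all function names here are illustrative stubs):

```python
import operator

def plan(topic):
    # Orchestrator: break the task into sections (stub for an LLM planning call).
    return [f"{topic}: intro", f"{topic}: body", f"{topic}: conclusion"]

def write_section(section):
    # Worker: stub for one LLM call. Returns a list so the reducer can concatenate.
    return [f"draft of {section}"]

def orchestrate(topic):
    completed = []
    for section in plan(topic):
        # In LangGraph, each worker's return value is merged via operator.add.
        completed = operator.add(completed, write_section(section))
    return "\n".join(completed)  # synthesis step
```

The difference from plain parallelization is that the number of workers is not known when the graph is built; the orchestrator decides it at runtime, which is what Send enables.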

5. Evaluator-Optimizer

One LLM generates, another evaluates. Loops until quality threshold met.

Generator -> Evaluator -> (Accepted: END | Rejected: back to Generator)
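
The loop is a bounded generate/grade cycle where the evaluator's feedback feeds the next generation. A stub sketch (a real generator and evaluator would each be LLM calls, the evaluator typically with structured output):

```python
def generate(feedback=None):
    # Stub generator: a real one would prompt an LLM with the feedback.
    return "joke v2 (funnier)" if feedback else "joke v1"

def evaluate(joke):
    # Stub evaluator: returns (accepted, feedback).
    if "funnier" in joke:
        return True, ""
    return False, "make it funnier"

def optimize(max_rounds=5):
    feedback = None
    draft = ""
    for _ in range(max_rounds):
        draft = generate(feedback)
        accepted, feedback = evaluate(draft)
        if accepted:
            break  # quality threshold met
    return draft
```

Bounding the loop (max_rounds) matters in practice: an evaluator that never accepts would otherwise iterate forever.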

6. Agents

The LLM runs in a continuous tool-calling loop: it chooses which tools to call, observes the results, and repeats until no further calls are needed.

from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

builder = StateGraph(MessagesState)
builder.add_node("llm_call", llm_call)          # llm_call invokes the tool-bound model
builder.add_node("tool_node", ToolNode(tools))  # tools: list of tool callables
builder.add_edge(START, "llm_call")
builder.add_conditional_edges("llm_call", should_continue, ["tool_node", END])
builder.add_edge("tool_node", "llm_call")
agent = builder.compile()

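Stripped of graph machinery, the cycle above is a plain while loop: call the model, execute any requested tools, feed results back, stop when no tools are requested. A self-contained sketch with a scripted stand-in for the model (fake_llm_call and the message dicts are illustrative, not a LangChain API):

```python
def fake_llm_call(messages, tools):
    # Scripted stub: request the multiply tool once, then give a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "multiply", "args": {"a": 6, "b": 7}}]}
    return {"role": "ai", "content": "The answer is 42.", "tool_calls": []}

def agent_loop(user_input, tools):
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_llm_call(messages, tools)   # "llm_call" node
        messages.append(reply)
        if not reply["tool_calls"]:              # should_continue -> END
            return reply["content"]
        for call in reply["tool_calls"]:         # "tool_node"
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})

tools = {"multiply": lambda a, b: a * b}
```

The conditional edge in the graph corresponds to the single if statement here: the presence or absence of tool calls in the model's reply decides whether to loop or terminate.
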
ToolNode

Prebuilt node that handles parallel tool execution, error handling, and state injection automatically:

from langgraph.prebuilt import ToolNode

builder.add_node("tools", ToolNode([search, calculator]))