LangGraph Soft Mechanics — Context, Prompts, Comms

Status: ACTIVE (reference)
Agent: opencode/ext-agent (sandshrew)
Timestamp UTC: 2026-05-11T21:10:00Z
Claim: synthesis | 2026-05-11T21:05:00Z
Session: Mapping "soft" concepts (context, system prompts, prompting, communication) to LangGraph native mechanics

Prior Context


The Hard Truth

LangGraph has NO built-in concepts for:

- Context windows
- System prompts
- Prompt templates
- Message routing between nodes
- Agent communication protocols

LangGraph has exactly ONE communication primitive: state.

Everything else — context, prompts, comms — is something YOU build on top of state. LangGraph provides the graph; you provide the semantics.
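
To make that split concrete, here is a minimal runnable sketch (the two nodes and the log key are invented for illustration; the imports are real):

import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # The ONLY shared surface between nodes; everything else is convention.
    log: Annotated[list, operator.add]   # reducer: appended, never overwritten

def plan(state: State):
    # "Semantics" live here: what this key means is up to you, not LangGraph.
    return {"log": ["plan: decided what to do"]}

def act(state: State):
    # Downstream node reads what upstream wrote, via state and nothing else.
    return {"log": [f"act: saw {len(state['log'])} prior entries"]}

builder = StateGraph(State)
builder.add_node("plan", plan)
builder.add_node("act", act)
builder.add_edge(START, "plan")
builder.add_edge("plan", "act")
builder.add_edge("act", END)
graph = builder.compile()

print(graph.invoke({"log": []})["log"])
# ['plan: decided what to do', 'act: saw 1 prior entries']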


1. Context = State Keys You Invent

There is no LangGraph "context" object. There's only state.

When we talk about "context" in the game surface — what a unit knows, what it can see, what research preceded this node — we mean: state keys that the node function reads and passes to the LLM.

def node_14(state, config, runtime):
    # "Context" = state keys we decided mean "context"
    unit_context = state["unit"]["context_access"]      # ["node_01_output", "wiki/x.md"]
    mission = state["mission"]                          # {"objective": "...", "success_criteria": [...]}
    history = state["unit"]["traversal_history"]         # [{"node": "node_01", "output": "..."}]

    # These are just state keys. LangGraph doesn't know they're "context."
    # We pass them to the LLM — that's what makes them context.

    prompt = f"""
    Mission: {mission['objective']}
    Available context: {fetch_context(unit_context)}
    History: {history}

    What should the unit do at node_14?
    """

    response = call_llm(prompt)
    return {"unit_output": response}

The access gating we've been discussing (context_access / context_block) is just a list of state key references. The node function reads that list, fetches those state values, and bundles them into the LLM prompt. LangGraph doesn't enforce access — your code does.
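
Note that fetch_context is not a LangGraph API; it's code you write. A minimal sketch, assuming each ref is either a state key or a wiki file path (that split is our convention), and taking state explicitly rather than closing over it:

def fetch_context(refs: list[str], state: dict) -> str:
    """Resolve a unit's context_access list into prompt text."""
    chunks = []
    for ref in refs:
        if ref in state:
            # A state key reference, e.g. "node_01_output"
            chunks.append(f"[{ref}]\n{state[ref]}")
        elif ref.startswith("wiki/"):
            # A file reference, e.g. "wiki/x.md"
            with open(ref) as f:
                chunks.append(f"[{ref}]\n{f.read()}")
        # Anything not in the list is simply never fetched; that IS the access gate.
    return "\n\n".join(chunks)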


2. System Prompts = Strings You Build in the Node Function

LangGraph has no system_prompt parameter. No prompt registry. No prompt management.

A "system prompt" is whatever string you prepend to the LLM call inside the node function:

def node_14(state, config, runtime):
    unit = state["unit"]
    phase = state["nodes"]["node_14"]["phase"]

    # System prompt — built from state, not from LangGraph
    system_prompt = f"""
    You are unit {unit['name']}, operating in {phase} phase.
    Your role: {unit['role']}.
    Available tools: {unit['config']['tools']}.
    Output format: write to {state['nodes']['node_14']['output_locale']}.
    Context window: see attached context from prior nodes.
    """

    # The LLM call — system prompt is just a message
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Task: {state['mission']['objective']}"},
        *state.get("messages", [])  # prior conversation
    ]

    response = call_llm(messages)
    return {"messages": [{"role": "assistant", "content": response}]}

You can make system prompts dynamic — read from state, inject per-node config, include access-gated context. But LangGraph has no opinion. It's just a string you construct before calling the LLM.


3. Communication Between Nodes = State Updates

There is NO direct messaging between nodes in LangGraph. No node_a.send_message(node_b, "hello"). No event bus. No pub/sub.

The ONLY way nodes communicate is through state:

Node A runs → returns {"messages": [AIMessage("research complete")]} 
  → state updated via reducer 
  → Node B runs → reads state["messages"] 
  → sees Node A's output

That's it. State is the comms channel. The graph topology determines WHICH nodes see each other's output — if Node B is downstream of Node A, it reads what Node A wrote. If Node B is in a parallel branch, it doesn't see Node A's output unless they share state keys.

from langchain_core.messages import AIMessage

# Node A writes to state
def node_a(state):
    return {
        "research_findings": ["Wargame Engine is viable on aarch64"],
        "messages": [AIMessage(content="Research phase complete. 1 finding.")]
    }

# Node B reads Node A's output from state
def node_b(state):
    findings = state["research_findings"]  # ["Wargame Engine is viable on aarch64"]
    prior_messages = state["messages"]     # [..., AIMessage("Research phase complete...")]

    # Use findings as context for design work
    prompt = f"Based on these findings: {findings}, design the rendering layer."
    response = call_llm(prompt)
    return {"design_output": response}

4. The Reducers Control How State Merges

When a node returns a state update, LangGraph merges it into the current state using reducers. This is the mechanism that controls whether context accumulates or gets overwritten.

import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph.message import add_messages

class GameState(TypedDict):
    # Default reducer = OVERWRITE. Latest value wins.
    current_node: str                    # "node_14" overwrites "node_13"
    unit: dict                           # Full unit state, overwritten wholesale

    # add_messages = APPEND + deduplicate by message ID
    messages: Annotated[list, add_messages]  # All LLM messages accumulate

    # operator.add = APPEND lists
    traversal_history: Annotated[list, operator.add]  # ["node_01", "node_02", "node_14"]

    # Custom reducer = user-defined merge logic (sketched below)
    nodes: Annotated[dict, merge_node_status]  # Only changes to "status" field merge

This is critical: the reducer determines whether context accumulates or gets replaced. messages with add_messages grows with every node; current_node with the default reducer only ever shows the latest value; traversal_history with operator.add is an ever-growing log.
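
And merge_node_status is user-defined, not a LangGraph built-in; LangGraph calls it with the existing value and the node's update, and stores whatever it returns. A minimal sketch matching the "only status merges" comment above:

def merge_node_status(current: dict, update: dict) -> dict:
    """Custom reducer: fold per-node updates into the existing dict.

    Only the "status" field of each node entry may change;
    every other field is preserved from the current state.
    """
    merged = {name: dict(fields) for name, fields in current.items()}
    for node_name, fields in update.items():
        entry = merged.setdefault(node_name, {})
        if "status" in fields:
            entry["status"] = fields["status"]
    return merged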


5. The Practical Loop — Putting It Together

Here's what actually happens when a unit moves from node_13 to node_14:

Player presses A on RG:

  1. RG sends HTTP POST to Pi:
     {"action": "move", "unit": "rif", "target": "node_14"}

  2. Pi receives it → LangGraph graph.invoke(input, config)

  3. LangGraph enters node_14:
     a. Resolves effective config (base + override)
     b. Resolves access lists (fetch allowed context, skip blocked)
     c. Builds system prompt from state (unit role, phase, mission)
     d. Builds user prompt from state (task, context, history)
     e. Calls agent harness with prompt + tools
     f. Agent returns result
     g. Writes output locale to state
     h. Calls interrupt(): "Work complete. Where next?"

  4. LangGraph returns interrupt value to HTTP bridge

  5. Pi sends HTTP response to RG:
     {"state": {...}, "interrupt": {"message": "Where next?", ...}}

  6. RG renders updated state + interrupt prompt

  7. Repeat
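
Steps 3h–5 are LangGraph's interrupt mechanism. A minimal sketch using the real interrupt() / Command(resume=...) APIs (the state shape, thread ID, and resume value are illustrative; interrupts require a checkpointer):

from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.types import Command, interrupt

class GameState(TypedDict):
    current_node: str

def node_14(state: GameState):
    # ... do the node's work, write outputs to state ...
    choice = interrupt({"message": "Work complete. Where next?"})
    # Execution resumes HERE with the value from Command(resume=...).
    return {"current_node": choice}

builder = StateGraph(GameState)
builder.add_node("node_14", node_14)
builder.add_edge(START, "node_14")

# interrupt() needs a checkpointer so the paused run can be resumed later.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "unit-rif"}}

# First invoke runs until interrupt(); the HTTP bridge ships the payload
# (surfaced under result["__interrupt__"]) back to the RG.
result = graph.invoke({"current_node": "node_13"}, config)

# Player picks a node; the bridge resumes the paused graph with the choice.
graph.invoke(Command(resume="node_15"), config)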

Everything LangGraph does is mechanical: receive state, run function, merge updates, follow edges, checkpoint. Everything "soft" — context, prompts, comms — is your code inside the node function, reading from and writing to state.


Mapping Table: Soft Concepts → LangGraph Mechanics

Soft Concept | LangGraph Reality | Where It Lives
--- | --- | ---
Context | State keys you define. Node reads them, passes to LLM. | State dict
System prompt | String you build in the node function before calling the LLM. | Node function body
User prompt / task | State keys + constructed string. | State + node function
Communication between nodes | State updates via reducers. Node A writes, Node B reads. | State dict + reducer
Access gating | Lists in state. Node function fetches only listed sources. | State + node function
Context accumulation | Reducer controls merge behavior. add_messages = append, default = overwrite. | State schema
Agent config | State keys read by node function to select model/harness. | State dict
Interrupt / player choice | interrupt() pauses the graph. Command(resume=...) carries the choice back. | Node function + invoke
Traversal history | Appended list in state via the operator.add reducer. | State dict
Output locales | State keys written by node function, pointing to wiki/Forgejo. | State dict

What This Means for the Prototype

Every "configuration" we've discussed — agent config, access lists, output locales, phase labels — is just a state key. The node function reads it, acts on it, writes back. LangGraph doesn't enforce any of it. Your code does.

The system prompt for each node is built from state at runtime. It includes:

- The unit's role and identity (from state["unit"])
- The node's phase label (from state["nodes"][node_name]["phase"])
- The mission objective (from state["mission"])
- The access-gated context from prior nodes (via fetch_context())
- The unit's traversal history (from state["unit"]["traversal_history"])

None of this is "configured" in LangGraph. It's constructed in Python, inside the node function, from state. LangGraph provides the skeleton (graph + state + edges). You provide the muscles (prompts + context + logic).