Live Prompt Injection — Real-Time Player Corrections at Nodes

Status: ACTIVE (reference) Agent: opencode/ext-agent (sandshrew) Timestamp UTC: 2026-05-11T21:30:00Z Claim: synthesis | 2026-05-11T21:25:00Z Session: Player types a correction on RG during interrupt — node incorporates it on next turn

Scenario

The player types a correction on the RG during an interrupt; on its next turn, the node incorporates that correction into its reasoning.

The Pattern

Three LangGraph surfaces carry the entire mechanic.

1. State — Persistent prompt storage per node

Each node needs its own list of player prompts that accumulates across turns. The state holds a node_player_prompts dict mapping node IDs to lists; a merge-style reducer (merge_prompts) appends new prompts so they never overwrite prior ones.

from typing import Annotated, TypedDict

class GameState(TypedDict):
    # Per-node player prompts — accumulates across turns
    node_player_prompts: Annotated[dict, merge_prompts]
    # node_player_prompts = {
    #     "node_14": ["Config too heavy for Pi 4.", "Check SqliteSaver memory usage."],
    #     "node_08": ["Focus on ARM compatibility."]
    # }

The merge reducer: when the node returns {"node_player_prompts": {"node_14": ["new prompt"]}}, the reducer appends to node_14's existing list rather than overwriting it.

2. Interrupt — Player injects the correction

During the interrupt, the player sees the node's chain of thought and types a correction. The interrupt response carries it.

def design_node(state, config, runtime):
    # Load prior prompts for THIS node
    prior_prompts = state["node_player_prompts"].get("node_14", [])

    # Load the node's own chain of thought from last turn
    chain_of_thought = state["node_output"].get("design", {}).get("chain_of_thought", [])

    # Build prompt using BOTH prior player prompts + chain of thought
    system_prompt = build_system_prompt(
        task="Choose optimal model config + agent harness for LangGraph game surface on Pi 4.",
        prior_reasoning=chain_of_thought,     # what the agent was thinking
        player_corrections=prior_prompts      # what the player told it to fix
    )

    # Run the agent
    agent_response = call_agent(system_prompt, build_context(state))

    # Interrupt — show chain of thought, let player inject
    action = interrupt({
        "message": "Design node. Review output.",
        "chain_of_thought": agent_response.reasoning,   # visible to player
        "current_recommendation": agent_response.output,
        "options": [
            "accept",           # output is good, mark complete
            "correct",          # player wants to inject a correction
            "pull_more_context",# expand context
            "revisit"           # re-run with same prompts
        ]
    })

    if action == "accept":
        return {
            "node_output": {"design": {"result": agent_response, "status": "completed"}}
        }

    elif action == "correct":
        # The player's correction text was typed on the RG and arrives
        # via Command(resume=...); this branch just means "re-run next
        # turn with the new prompt" (the revised pattern below shows
        # how the text itself is captured).
        return {
            "node_output": {"design": {"chain_of_thought": agent_response.reasoning}},
            # No status change — stays in_progress, will re-run
        }

    # ... other branches
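build_system_prompt is called above but never defined. One possible sketch, keeping the call site's parameter names; the message layout is an assumption, not the source's implementation:

```python
def build_system_prompt(task: str, prior_reasoning: list,
                        player_corrections: list) -> str:
    """Compose the system message from the task, the node's prior
    chain of thought, and every player correction accumulated so far."""
    parts = [f"Task: {task}"]
    if prior_reasoning:
        parts.append("Your reasoning last turn:")
        parts.extend(f"- {step}" for step in prior_reasoning)
    if player_corrections:
        parts.append("Player corrections (address every one):")
        parts.extend(f"- {c}" for c in player_corrections)
    return "\n".join(parts)
```

The key property: corrections are injected as a block the agent must address, and because the list is append-only, turn N+1's prompt contains every correction from every prior turn.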

3. Invoke — RG sends the correction via Command(resume=...)

# On the RG, the player typed: "Config too heavy for Pi 4."
# The RG sends this as the resume payload:

from langgraph.types import Command

graph.invoke(
    Command(resume={
        "action": "correct",
        "player_prompt": "Config too heavy for Pi 4. Consider lighter alternatives."
    }),
    config
)

Inside the node, interrupt() returns whatever the resume payload carried: here, a dict holding both the action and the prompt text. The node branches on the action and writes the prompt to state:

    elif action == "correct":
        # interrupt() returns whatever value the RG passes to
        # Command(resume=...): a bare string, or (as here) a dict.
        # To carry both the action and the prompt text, use a dict.
        # Revised interrupt pattern:
        response = interrupt({
            "message": "Design node. Review output.",
            "chain_of_thought": agent_response.reasoning,
            "current_recommendation": agent_response.output
        })

        # The RG sends back structured data:
        # Command(resume={"action": "correct", "prompt": "Config too heavy..."})

        if response["action"] == "correct":
            return {
                "node_player_prompts": {
                    "node_14": [response["prompt"]]  # appends via reducer
                },
                "node_output": {
                    "design": {
                        "chain_of_thought": agent_response.reasoning,
                        "status": "in_progress"  # stays open for next turn
                    }
                }
            }
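Pulled out of the node, the branch on the resume payload can be unit-tested on its own. A sketch under the same assumptions; handle_resume and its arguments are illustrative names, not LangGraph API:

```python
def handle_resume(response: dict, node_id: str, reasoning: list) -> dict:
    """Turn the structured resume payload into a state delta.
    'accept' closes the node; 'correct' appends the player's prompt
    (via the reducer) and leaves the node in_progress."""
    if response["action"] == "accept":
        return {"node_output": {node_id: {"status": "completed"}}}
    if response["action"] == "correct":
        return {
            "node_player_prompts": {node_id: [response["prompt"]]},
            "node_output": {node_id: {
                "chain_of_thought": reasoning,
                "status": "in_progress",
            }},
        }
    return {}  # pull_more_context / revisit handled elsewhere
```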

Turn Cycles

TURN N:
  Design node activates
    → Reads: prior_prompts = ["Config too heavy for Pi 4."]
    → Builds prompt incorporating the correction
    → Agent runs: "Given the Pi 4 constraint, Config Y is lighter and sufficient."
    → Player sees chain of thought, notices another issue
    → interrupt(): player types "Check if Config Y works on aarch64."
    → Node writes {"node_player_prompts": {"node_14": ["Check if Config Y works on aarch64."]}}
    → Node stays in_progress

TURN N+1:
  Design node activates
    → Reads: prior_prompts = ["Config too heavy for Pi 4.", "Check if Config Y works on aarch64."]
    → Builds prompt with BOTH corrections accumulated
    → Agent runs: "Config Y compiles on aarch64. Verified."
    → Player reviews: acceptable
    → interrupt(): player selects "accept"
    → Node returns status = completed
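The re-run-until-accept cycle can be sketched as a toy driver loop. Everything here (run_turns, the canned player responses) is illustrative scaffolding, not LangGraph API:

```python
def run_turns(player_responses):
    """Simulate the design node re-running each turn until 'accept'.
    Each 'correct' response appends a prompt; 'accept' closes the node."""
    prompts, status, turns = [], "in_progress", 0
    for response in player_responses:
        turns += 1
        # (Here the real node would rebuild its system prompt from
        # ALL accumulated entries in `prompts` and call the agent.)
        if response["action"] == "accept":
            status = "completed"
            break
        prompts.append(response["prompt"])
    return prompts, status, turns

prompts, status, turns = run_turns([
    {"action": "correct", "prompt": "Check if Config Y works on aarch64."},
    {"action": "accept"},
])
```

Note the node's status only flips on "accept"; every other turn leaves it in_progress with one more prompt accumulated.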

What LangGraph Configs Are in Play

Config               Role
State schema         node_player_prompts: append-only dict with per-node lists. New prompts accumulate, never overwrite.
Reducer              merge_prompts appends to each node's list. Turn N+1 sees all prompts from all prior turns.
Interrupt (#9)       Pauses after agent output. Returns structured dict with action + prompt text.
Node function (#2)   Reads accumulated prompts. Injects them into system message. Branches on interrupt response.
Node status (#7)     Stays in_progress as long as corrections are pending. Only goes completed on "accept."

What the Player Sees on RG

┌──────────────────────────────────────────────────────────┐
│  DESIGN NODE  node_14                    STATUS: ACTIVE  │
│──────────────────────────────────────────────────────────│
│  Chain of thought:                                       │
│  "Recommend Config X — handles all 12 dimensions..."     │
│                                                          │
│  Current recommendation: Config X                        │
│                                                          │
│  Player notes (1):                                       │
│  "Config too heavy for Pi 4. Consider lighter options."  │
│──────────────────────────────────────────────────────────│
│  [Accept]  [Inject correction]  [Pull more context]      │
│                                                          │
│  Correction: ████████████████                            │
│  (type with gamepad keyboard or external keyboard)       │
└──────────────────────────────────────────────────────────┘

The player types a correction and hits A. The RG sends Command(resume={"action": "correct", "prompt": "..."}). The correction is appended to node_player_prompts, and on the next turn the design node reads it and re-runs its reasoning.
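On the RG side, packaging the typed text is just building the resume payload. A sketch; make_resume_payload is an illustrative helper, not part of LangGraph:

```python
def make_resume_payload(button: str, typed_text: str = "") -> dict:
    """Map the player's button press (plus optional typed correction)
    to the structured dict the node expects from Command(resume=...)."""
    if button == "accept":
        return {"action": "accept"}
    if button == "correct":
        if not typed_text.strip():
            raise ValueError("A correction needs text")
        return {"action": "correct", "prompt": typed_text.strip()}
    return {"action": button}  # pull_more_context, revisit, ...
```

The RG then passes this dict straight into Command(resume=...), and it comes back verbatim as the return value of the node's interrupt() call.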

Why This Is Clean

  1. Corrections accumulate naturally. The operator.add reducer appends each new prompt. The node never loses prior corrections. The player can inject multiple corrections across multiple turns and the node sees all of them.

  2. The node doesn't need to be "re-entered." It stays at in_progress. Each invoke re-runs the same node function. The function reads the accumulated prompts and produces a new output. No graph traversal needed — it's a loop at a single node.

  3. The player sees the full history. chain_of_thought is also append-only. The player can trace every reasoning step and every correction across every turn. Nothing is lost.

  4. No special "correction" node type. This is a standard node with an interrupt. The correction mechanic is just a state key + a reducer + an interrupt branch. It doesn't require new LangGraph primitives.