Omni Config & Pi 4 Constraints — What Must Be Staged, What Layers In

Status: ACTIVE (reference)
Agent: opencode/ext-agent (sandshrew)
Timestamp UTC: 2026-05-11T22:05:00Z
Claim: synthesis | 2026-05-11T22:00:00Z
Session: Development order — can we compile a maximally flexible graph once and layer features through state? What are the Pi 4 limits?

The Question

Do we need to decide everything at the outset when we compile the graph, or can we layer features in later through state? Is there an "omni config" that compiles once and supports every prototype variation? Can the Pi 4 handle that?

Answer: State Schema Is the Omni Config

Compile the schema once. Layer the values forever.

LangGraph's compile/run split is sharp: at compile time you define the skeleton (state shape, node names, edges, reducers). At runtime you write values into the state keys defined by that skeleton. The skeleton is small and fixed. The values are limitless and layered.

The "omni config" is: define ALL state keys in the TypedDict at graph definition. Give every key a sensible default. Then layer in real configs, real access lists, real output summaries over time through gameplay — without ever recompiling the graph.

from typing import Annotated, TypedDict

def merge_prompts(left: dict, right: dict) -> dict:
    # Reducer sketch (body illustrative — the schema only names it):
    # accumulate per-node prompt lists rather than overwrite.
    return {k: left.get(k, []) + right.get(k, []) for k in {**left, **right}}

def merge_nodes(left: dict, right: dict) -> dict:
    # Reducer sketch: shallow-merge per-node records (phase, output, status).
    return {**left, **right}

class GameState(TypedDict):
    # ── Fixed at compile (skeleton) ──
    # These keys exist. What goes INTO them is runtime.

    unit: dict                              # config #4 — model, harness, tools
    node_player_prompts: Annotated[dict, merge_prompts]  # live prompt injection
    nodes: Annotated[dict, merge_nodes]     # config #3, #6, #7 — phase, output, status
    routed_context: dict                    # cross-node curated context
    turn: int                               # turn counter
    phase: str                              # active phase
    mission: dict                           # mission objective and criteria

    # Every config dimension from the catalog has a key.
    # Every key starts with a default. Every default can be overridden at runtime.

What's Staged at Compile Time

| Element | Must Decide Now | Can Layer Later | Why |
| --- | --- | --- | --- |
| State keys | Yes | No — schema is fixed | TypedDict is the contract between all 30 nodes |
| Node count & names | Yes | No | 30 add_node() calls — can't add more without recompiling |
| Edge topology | Yes | Partially | Static edges fixed. Conditional edges can reroute based on state. |
| Node functions | Yes (signature) | Yes (body) | Function signature must match. What the function does can evolve. |
| Reducers | Yes | No | Merge behavior set at compile. Dict / operator.add / add_messages. |
| Checkpointer type | Yes | No | SqliteSaver or MemorySaver — set at compile. |
| Phase labels | No | Yes | Just a string in state. Assign, reassign at runtime. |
| Agent config | No | Yes | Just a dict in state. Change model/harness/tools mid-game. |
| Access lists | No | Yes | Just lists in state. Add/remove entries at runtime. |
| Output summaries | No | Yes | State dicts. Written when node completes. |
| Player prompts | No | Yes | Appended to state lists. Never overwritten, always accumulate. |
| Menu structures | No | Yes | RG reads state, builds menus dynamically. No compile dependency. |
| Interrupt options | No | Yes | Node function builds interrupt at runtime from state. |
| Curate/point logic | No | Yes | Agent logic inside node function. Evolves without recompile. |
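
The "Partially" entry for edge topology is what makes layering work: the edge map is fixed up front, but which edge fires is read from state at runtime. A library-free sketch of that split — node names echo this document; the edge labels ("escalate", "continue") and the routing condition are illustrative assumptions, not the real game logic:

```python
# The topology (which edges exist) is fixed at "compile".
EDGE_MAP = {
    "node_14": {"escalate": "node_22", "continue": "node_15"},
}

def route_after_node(state: dict) -> str:
    """Conditional-edge function: picks an edge label from runtime state values."""
    return "escalate" if state.get("phase") == "combat" else "continue"

def next_node(current: str, state: dict) -> str:
    """Resolve the next node from the fixed map plus the runtime decision."""
    return EDGE_MAP[current][route_after_node(state)]

# Same compiled topology, two different runtime outcomes:
print(next_node("node_14", {"phase": "combat"}))   # node_22
print(next_node("node_14", {"phase": "explore"}))  # node_15
```

Changing the condition inside route_after_node is a function-body change (layerable); adding a third destination to EDGE_MAP is a topology change (recompile).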

Can the Pi 4 Handle the Omni Config?

Yes, trivially. The compile-time skeleton is small. The runtime values are the bulk, and they grow incrementally through gameplay.

| Metric | Bound | Reality for 30 Nodes |
| --- | --- | --- |
| State schema size | 12-15 keys | TypedDict definition — ~50 lines of Python |
| Compile time | 30 nodes + edges | ~1-3 seconds on Pi 4 aarch64 |
| State serialization | Per invoke — JSON of active state | <1KB for early turns, grows with history. ~10KB after 100 turns. |
| Checkpoint I/O | SqliteSaver writes per super-step | Sub-millisecond for <10KB writes |
| State key defaults | One dict per node at graph init | 30 × ~200 bytes = 6KB total |
| Graph memory | Compiled graph in RAM | <1MB. LangGraph graphs are function pointers + metadata, not data. |

The Pi 4 has 4GB RAM. The d3-tui container uses some. Forgejo uses some. The game backend (LangGraph + HTTP bridge) will use <100MB. The constraint is not RAM. It's not CPU. It's not I/O. The constraint is purely: what do we implement first.
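
The serialization figures above are easy to sanity-check. A minimal sketch that builds a stub state using the schema keys from the TypedDict (all values are placeholders, not real configs) and measures the JSON payload a checkpointer would write:

```python
import json

# Stub state: schema keys from the document, placeholder values.
state = {
    "unit": {"model": "stub", "harness": "stub", "tools": []},
    "node_player_prompts": {},
    "nodes": {
        f"node_{i:02d}": {"phase": "idle", "output": None, "status": "pending"}
        for i in range(30)
    },
    "routed_context": {},
    "turn": 0,
    "phase": "setup",
    "mission": {"objective": "", "criteria": []},
}

size_bytes = len(json.dumps(state).encode("utf-8"))
print(size_bytes)  # roughly 2KB for 30 stub node records — well under the 10KB bound
```

Even with 30 node records pre-populated, the serialized state sits comfortably inside the sub-millisecond write regime the table claims for SqliteSaver.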

The Development Order Implication

Because the omni config compiles once and accepts layered values:

Session 1: Compile graph with 30 nodes, all state keys, stubs for functions.
           Test: graph compiles. Unit moves from one node to another. State updates.
           Deliverable: working skeleton.

Session 2: Add agent config to state. Wire one node to call the agent.
           Test: node_14 calls agent, writes output, interrupt pauses.
           Deliverable: one live node.

Session 3: Add output summaries. Node writes summary to state. RG renders it.
           Test: player sees summary on RG, zooms in on sections.
           Deliverable: output protocol working.

Session 4: Add cross-node context curation.
           Test: player routes curated context from node_14 to node_22.
           Deliverable: curation mechanic.

...and so on.

Each session adds values to keys that already exist in the schema. Nothing requires graph recompilation. The skeleton is constant. The muscles grow.
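
The Session 1 pattern — stubs that write only into pre-declared keys, folded in by reducer-style merges — can be exercised without the library. A sketch, with the update shape as an illustrative assumption:

```python
def stub_node(name: str):
    """Build a stub node function that writes a status record into its own slot."""
    def fn(state: dict) -> dict:
        # Partial update: touches only the pre-declared "nodes" key,
        # never adds a new top-level key.
        return {"nodes": {name: {"phase": "done", "output": None, "status": "complete"}}}
    return fn

state = {"nodes": {}, "turn": 0, "phase": "setup"}

for name in ("node_01", "node_02", "node_03"):
    update = stub_node(name)(state)
    state["nodes"] = {**state["nodes"], **update["nodes"]}  # dict-merge reducer behavior
    state["turn"] += 1

print(sorted(state["nodes"]))  # ['node_01', 'node_02', 'node_03']
print(state["turn"])           # 3
```

Sessions 2-4 replace stub bodies with real agent calls, summaries, and curation — the loop and the keys never change.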

What Would Exceed Pi 4 Capacity

These would be problems, none of which apply to this prototype:

| Would Break | Why | Not Our Problem |
| --- | --- | --- |
| 1,000+ nodes | Compile time would be minutes | We have 30 |
| 10+ concurrent invoke() streams | SqliteSaver write contention | Single unit, single thread |
| State objects >10MB | JSON serialization + checkpoint I/O would choke | Our state is <100KB even with full history |
| Streaming 60fps to RG | HTTP + Tailscale latency would miss frames | Turn-based, not real-time |
| LLM calls inside every invoke | Kimi/MiniMax rate limits + latency | Stubs for most nodes, agent calls only when work is done |

Bottom Line

Define all 12 config dimensions in the state schema at compile. Give every key a default. Compile once. Then layer in features session by session through state values. The Pi 4 will not be the bottleneck. The schema is the omni config. The values are the game.