Wiring Probe Plan — Pi 4 ↔ RG40XXV Real-Time Bridge

Status: ACTIVE (probing needed)
Agent: opencode/ext-agent (sandshrew)
Timestamp UTC: 2026-05-11T23:00:00Z
Claim: analysis | 2026-05-11T22:55:00Z
Session: What wiring is needed between Pi and RG? How to probe it cleanly?

Prior Context

What We Know

| Fact | Detail |
| --- | --- |
| Network | Both on Tailscale. Low-latency mesh, no port forwarding needed. |
| RG runtime | Python 3.12 + pygame 2.5.2 + gamepad at /dev/input/js0. Can make HTTP calls. |
| Pi runtime | Docker containers (d3-tui, forgejo). Python TBD (needs verification). |
| Architecture | RG = thin rendering client. Pi = state authority (LangGraph + checkpointer). |
| Save state | Canonical save = LangGraph SqliteSaver on Pi. RG "save" = screenshot + bookmark. |

What We Don't Know

| Unknown | Why It Matters |
| --- | --- |
| Tailscale latency between RG and Pi | Determines whether polling is fast enough or streaming is needed |
| State payload size at 36 nodes | JSON serialization time + HTTP transfer time. Affects render loop. |
| Invoke roundtrip time | Player presses A → invoke → state update → RG re-renders. Must feel responsive. |
| Polling vs streaming | Polling is simpler but adds latency. Streaming is real-time but more complex. |
| HTTP framework choice | FastAPI (async, lightweight) vs Flask (simpler, synchronous). Affects Pi resource usage. |
| Python on Pi | What version? Is requests or aiohttp available? Can we install packages? |
| How RG detects state changes | Poll every 500ms? WebSocket push? Server-sent events? |

Probe Plan — 5 Tests, Ordered by Dependency

Probe 1: Network Baseline

# On Pi:
ping -c 20 100.119.202.114

# Expected: <10ms average over Tailscale mesh
# If >50ms: polling approach may feel laggy. Consider streaming.

Probe 2: Python Verification on Pi

# SSH into Pi
python3 --version
pip3 list | grep -E "fastapi|flask|requests|aiohttp|langgraph"
# What's already installed? What needs pip?

Probe 3: Minimal HTTP Roundtrip

# RG runs a test script:
import requests, time
start = time.time()
r = requests.post("http://100.120.38.37:8000/ping", json={"test": "hello"}, timeout=5)
elapsed = time.time() - start
print(f"Roundtrip: {elapsed*1000:.0f}ms, response: {r.json()}")

# Pi runs a minimal FastAPI server:
from fastapi import FastAPI
app = FastAPI()
@app.post("/ping")
async def ping(data: dict):
    return {"pong": data}

Measures: HTTP overhead over Tailscale. Baseline for all other probes.
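To actually serve the FastAPI snippet above, it needs to be saved as a module and launched with uvicorn. A minimal launch sketch (the filename `ping_server.py` is an assumption, not from the plan):

```shell
# On the Pi — install the stack if Probe 2 shows it missing, then serve on port 8000
pip3 install fastapi uvicorn
uvicorn ping_server:app --host 0.0.0.0 --port 8000
```

Binding to 0.0.0.0 lets the server accept connections arriving over the Tailscale interface, not just localhost.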

Probe 4: State Fetch Test

# RG fetches a mock state of 36 nodes + 3 units
import requests, time

start = time.time()
r = requests.get("http://100.120.38.37:8000/state", timeout=5)
elapsed = time.time() - start
state = r.json()
print(f"Fetch: {elapsed*1000:.0f}ms, payload: {len(r.content)} bytes, nodes: {len(state['nodes'])}")

Measures: how large is a realistic state payload over HTTP? Is JSON serialization a bottleneck?
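Probe 4 needs the Pi to serve a realistic mock state. A sketch of a 36-node payload generator, for a rough byte-size estimate before wiring anything up (the node and unit field names here are illustrative placeholders, not the real schema):

```python
import json
import random

def mock_state(num_nodes=36, num_units=3):
    # State shaped like the game state: nodes keyed by id, plus a small unit list.
    # Field names ("status", "owner", "hex", ...) are assumptions for sizing only.
    nodes = {
        str(i): {
            "status": random.choice(["pending", "active", "done"]),
            "owner": None,
            "terrain": "clear",
        }
        for i in range(1, num_nodes + 1)
    }
    units = [
        {"id": name, "hex": str(random.randint(1, num_nodes))}
        for name in ("rif", "eng", "hq")[:num_units]
    ]
    return {"turn": 4, "nodes": nodes, "units": units}

payload = json.dumps(mock_state())
print(f"36-node mock state: {len(payload)} bytes")
```

Serving this dict from the `/state` route gives Probe 4 a payload in the right size class without needing LangGraph installed yet.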

Probe 5: Invoke + Poll Loop

# RG invokes a node (moves a unit), then polls until state changes
import requests, time

BASE = "http://100.120.38.37:8000"

def get_state():
    return requests.get(f"{BASE}/state", timeout=5).json()

def invoke(payload):
    return requests.post(f"{BASE}/invoke", json=payload, timeout=30).json()

start = time.time()
old_state = get_state()
invoke({"unit": "rif", "action": "move", "target": "17"})
while True:
    new_state = get_state()
    if new_state["nodes"]["17"]["status"] != old_state["nodes"]["17"]["status"]:
        break  # state changed: node completed
    time.sleep(0.5)

# Total time from invoke to state update visible
print(f"Invoke to visible update: {(time.time() - start)*1000:.0f}ms")

Measures: end-to-end latency of the full invoke cycle. Is 500ms polling fast enough?

Save State Protocol

The Pi owns the canonical save. The RG saves a visual bookmark only.

On the Pi (LangGraph)

# LangGraph checkpointer automatically saves after every super-step
# No explicit "save" needed: state is always persisted

# Player can also tag the current state with a name. (Checkpoint IDs in
# LangGraph are auto-generated, so the name is stored in the state itself
# via update_state; "checkpoint_name" must be a key in the state schema.)
graph.update_state(config, {"checkpoint_name": "Before design review"})

# Later: walk the history and time-travel to the named checkpoint
for snapshot in graph.get_state_history(config):
    if snapshot.values.get("checkpoint_name") == "Before design review":
        state = graph.get_state(snapshot.config)  # config carries checkpoint_id
        break

On the RG (Visual Bookmark)

Save → RG takes screenshot (640×480 pygame surface → PNG)
     → RG writes save metadata: {hex, unit, turn, timestamp}
     → RG sends "create checkpoint" to Pi → Pi names the checkpoint
     → RG shows: "Saved. Checkpoint: Turn 4, Rif at Hex 17."

The RG save is a visual bookmark — "this is where I was looking." The Pi save is the state snapshot — "this is exactly where the game was." The RG save helps the player resume. The Pi save IS the resume.
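The RG-side half of this protocol can be sketched as a small metadata builder plus the save flow around it. The state shape (a `units` list and `turn` counter) is an assumption; the metadata fields follow the `{hex, unit, turn, timestamp}` spec above:

```python
import time

def make_bookmark_meta(state):
    """Build the save metadata the RG writes next to its screenshot.
    Assumes state has a units list and a turn counter (hypothetical shape)."""
    unit = state["units"][0]
    return {
        "hex": unit["hex"],
        "unit": unit["id"],
        "turn": state["turn"],
        "timestamp": time.time(),
    }

# Save flow (sketch):
#   1. pygame.image.save(screen, "save_turn4.png")   # 640x480 surface -> PNG
#   2. meta = make_bookmark_meta(state)              # write as JSON next to the PNG
#   3. requests.post("http://100.120.38.37:8000/checkpoint", json=meta)
#      -> Pi names the checkpoint and owns the canonical save
```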

Wiring Architecture (After Probes Confirm)

┌─────────────────────────────────────────────────────────┐
│  RG40XXV                    │  Pi 4                      │
│                             │                            │
│  Wargame Engine             │  FastAPI (port 8000)       │
│  ├── Event loop             │  ├── GET  /state           │
│  ├── Hex grid rendering     │  │    → returns full state  │
│  ├── Sprite management      │  ├── POST /invoke          │
│  ├── Input → messaging      │  │    → runs graph, returns │
│  │   └── on_click(hex)      │  │      updated state      │
│  │       → POST /invoke     │  ├── POST /checkpoint      │
│  │       → poll GET /state  │  │    → creates named save  │
│  │       → re-render        │  ├── GET  /health          │
│  └── Save button            │  │    → connectivity check  │
│      → screenshot PNG       │  └── LangGraph graph       │
│      → POST /checkpoint     │      ├── 36 nodes          │
│                             │      ├── SqliteSaver        │
│                             │      └── Agent calls        │
└─────────────────────────────────────────────────────────┘

The RG never holds LangGraph state. It fetches what it needs to render. It sends what the player does. The Pi is the sole authority.

Answers Needed Before Build

| Question | Probe |
| --- | --- |
| What's the Tailscale latency? | Probe 1 |
| Is Python available on Pi? What packages? | Probe 2 |
| What's the HTTP roundtrip time? | Probe 3 |
| How large is a 36-node state payload? | Probe 4 |
| What's the end-to-end invoke + poll latency? | Probe 5 |
| Polling or streaming? | Decided after Probe 4 + Probe 5 |
| FastAPI or Flask? | Decided after Probe 2 |
---

Probe Results (Live — 2026-05-11T23:05:00Z)

Probe 1: Network Baseline ✅

10 packets, 0% loss
min: 3.1ms | avg: 8.6ms | max: 49.6ms (first ping only)
Steady state: 3-5ms

Result: Sub-5ms latency. Polling is the correct approach. No WebSocket/streaming complexity needed. Even 50ms polling intervals would feel instant.

Probe 2: Python on Pi ✅

| Check | Result |
| --- | --- |
| Python | 3.13.5 (newer than RG's 3.12.8) |
| requests | 2.32.3 ✅ already installed |
| Flask | ❌ not installed (types present, no framework) |
| FastAPI | ❌ not installed |
| uvicorn | ❌ not installed |
| LangGraph | ❌ not installed |
| RAM | 3.7GB total, 582MB used, 3.1GB available |

Result: Strong baseline. Need pip install langgraph fastapi uvicorn. RAM is plentiful.

Decisions From Probes

| Decision | Answer | Based On |
| --- | --- | --- |
| Polling or streaming? | Polling | 3-5ms latency; streaming adds complexity for no benefit |
| HTTP framework? | FastAPI + uvicorn | Lightest Python web stack. Async, auto-docs. |
| Polling interval? | 100ms (10 polls/sec) | 3-5ms latency means 100ms polling is imperceptible |
| State payload concern? | Not a bottleneck | ~15KB JSON over a 3ms link is negligible |
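Given the 100ms decision, the RG render loop can interleave polls without blocking rendering. A sketch with injected callables so the timing logic is testable (`get_state`, `render`, and `should_quit` are hypothetical hooks; the real loop would sit inside the pygame event loop and use `pygame.time.Clock`):

```python
import time

POLL_INTERVAL = 0.1  # 100ms, per the decision table above

def run_loop(get_state, render, should_quit, clock=time.monotonic):
    """Poll the Pi every POLL_INTERVAL seconds; re-render only when state changes."""
    last_poll = float("-inf")  # force an immediate first poll
    state = None
    while not should_quit():
        now = clock()
        if now - last_poll >= POLL_INTERVAL:
            last_poll = now
            new_state = get_state()
            if new_state != state:  # cheap dict comparison: re-render on change
                state = new_state
                render(state)
        time.sleep(0.01)  # keep the loop light between polls
    return state
```

Comparing whole state dicts is fine at ~15KB payloads; if payloads grow, a server-provided version counter would make the change check O(1).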