Q-02 Convergence: LangGraph on Bun + Hermes on Bun

Status: ACTIVE (converging on decision)
Agent: opencode/ext-agent (sandshrew)
Timestamp UTC: 2026-05-12T02:30:00Z
Session: Discovery that LangGraph JS has full feature parity — collapses Q-01 and Q-02

Discovery: LangGraph on Bun Has Full Feature Parity

LangGraph's JavaScript/TypeScript implementation (@langchain/langgraph) supports every feature we mapped for the Python version:

| Feature | Python API | Bun/JS API | Status |
| --- | --- | --- | --- |
| StateGraph | StateGraph(State) | new StateGraph(State) | ✅ |
| State schema | TypedDict + Annotated | Annotation.Root or Zod schema | ✅ |
| Nodes | add_node("name", fn) | .addNode("name", fn) | ✅ |
| Static edges | add_edge("a", "b") | .addEdge("a", "b") | ✅ |
| Conditional edges | add_conditional_edges("a", fn) | .addConditionalEdges("a", fn) | ✅ |
| Command (update+goto) | Command(update=..., goto=...) | new Command({ update, goto }) | ✅ |
| Command (resume) | Command(resume=...) | new Command({ resume }) | ✅ |
| Command.PARENT | Command(goto=..., graph=Command.PARENT) | new Command({ goto, graph: Command.PARENT }) | ✅ |
| Send (fan-out) | Send("node", state) | new Send("node", state) | ✅ |
| interrupt() | interrupt("prompt") | interrupt("prompt") | ✅ |
| Checkpointer | SqliteSaver / MemorySaver | SqliteSaver / MemorySaver | ✅ |
| Streaming | astream() | stream() | ✅ |
| Reducers | operator.add, custom | Annotation with custom reducer | ✅ |
| Messages state | add_messages | MessagesAnnotation (same behavior) | ✅ |
| Runtime context | runtime.context | config.context | ✅ |
| Recursion limit | config recursion_limit | config recursionLimit | ✅ |
| Node caching | CachePolicy(ttl=...) | cachePolicy: { ttl } | ✅ |

Full parity confirmed. Everything in the 12-dimension node config catalog, every pattern we designed (poller/stager, curation, live prompt injection, nested menus), every state key — all implementable in TypeScript on Bun.

Collapse of Q-01 and Q-02

With LangGraph on Bun, the game backend becomes one language, one runtime, one process:

Bun process on Pi:
  ├── LangGraph (@langchain/langgraph)      TypeScript
  ├── HTTP server (Bun.serve() or Hono)     TypeScript
  ├── Agent harness (Hermes / Pi agents)    TypeScript, same process
  └── SqliteSaver (checkpointer)            @langchain/langgraph-checkpoint

No Python. No venv. No language boundary. No subprocess overhead. No JSON serialization middleman between graph and agent. One bun run command.

Agent Harness Trade-offs (Re-evaluated)

With zero-boundary integration, the comparison shifts:

| Harness | Pros | Cons | Zero-boundary? |
| --- | --- | --- | --- |
| Hermes on Bun | Rich tool stack (read/write/bash/web_fetch). Multimodal. Free Qwen 3.6+ via OAuth (Hermes Portal). Mature, tested tools. | Heavier than Pi agents. | ✅ — same Bun process |
| Pi agents on Bun | Lightweight, trackable, modular. Already running in d3-tui. | Would need extensions built or ported. Risky for first iteration — may lack tools needed for real work. | ✅ — same Bun process |
| OpenCode | Generous model limits. Your daily driver. | Mac-hosted — always HTTP boundary. YOLO permissions needed for headless operation. | ❌ — separate process, HTTP boundary |
| Raw model API | Simplest. No harness. | No tools. Build everything from scratch. | ✅ but tool-less |

MjF's Considerations (Session Notes)

Assessment

Hermes on Bun with Qwen 3.6+ is the strongest candidate for the prototype.

Reasons:

1. Zero-boundary. Runs in the same Bun process as LangGraph. No subprocess, no HTTP, no JSON middleman. Import and call.
2. Mature tool stack. Read, write, bash, web_fetch, multimodal — tools that are tested and working, not built from scratch.
3. Free compute. Qwen 3.6+ via Hermes Portal OAuth provides unlimited model access. No API key cost, no rate limit anxiety during development.
4. One runtime. Bun for everything. No Python at all. The arsenal at /opt/pearl/bin/ has one entry: Bun. Q-01 simplifies dramatically.

Pi agents remain as a future layer — they can be wired in later when their extension ecosystem matures. For the prototype, the goal is to prove the graph architecture works. Hermes provides the richest tool surface for the least integration effort.