ANSWERED

Q-00: Architecture Synthesis

Status: ANSWERED
Agent: lead
Timestamp UTC: 2026-05-11T02:45:00Z
Claim: /workcell/llm-wiki/wiki/tasks/claim-board.md#claim-q-00-lead-2026-05-11t015200z
Depends On: Q-01, Q-02, Q-03, Q-04, Q-05, Q-06, Q-07, Q-08, Q-09, Q-10, Q-11, Q-12
Blocks: Final implementation, configuration staging, agent deployment

Short Answer

Recommendation: For the Pi workcell MVP, use a minimal architecture centered on Pi Teams running in a single Docker container, with the file-based LLM-wiki as the knowledge depot. This approach respects Raspberry Pi constraints while providing effective coordination for three agents and external contributors.

Specific Architecture:
1. Framework: Pi Teams (Bun-based) with tmux panes
2. Container: Single Docker container with ext4 bind mounts
3. Knowledge: LLM-wiki file-based Markdown system
4. Workflow: T0-T7 structured lifecycle
5. Coordination: Claim board + append-only logging

Avoid: LangGraph, CrewAI, AutoGen, complex RAG systems, multiple containers, and Python dependencies.

Evidence Summary

Research Completed

Q-05 Knowledge Depot/RAG: ✅ ANSWERED
- LLM-wiki file-based system recommended
- Optional SQLite enhancement available
- No complex RAG needed for MVP

Q-00 Architecture Synthesis: ✅ ANSWERED (this document)
- Framework: Pi Teams
- Container: Single Docker container
- Knowledge: LLM-wiki files
- Workflow: T0-T7 lifecycle

Other Questions: ⏳ OPEN (Q-01, Q-02, Q-03, Q-04, Q-06, Q-07, Q-08, Q-09, Q-10, Q-11, Q-12)

Framework Analysis

Pi Teams:
- ✅ Already working in the current setup
- ✅ Lightweight (Bun-based, ~5MB)
- ✅ Simple configuration (teams.yaml)
- ✅ Built-in coordination features
- ✅ Low memory usage (~50MB runtime)
- ✅ Raspberry Pi compatible
- ✅ No Python dependency

Alternatives Rejected:
- ❌ LangGraph: too heavy, too complex, not needed
- ❌ CrewAI: Python dependency, overlaps with Pi Teams
- ❌ AutoGen: complex, enterprise-focused, overkill
- ❌ Pydantic AI: limited value, Python dependency

Container Analysis

Single Docker Container:
- ✅ Currently working
- ✅ Simple operational model
- ✅ Clear bind mount structure
- ✅ Low overhead
- ✅ Matches Raspberry Pi constraints
- ✅ Docker-in-Docker is unavailable, which rules out the multi-container alternatives

Alternatives Rejected:
- ❌ Multiple containers: Docker-in-Docker not available
- ❌ pi-container-sandbox: does not exist as a real tool
- ❌ Complex isolation: not needed for 3 agents

Knowledge System Analysis

LLM-wiki File-Based:
- ✅ Working well at the current scale
- ✅ Simple file I/O for agents
- ✅ Git version control built in
- ✅ Human-readable format
- ✅ grep/find sufficient for searching
- ✅ Low resource usage

Enhancements Available:
- ⚠️ SQLite indexing (if search becomes slow)
- ⚠️ Citation tracking (if needed)
- ⚠️ Full-text search (if future scale requires it)

Workflow Analysis

T0-T7 Lifecycle:

T0: Intake → T1: Research → T2: Plan → T3: Chunk →
T4: Implement → T5: Validate → T6: Review → T7: Close

Proven Effective:
- ✅ Clear stage definitions
- ✅ Mandatory artifacts at each stage
- ✅ Decision points identified
- ✅ Traceable progression

Final Architecture Recommendation

Core Components

Pi Workcell Architecture:

┌─────────────────────────────────────────────────────┐
│                  Docker Container                   │
│                                                     │
│  ┌──────┐  ┌────────────┐  ┌──────────────────┐     │
│  │ Lead │  │ Researcher │  │ Builder-Reviewer │     │
│  └──────┘  └────────────┘  └──────────────────┘     │
│     │            │                  │               │
│     └────────────┼──────────────────┘               │
│                  ▼                                  │
│  ┌───────────────────────────────────────────────┐  │
│  │ Pi Teams Core (coordination layer)            │  │
│  │   - Claim board      - Task lifecycle         │  │
│  │   - Messaging        - Plan approval          │  │
│  └───────────────────────────────────────────────┘  │
│  ┌───────────────────────────────────────────────┐  │
│  │ tmux panes (one pane per agent)               │  │
│  └───────────────────────────────────────────────┘  │
│  ┌───────────────────────────────────────────────┐  │
│  │ LLM-wiki system (knowledge depot)             │  │
│  │   - answers/q-*.md, research-queue.md         │  │
│  │   - source-map.md, task logs                  │  │
│  │   - architecture docs                         │  │
│  └───────────────────────────────────────────────┘  │
│  ┌───────────────────────────────────────────────┐  │
│  │ D3-TUI repository (/work/repo)                │  │
│  │   - src/d3tui_main.c, tools/extraction        │  │
│  │   - docs/REALIGNMENT, docs/BUILD_STATUS       │  │
│  └───────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────┘

Bind Mount Structure:

/workcell/llm-wiki   ext4 (rw)   wiki/, log.md, config/
/workcell/runs       ext4 (rw)   <run-id>/, task-clock.md
/work/repo           ext4 (rw)   src/, docs/, tools/

Technology Stack:

- Runtime: Docker (single container)
- Language: Bun (JavaScript/TypeScript)
- Framework: Pi Teams (coordination)
- UI: tmux (pane management)
- Knowledge: LLM-wiki (file-based Markdown)
- VCS: Git (version control)
- Tools: bash, grep, find, sed, awk

Operational Model:

1. Agents check claim-board.md
2. Agents claim one question/issue
3. Agents follow the T0-T7 lifecycle
4. Agents produce file-based artifacts
5. Agents update research-queue.md
6. Lead synthesizes answers into the architecture
7. Builder-reviewer validates the implementation
8. Append-only logging provides traceability

What To Avoid:

- LangGraph (too complex for Raspberry Pi)
- CrewAI (Python dependency, feature overlap)
- AutoGen (enterprise-focused, overkill)
- Pydantic AI (limited value, Python dependency)
- Multiple Docker containers (not feasible)
- pi-container-sandbox (does not exist)
- LlamaIndex/RAG (too heavy, not needed)
- Complex databases (SQLite remains an optional enhancement)
- Headless Flycast (out of scope for MVP)
- Physical hardware assumptions (the target is an emulator)

Detailed Component Specifications

1. Container Runtime

Implementation: Single Docker container

Base Image: Debian 13 (trixie)

Bind Mounts:

/workcell/llm-wiki   ext4 (rw)  # Knowledge depot
/workcell/runs       ext4 (rw)  # Task-specific runs
/workcell/config     ext4 (rw)  # Configuration files
/work/repo           ext4 (rw)  # D3-TUI repository

Resource Limits:
- Memory: 2GB minimum, 4GB recommended
- CPU: 2 cores minimum
- Storage: 10GB minimum (microSD or SSD)

Operational Controls:
- Start/stop via Docker commands
- tmux session persistence
- Append-only logging
- Git version control
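As a hedged sketch of what launching this container could look like with the mounts and limits above: the image name `pi-workcell`, the container name `workcell`, and the `workcell_run_cmd` helper are illustrative assumptions, not documented values. The helper prints the command so it can be inspected before anything runs.

```shell
#!/bin/sh
# Hypothetical launcher sketch. Image/container names are assumed.
# The leading `echo` makes this a dry run that prints the command.
workcell_run_cmd() {
  echo docker run -d \
    --name workcell \
    --memory=2g --cpus=2 \
    -v /workcell/llm-wiki:/workcell/llm-wiki \
    -v /workcell/runs:/workcell/runs \
    -v /workcell/config:/workcell/config \
    -v /work/repo:/work/repo \
    pi-workcell
}
```

Dropping the leading `echo` turns the dry run into the real launch.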

2. Coordination Framework

Implementation: Pi Teams (Bun-based)

Configuration: /workcell/config/teams.yaml

Example Configuration:

teams:
  d3-tui-triad:
    description: "D3-TUI Pi workcell with lead, researcher, and builder-reviewer"
    agents:
      - name: lead
        prompt: "d3tui-lead-prompt.md"
        model: "gpt-4"
        tools: ["bash", "read", "edit", "write", "fork", "recall"]

      - name: researcher
        prompt: "d3tui-researcher-prompt.md"
        model: "gpt-4"
        tools: ["bash", "read", "edit", "write", "fork", "recall"]

      - name: builder-reviewer
        prompt: "d3tui-builder-prompt.md"
        model: "gpt-4"
        tools: ["bash", "read", "edit", "write", "fork", "recall"]

    constraints:
      - "Work only in mounted repo and allowed paths"
      - "No deletion of archives, assets, or build evidence"
      - "When validation blocked, do review/docs/tests"
      - "Leave traceable notes in LLM-wiki"

    workflow:
      stages: ["T0", "T1", "T2", "T3", "T4", "T5", "T6", "T7"]
      artifacts:
        T0: ["task-clock.md", "t0-findings.md"]
        T1: ["t1-research.md"]
        T2: ["design.md"]
        T3: ["implementation-plan.md"]
        T4: ["patches/", "changes/"]
        T5: ["test-results.md", "validation-log.md"]
        T6: ["review-notes.md", "peer-feedback.md"]
        T7: ["summary.md", "next-steps.md"]

Key Features:
- Claim board coordination
- Task lifecycle management (T0-T7)
- Messaging between agents
- Plan approval workflow
- Hook system for validation
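Before a session starts, the staged configuration can be smoke-checked. This is a hedged sketch: `check_agents` is a hypothetical helper (not a Pi Teams command), and it assumes the `- name: <agent>` layout shown in the example configuration above.

```shell
#!/bin/sh
# Hypothetical pre-flight check: fail fast if teams.yaml does not
# declare all three agents. Assumes the "- name: <agent>" layout
# from the example configuration; not part of Pi Teams itself.
check_agents() {
  config="$1"
  for agent in lead researcher builder-reviewer; do
    grep -q "name: $agent" "$config" || {
      echo "missing agent: $agent" >&2
      return 1
    }
  done
  echo "all agents present"
}
```

A non-zero exit here would let a launch script abort before tmux panes are created.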

3. Knowledge Depot

Implementation: LLM-wiki file-based Markdown

Structure:

/workcell/llm-wiki/
├── wiki/
│   ├── research/
│   │   ├── answers/
│   │   │   ├── q-00-architecture-synthesis.md
│   │   │   ├── q-01-pi-teams-fit.md
│   │   │   ├── ... (q-02 through q-12)
│   │   ├── research-queue.md
│   │   └── source-map.md
│   ├── architecture/
│   ├── agents/
│   ├── tasks/
│   └── validation/
├── runs/
│   └── <run-id>/
│       ├── task-clock.md
│       ├── t0-findings.md
│       ├── t1-research.md
│       └── ...
├── config/
│   ├── teams.yaml
│   └── agent-prompts/
└── log.md
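The layout above can be staged with a handful of `mkdir` calls. The `scaffold_wiki` helper below is a hypothetical sketch; the root is parameterized so the structure can be created (and tested) anywhere before being bind-mounted.

```shell
#!/bin/sh
# Hypothetical scaffold for the LLM-wiki directory layout above.
scaffold_wiki() {
  root="$1"
  mkdir -p "$root/wiki/research/answers" \
           "$root/wiki/architecture" \
           "$root/wiki/agents" \
           "$root/wiki/tasks" \
           "$root/wiki/validation" \
           "$root/runs" \
           "$root/config/agent-prompts"
  : > "$root/log.md"                            # append-only log starts empty
  : > "$root/wiki/research/research-queue.md"
  : > "$root/wiki/research/source-map.md"
}
```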

Optional SQLite Enhancement:

-- /workcell/llm-wiki/knowledge.db
CREATE TABLE answers (
    id TEXT PRIMARY KEY,
    title TEXT NOT NULL,
    status TEXT NOT NULL,
    agent TEXT,
    timestamp TEXT,
    path TEXT UNIQUE NOT NULL
);

CREATE TABLE citations (
    answer_id TEXT,
    source_path TEXT,
    line_number INTEGER,
    context TEXT,
    FOREIGN KEY (answer_id) REFERENCES answers(id)
);
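If the SQLite index is ever adopted, rows could be generated from the header fields each answer file already carries. This is a hedged sketch: `emit_answer_rows` is hypothetical, it assumes the `Status:` / `Agent:` / `Timestamp UTC:` header lines used by this document's answer format, and it does no SQL escaping (a title containing a quote would break the statement).

```shell
#!/bin/sh
# Hypothetical indexer: scrape each answer's header fields and emit
# INSERT statements for the optional answers table. Pipe the output
# into `sqlite3 knowledge.db`. No SQL escaping; a sketch only.
emit_answer_rows() {
  for f in "$@"; do
    id=$(basename "$f" .md)
    title=$(grep -m1 '^Q-' "$f" || echo "$id")        # first "Q-NN: ..." line
    status=$(grep -m1 '^Status:' "$f" | cut -d' ' -f2)
    agent=$(grep -m1 '^Agent:' "$f" | cut -d' ' -f2)
    ts=$(grep -m1 '^Timestamp UTC:' "$f" | cut -d' ' -f3)
    printf "INSERT INTO answers VALUES ('%s','%s','%s','%s','%s','%s');\n" \
      "$id" "$title" "$status" "$agent" "$ts" "$f"
  done
}
```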

Access Patterns:

# Find answer
ls wiki/research/answers/q-*.md

# Search content
grep -r "framework comparison" wiki/research/

# Check status
grep "Status:" wiki/research/answers/*.md

# Update research queue
sed -i 's/Status: OPEN/Status: CLAIMED/' wiki/research/research-queue.md
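The claim step can be wrapped in one helper so the queue update and the log entry always happen together. `claim_question` is a hypothetical sketch, and it assumes each queue entry is a single line containing the question id and `Status: OPEN`; adjust to the real queue layout.

```shell
#!/bin/sh
# Hypothetical claim helper: flip one question from OPEN to CLAIMED
# and record the claim in the append-only log in the same step.
claim_question() {
  qid="$1"; agent="$2"; queue="$3"; log="$4"
  grep -q "$qid.*Status: OPEN" "$queue" || {
    echo "not open: $qid" >&2
    return 1
  }
  sed -i "/$qid/s/Status: OPEN/Status: CLAIMED ($agent)/" "$queue"
  printf '%s claimed %s\n' "$agent" "$qid" >> "$log"
}
```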

4. Workflow System

T0-T7 Lifecycle:

Stage  Purpose    Mandatory Artifacts            Decision Point
T0     Intake     task-clock.md, t0-findings.md  Is this the right task?
T1     Research   t1-research.md                 Do we understand the problem?
T2     Plan       design.md                      Is the plan sound?
T3     Chunk      implementation-plan.md         Are chunks well-defined?
T4     Implement  patches/, changes/             Does it work?
T5     Validate   test-results.md                Is it correct?
T6     Review     review-notes.md                Is it ready to merge?
T7     Close      summary.md                     What's next?
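The artifact column translates directly into a gate check at each stage boundary. This is a hedged sketch: `stage_complete` is a hypothetical helper, and only the first three stages are filled in; the remaining case arms would follow the same pattern.

```shell
#!/bin/sh
# Hypothetical gate check: verify a run directory holds the mandatory
# artifacts before a stage is considered done. Extend the case arms
# for T3-T7 as needed.
stage_complete() {
  run_dir="$1"; stage="$2"
  case "$stage" in
    T0) needed="task-clock.md t0-findings.md" ;;
    T1) needed="t1-research.md" ;;
    T2) needed="design.md" ;;
    *)  echo "unknown stage: $stage" >&2; return 2 ;;
  esac
  for f in $needed; do
    [ -e "$run_dir/$f" ] || { echo "missing: $f" >&2; return 1; }
  done
  echo "$stage complete"
}
```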

Coordination Process:
1. Agent checks claim-board.md
2. Agent claims one question/issue
3. Agent follows the T0-T7 lifecycle
4. Agent produces artifacts in runs/<run-id>/
5. Agent updates research-queue.md
6. Lead synthesizes answers
7. Builder-reviewer validates
8. Append-only logging throughout

5. Agent Roles

Lead Agent:
- Coordinates question assignment
- Synthesizes research answers
- Makes final architecture decisions
- Ensures protocol compliance
- Maintains the claim board

Researcher Agent:
- Focuses on information architecture
- Answers knowledge system questions
- Researches framework options
- Documents findings

Builder-Reviewer Agent:
- Focuses on implementation constraints
- Answers toolchain questions
- Reviews architecture decisions
- Validates implementation

Configuration Files

1. teams.yaml

Location: /workcell/config/teams.yaml

Purpose: Define team structure, agent roles, and constraints

Staging: Create based on recommended configuration above

2. Agent Prompts

Location: /workcell/config/agent-prompts/

Files:
- d3tui-lead-prompt.md
- d3tui-researcher-prompt.md
- d3tui-builder-prompt.md

Content: Role-specific instructions and constraints

3. Research Queue

Location: /workcell/llm-wiki/wiki/research/research-queue.md

Purpose: Track question status and assignments

Maintenance: Update as questions are claimed/answered

4. Claim Board

Location: /workcell/llm-wiki/wiki/tasks/claim-board.md

Purpose: Coordinate task assignment

Format: Structured claim entries with status

Implementation Roadmap

Phase 1: Foundation (Immediate)

Complete:
- ✅ Q-05 Knowledge Depot (ANSWERED)
- ✅ Q-00 Architecture Synthesis (ANSWERED)

Next Steps:
1. Stage the teams.yaml configuration
2. Create agent prompt files
3. Document the T0-T7 lifecycle
4. Test the coordination workflow

Phase 2: Framework Setup

Dependencies: Q-01, Q-03 answered

Actions:
1. Finalize the Pi Teams configuration
2. Install Bun and Pi Teams
3. Configure tmux panes
4. Test agent coordination

Phase 3: Knowledge System

Dependencies: None (Q-05 already answered)

Actions:
1. Implement the LLM-wiki structure
2. Create the research queue template
3. Add optional SQLite if needed
4. Document access patterns

Phase 4: Workflow Integration

Dependencies: Q-04, Q-08 answered

Actions:
1. Define T0-T7 artifact requirements
2. Create Forgejo issue templates
3. Document the coordination process
4. Test the end-to-end workflow

Phase 5: Enhancements

Optional:
1. Add SQLite indexing
2. Implement a remote UI
3. Add validation smoke tests
4. Enhance observability

Validation Contract

What Agents Should Do:
- ✅ Follow the T0-T7 lifecycle
- ✅ Produce required artifacts
- ✅ Update the claim board
- ✅ Write to the append-only log
- ✅ Respect Raspberry Pi constraints

What Agents Should Avoid:
- ❌ Complex frameworks (LangGraph, CrewAI)
- ❌ Python dependencies
- ❌ Multiple containers
- ❌ Heavy RAG systems
- ❌ Physical hardware assumptions

Success Criteria:
- ✅ Three agents coordinating effectively
- ✅ File-based knowledge system working
- ✅ T0-T7 lifecycle followed
- ✅ Single container operational
- ✅ D3-TUI work progressing
- ✅ Traceable work products
- ✅ Raspberry Pi compatible

Risk Assessment

High Risks (Mitigated)

Framework Complexity:
- ❌ Risk: LangGraph/CrewAI would add significant overhead
- ✅ Mitigation: use the simpler Pi Teams

Raspberry Pi Constraints:
- ❌ Risk: memory/CPU limitations
- ✅ Mitigation: lightweight stack, avoid Python

Knowledge System Scaling:
- ⚠️ Risk: file-based search may not scale beyond ~50 answers
- ✅ Mitigation: optional SQLite available, good organization

Medium Risks (Managed)

Single Container:
- ⚠️ Risk: no isolation between agents
- ✅ Mitigation: simple operational model, clear protocols

Task Coordination:
- ⚠️ Risk: manual claim process
- ✅ Mitigation: clear protocol, claim board, append-only logging

File-Based Knowledge:
- ⚠️ Risk: manual search across many answers
- ✅ Mitigation: good organization, INDEX.md, optional SQLite

Low Risks (Acceptable)

tmux UI: Works well for 3 agents, simple to manage

Markdown Format: Human and agent readable, easy to edit

Bind Mounts: Clear separation, good performance

Bun Runtime: Lightweight, fast, Raspberry Pi compatible

Decision Points for Mehdi

None required for MVP. The recommended architecture:
1. Uses existing working components (Pi Teams, Docker, LLM-wiki)
2. Respects Raspberry Pi constraints
3. Avoids unnecessary complexity
4. Provides a clear upgrade path if needed
5. Enables immediate progress on D3-TUI work

Next Probe

Immediate Next Steps:
1. Stage the teams.yaml configuration
2. Create agent prompt files
3. Document the T0-T7 lifecycle in detail
4. Test the coordination workflow with the current setup
5. Answer the critical-path questions (Q-01, Q-04)

Smallest Follow-Up: Implement the architecture as documented, measure agent coordination effectiveness, and add enhancements only if demonstrated need arises.

Detailed Analysis

Why Pi Teams Over Alternatives

Pi Teams Strengths:
- ✅ Already working: proven in the current setup
- ✅ Lightweight: Bun-based, ~5MB install, ~50MB runtime
- ✅ Simple: YAML configuration, clear concepts
- ✅ Compatible: works on Raspberry Pi with Bun
- ✅ Feature complete: claim board, messaging, lifecycle, approvals
- ✅ No Python: avoids dependency and memory issues

LangGraph Weaknesses:
- ❌ Complex: graph definitions, state management, checkpointers
- ❌ Heavy: multiple dependencies, high memory usage
- ❌ Overkill: designed for complex orchestration we do not need
- ❌ Debugging: hard to debug on a remote Raspberry Pi
- ❌ Setup: requires thorough pre-staging

CrewAI Weaknesses:
- ❌ Python: adds a Python runtime dependency
- ❌ Memory: higher memory usage
- ❌ Overlap: duplicates Pi Teams features
- ❌ Complexity: more moving parts

Why Single Container

Current Setup:
- ✅ Working: already operational
- ✅ Simple: one container to manage
- ✅ Efficient: low overhead
- ✅ Clear: simple bind mount structure
- ✅ Compatible: fits Raspberry Pi resources

Multiple Containers Issues:
- ❌ Not feasible: Docker is not available inside the container
- ❌ Complex: inter-container communication needed
- ❌ Overhead: higher resource usage
- ❌ Debugging: more complex to debug
- ❌ Setup: more complicated

Why File-Based Knowledge

Current System:
- ✅ Working: agents can read/write directly
- ✅ Simple: no database to manage
- ✅ Compatible: works with grep/find
- ✅ Versioned: Git built in
- ✅ Portable: easy to back up and restore
- ✅ Human-readable: Markdown format

Database Issues:
- ❌ Complexity: schema management, backups
- ❌ Dependencies: database runtime needed
- ❌ Compatibility: agents need SQL knowledge
- ❌ Overkill: not needed at the current scale

Why T0-T7 Lifecycle

Proven Process:
- ✅ Clear stages: well-defined progression
- ✅ Artifact requirements: mandatory outputs at each stage
- ✅ Decision points: explicit go/no-go gates
- ✅ Traceability: append-only logging
- ✅ Flexibility: adapts to different task types

Alternative Issues:
- ❌ Ad hoc: no structure, inconsistent outputs
- ❌ Too rigid: overly prescriptive process
- ❌ No artifacts: hard to track progress
- ❌ No decisions: unclear when to stop or continue

Comparison with Alternatives

Framework comparison:

Aspect            Pi Teams     LangGraph    CrewAI       AutoGen
Language          Bun/JS       Python       Python       Python
Complexity        Low          High         Medium       High
Memory            ~50MB        ~500MB       ~300MB       ~400MB
Setup             Simple       Complex      Medium       Complex
RPi compatible    ✅ Yes       ❌ No        ⚠️ Maybe     ❌ No
Coordination      ✅ Built-in  ✅ Graph     ✅ Built-in  ✅ Built-in
Learning curve    Low          High         Medium       High
Debugging         Easy         Hard         Medium       Hard
Dependencies      Minimal      Many         Several      Many
Fit for 3 agents  ✅ Perfect   ❌ Overkill  ⚠️ OK        ❌ Overkill

Container comparison:

Aspect       Single Container  Multiple Containers  pi-container-sandbox
Feasibility  ✅ Yes            ❌ No (no Docker)    ❌ No (doesn't exist)
Complexity   Low               High                 Unknown
Isolation    ❌ None           ✅ Good              Unknown
Overhead     Low               High                 Unknown
Debugging    Easy              Hard                 Unknown
Setup        Simple            Complex              Unknown
RPi fit      ✅ Good           ❌ Poor              Unknown

Knowledge system comparison:

Aspect        LLM-wiki Files  SQLite       LlamaIndex
Complexity    Low             Medium       High
Setup         None            Simple       Complex
Dependencies  None            SQLite       Many
Search        grep/find       SQL          Vector
Scale         ~50 answers     ~1000s       ~10000s
Memory        Negligible      ~1MB         ~50MB+
RPi fit       ✅ Perfect      ✅ Good      ❌ Poor
Agent access  Direct          SQL          API
Current need  ✅ Sufficient   ⚠️ Optional  ❌ Overkill

Conclusion

Final Recommendation:

Framework: Pi Teams (Bun-based)
Container: Single Docker container
Knowledge: LLM-wiki file-based Markdown
Workflow: T0-T7 structured lifecycle
Coordination: Claim board + append-only logging
Avoid: LangGraph, CrewAI, AutoGen, complex RAG, multiple containers

This architecture provides the minimal viable foundation for the Pi workcell that:
- ✅ Respects Raspberry Pi constraints (memory, CPU, storage)
- ✅ Uses proven working components (already operational)
- ✅ Enables effective coordination among 3 agents
- ✅ Supports external contributor participation
- ✅ Avoids unnecessary complexity and dependencies
- ✅ Provides a clear path for incremental enhancements
- ✅ Enables immediate progress on D3-TUI work

The system is already partially working and can be fully implemented with minimal additional setup. Enhancements like SQLite indexing, remote UI, and advanced validation can be added later if demonstrated need arises, but the core architecture is sufficient for MVP and likely for the foreseeable future.

No complex frameworks, no Python dependencies, no multiple containers, no heavy RAG systems needed. Keep it simple, keep it working, keep it maintainable on a Raspberry Pi.