Q-00 Architecture Synthesis: T1 Research
Agent: lead
Timestamp: 2026-05-11T02:05:00Z
Stage: T1 Research
Status: IN PROGRESS
Deep Analysis of Q-05 Answer
Knowledge Depot Decision Impact
Core Decision: LLM-wiki file-based Markdown system for MVP
Architecture Implications:
- Storage Layer:
  - ✅ File-based (ext4 mounts) works well
  - ✅ No database dependency
  - ✅ Git version control built-in
  - ✅ Simple backup/restore
- Access Pattern:
  - ✅ Direct file I/O for agents
  - ✅ grep/find for searching
  - ✅ No complex API layer needed
  - ✅ Human-readable format
- Scalability:
  - ✅ Sufficient for 13 research questions
  - ✅ Can handle 50+ answers easily
  - ✅ SQLite enhancement available if needed
  - ✅ No immediate scaling concerns
- Raspberry Pi Fit:
  - ✅ Minimal memory overhead
  - ✅ No ARM compatibility issues
  - ✅ Works with slow microSD I/O
  - ✅ Low power consumption
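The access pattern above is small enough to sketch directly: the depot is just a directory of Markdown files, and search is a linear scan with no index or API layer. File names and contents below are illustrative, not the actual llm-wiki layout.

```typescript
import { mkdtempSync, writeFileSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical depot layout: one Markdown answer file per research question.
const depot = mkdtempSync(join(tmpdir(), "llm-wiki-"));
writeFileSync(join(depot, "Q-05.md"), "# Q-05 Knowledge Depot\nDecision: file-based Markdown system\n");
writeFileSync(join(depot, "Q-03.md"), "# Q-03 Framework Comparison\nStatus: OPEN\n");

// grep-style lookup: scan every .md file for a substring; no index, no API layer.
function search(dir: string, needle: string): string[] {
  return readdirSync(dir)
    .filter((name) => name.endsWith(".md"))
    .filter((name) => readFileSync(join(dir, name), "utf8").includes(needle));
}

console.log(search(depot, "Markdown")); // [ 'Q-05.md' ]
```

At this scale a full scan of 13-50 files is effectively instant even on slow microSD I/O, which is why no database is needed yet.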
Framework Selection Constraints
From Q-05 Analysis:
- ❌ LlamaIndex: too heavy for Raspberry Pi
- ❌ Complex RAG: overkill for current needs
- ✅ Simple file I/O: proven to work
- ✅ Optional SQLite: lightweight enhancement
Architecture Constraint: the framework must work with the file-based knowledge depot.
Question Dependency Analysis
Critical Path Questions
Q-04: Work Shape / Lifecycle
- Why Critical: Defines the T0-T7 process that all other questions depend on
- Dependencies: None
- Blocks: Q-08 (Forgejo workflow), Q-12 (external coordination)
- Priority: HIGH (foundational)

Q-01: Pi Teams Fit
- Why Critical: Defines the team structure and coordination mechanism
- Dependencies: None
- Blocks: Q-03 (framework comparison), Q-02 (LangGraph)
- Priority: HIGH (foundational)

Q-03: Framework Comparison
- Why Critical: Framework selection affects all implementation decisions
- Dependencies: Q-01 (understand Pi teams first)
- Blocks: Q-06 (container shape), Q-09 (runtime setup)
- Priority: HIGH (architectural decision)
Implementation Questions
Q-06: Runtime Container Shape
- Dependencies: Q-03 (framework decision)
- Blocks: Q-09 (Bun/Pi install)
- Priority: MEDIUM (implementation detail)

Q-09: Bun / Pi Install / Model Routing
- Dependencies: Q-03 (framework), Q-06 (container)
- Priority: MEDIUM (setup requirement)

Q-07: Toolchain / KOS Contract
- Dependencies: None (D3-TUI specific)
- Priority: MEDIUM (domain-specific)
Workflow Questions
Q-08: Forgejo Workflow
- Dependencies: Q-04 (work lifecycle)
- Priority: MEDIUM (process integration)

Q-10: Validation / Smoke Testing
- Dependencies: Q-07 (toolchain)
- Priority: MEDIUM (quality assurance)
Enhancement Questions
Q-11: Remote UI / Observability
- Dependencies: None
- Priority: LOW (nice-to-have)

Q-02: LangGraph Fit
- Dependencies: Q-01, Q-03
- Priority: LOW (alternative approach)
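The dependency lists above can be checked mechanically with a topological sort. A hypothetical sketch (Q-12 is omitted because it is not analyzed in this section):

```typescript
// Dependencies copied from the analysis above: question -> questions it waits on.
const deps: Record<string, string[]> = {
  "Q-04": [], "Q-01": [], "Q-03": ["Q-01"],
  "Q-06": ["Q-03"], "Q-09": ["Q-03", "Q-06"],
  "Q-07": [], "Q-08": ["Q-04"], "Q-10": ["Q-07"],
  "Q-11": [], "Q-02": ["Q-01", "Q-03"],
};

// Kahn-style topological sort: repeatedly take questions whose deps are done.
function order(graph: Record<string, string[]>): string[] {
  const done = new Set<string>();
  const out: string[] = [];
  while (out.length < Object.keys(graph).length) {
    const ready = Object.keys(graph)
      .filter((q) => !done.has(q) && graph[q].every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("dependency cycle");
    for (const q of ready.sort()) { done.add(q); out.push(q); }
  }
  return out;
}

console.log(order(deps));
```

Any order it emits respects every Dependencies entry above, which is also a quick sanity check that the priority assignments contain no cycle.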
Architecture Decision Tree
Q-04 Work Lifecycle
  ↓
Q-01 Pi Teams Fit
  ↓
Q-03 Framework Decision
  ├── pi-teams path   → Q-06 Container (single)
  ├── LangGraph path  → Q-06 Container (graph)
  └── other framework → Q-06 Container (custom)
  ↓
Q-09 Runtime Setup
  ↓
Q-07 Toolchain (parallel)
  ↓
Q-08 Forgejo Workflow
  ↓
Q-10 Validation
  ↓
Q-11 UI (optional)
Framework Comparison Research
Pi Teams Analysis
Current Usage:
- ✅ Working in current setup
- ✅ Team coordination via claim board
- ✅ Task lifecycle management
- ✅ Messaging between agents
- ✅ Plan approval workflow

Strengths:
- ✅ Lightweight (Bun-based)
- ✅ Simple configuration (teams.yaml)
- ✅ tmux integration for UI
- ✅ Built-in task management
- ✅ Plan approval system
- ✅ Low resource usage

Weaknesses:
- ⚠️ Limited orchestration complexity
- ⚠️ Manual task assignment
- ⚠️ No built-in RAG/knowledge
- ⚠️ Basic error handling

Raspberry Pi Fit:
- ✅ Excellent (Bun is lightweight)
- ✅ Low memory footprint
- ✅ Fast startup
- ✅ ARM compatible
LangGraph Analysis
Potential Benefits:
- ✅ Graph-based orchestration
- ✅ State management
- ✅ Retry mechanisms
- ✅ Complex workflow support
- ✅ Checkpointing

Costs:
- ❌ Heavy dependencies
- ❌ Complex setup
- ❌ Steep learning curve
- ❌ High memory usage
- ❌ Debugging complexity

Raspberry Pi Fit:
- ❌ Poor (memory constraints)
- ❌ Slow performance
- ❌ Thermal throttling risk
- ❌ Overkill for current needs

Recommendation: ❌ Avoid for MVP
CrewAI Analysis
Potential Benefits:
- ✅ Agent collaboration
- ✅ Task delegation
- ✅ Process management
- ✅ Memory sharing

Costs:
- ❌ Python dependency
- ❌ Complex configuration
- ❌ Memory intensive
- ❌ Overlap with pi-teams

Raspberry Pi Fit:
- ❌ Python runtime needed
- ❌ Memory constraints
- ❌ Not better than pi-teams

Recommendation: ❌ Avoid for MVP
AutoGen / Microsoft Agent Framework
Potential Benefits:
- ✅ Multi-agent conversation
- ✅ Tool use
- ✅ Workflow automation
- ✅ Enterprise-grade

Costs:
- ❌ Complex setup
- ❌ Python dependency
- ❌ Overkill for 3 agents
- ❌ Debugging difficulty

Raspberry Pi Fit:
- ❌ Memory constraints
- ❌ Performance issues
- ❌ Not needed

Recommendation: ❌ Avoid for MVP
Pydantic AI Analysis
Potential Benefits:
- ✅ Typed outputs
- ✅ Validation
- ✅ Structured data
- ✅ Integration potential

Costs:
- ❌ Python dependency
- ❌ Limited orchestration
- ❌ Not a full framework
- ❌ Overlap with existing validation

Raspberry Pi Fit:
- ❌ Python needed
- ❌ Limited value add
- ❌ Not worth the complexity

Recommendation: ❌ Avoid for MVP
Preliminary Framework Recommendation
Decision: ✅ Pi Teams for MVP

Rationale:
1. ✅ Already working in the current setup
2. ✅ Lightweight and Raspberry Pi compatible
3. ✅ Simple configuration and operation
4. ✅ Built-in team coordination features
5. ✅ Low resource usage
6. ✅ Avoids a Python dependency
7. ✅ Matches the current workflow

Alternative Path: LangGraph could be reconsidered if:
- Agent count grows beyond 5
- Complex orchestration is needed
- State management becomes problematic
- Checkpointing is required

Not Recommended: CrewAI, AutoGen, Pydantic AI (overkill, Python dependency, or not better than pi-teams)
Container Strategy Analysis
Current Setup Analysis
Observed Configuration:
- ✅ Single Docker container
- ✅ tmux for pane management
- ✅ Bind mounts for workcell/repo
- ❌ No docker-in-docker available
- ❌ pi-container-sandbox doesn't exist

Strengths:
- ✅ Simple operational model
- ✅ Clear mount structure
- ✅ Easy to manage
- ✅ Low overhead
- ✅ Works with current tools

Weaknesses:
- ⚠️ No isolation between agents
- ⚠️ Single point of failure
- ⚠️ Limited resource controls
Multiple Container Analysis
Potential Benefits:
- ✅ Agent isolation
- ✅ Separate resource limits
- ✅ Independent failure domains
- ✅ Cleaner security boundaries

Costs:
- ❌ Docker not available inside the container
- ❌ Complex setup
- ❌ Inter-container communication needed
- ❌ Resource overhead
- ❌ Debugging complexity

Feasibility: ❌ Not possible without docker-in-docker

Recommendation: ❌ Avoid for MVP (not feasible)
pi-container-sandbox Analysis
Finding: pi-container-sandbox does not exist as a real tool.

Evidence:
- ❌ Not found in pi-tui or pi-coding-agent
- ❌ Not in the npm registry
- ❌ Only mentioned in research questions
- ❌ Appears to be hypothetical

Recommendation: ❌ Ignore (hypothetical; doesn't exist)
Final Container Recommendation
Decision: ✅ Single Docker container with the current setup

Rationale:
1. ✅ Already working
2. ✅ Docker not available for alternatives
3. ✅ Simple operational model
4. ✅ Clear bind mount structure
5. ✅ Low overhead
6. ✅ Matches Raspberry Pi constraints

Enhancement Path:
- Future: Consider multiple containers if docker-in-docker becomes available
- Future: Add resource limits if needed
- Future: Explore isolation mechanisms if required
Work Lifecycle Analysis
Proposed T0-T7 Lifecycle
T0: Intake
- Check claim board and log
- Inspect repo state
- Identify safest first issue
- Draft task clock
- Write to wiki

T1: Review/Research
- Research exact paths and conventions
- Review tool error handling
- Identify documentation gaps
- Verify current build state
- Analyze gaps vs. target

T2: Plan
- Draft improved documentation
- Design better error messages
- Plan manifest improvements
- Create comprehensive design
- Define implementation plan

T3: Chunk/Dispatch
- Break work into chunks
- Assign to appropriate agents
- Create sub-tasks if needed
- Dispatch to team members

T4: Implement
- Update extraction tool
- Add dry-run mode
- Improve discovery output
- Update manifest placeholders

T5: Validate
- Test extraction tool
- Verify error messages
- Test dry-run mode
- Ensure existing build works

T6: Review/Publish
- Peer review changes
- Reviewer validation
- Builder-reviewer signoff
- Merge to main

T7: Reflect/Close
- Update claim board
- Write summary
- Identify next task
- Close issue
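The stages above form a forward-only clock, which is small enough to enforce in code. A minimal sketch; the class name and API are illustrative, not part of pi-teams:

```typescript
// The eight stages in order; a task clock only moves forward, one stage at a time.
const STAGES = ["T0", "T1", "T2", "T3", "T4", "T5", "T6", "T7"] as const;
type Stage = (typeof STAGES)[number];

class TaskClock {
  private idx = 0;
  get stage(): Stage { return STAGES[this.idx]; }
  advance(): Stage {
    // Refuse to advance past T7: a closed task stays closed.
    if (this.idx === STAGES.length - 1) throw new Error("task already closed at T7");
    return STAGES[++this.idx];
  }
}

const clock = new TaskClock();
clock.advance();          // T0 Intake -> T1 Review/Research
console.log(clock.stage); // "T1"
```

Keeping the clock linear (no skips, no rollbacks) is what makes the per-stage artifacts below auditable.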
Artifact Requirements
Mandatory Artifacts:
- Task clock with stages
- Research findings document
- Design document
- Implementation patches
- Test results
- Review notes

Optional Artifacts:
- Architecture diagrams
- Sequence diagrams
- Performance metrics
- User documentation
Preliminary Architecture Recommendation
Minimal Viable Architecture
┌──────────────────────────────────────────────────────┐
│                   Docker Container                   │
│                                                      │
│  ┌──────┐   ┌────────────┐   ┌──────────────────┐    │
│  │ Lead │   │ Researcher │   │ Builder-Reviewer │    │
│  └──┬───┘   └─────┬──────┘   └────────┬─────────┘    │
│     └─────────────┴──────────┬────────┘              │
│                              ↓                       │
│  ┌────────────────────┐                              │
│  │ Pi Teams Core      │                              │
│  │ - Claim Board      │                              │
│  │ - Task Lifecycle   │                              │
│  │ - Messaging        │                              │
│  │ - Plan Approval    │                              │
│  └────────┬───────────┘                              │
│           ↓                                          │
│  ┌────────────────────┐                              │
│  │ tmux Panes         │                              │
│  │ - Lead Pane        │                              │
│  │ - Researcher Pane  │                              │
│  │ - Builder Pane     │                              │
│  └────────┬───────────┘                              │
│           ↓                                          │
│  ┌────────────────────┐                              │
│  │ LLM-Wiki           │                              │
│  │ - Research Q/A ────┼──→ /workcell/llm-wiki        │
│  │ - Task Logs ───────┼──→ /workcell/runs            │
│  │ - Architecture ────┼──→ /workcell/config          │
│  └────────┬───────────┘                              │
│           ↓                                          │
│  ┌────────────────────┐                              │
│  │ D3-TUI Repo ───────┼──→ /work/repo                │
│  │ - Source Code      │                              │
│  │ - Documentation    │                              │
│  │ - Build System     │                              │
│  └────────────────────┘                              │
└──────────┬────────────────────┬──────────────────┬───┘
           ↓                    ↓                  ↓
     ┌────────────┐       ┌────────────┐    ┌────────────┐
     │ ext4 mount │       │ ext4 mount │    │ ext4 mount │
     │ /workcell/ │       │ /workcell/ │    │ /work/repo │
     │ llm-wiki   │       │ runs       │    │            │
     └────────────┘       └────────────┘    └────────────┘
Component Breakdown
Container Runtime:
- ✅ Single Docker container
- ✅ Debian 13 (trixie) base
- ✅ Bun runtime
- ✅ Pi teams installed globally
- ✅ tmux for pane management
Bind Mounts:
- ✅ /workcell/llm-wiki → ext4 (rw): knowledge depot
- ✅ /workcell/runs → ext4 (rw): task runs
- ✅ /workcell/config → ext4 (rw): configuration
- ✅ /work/repo → ext4 (rw): D3-TUI repository
Framework:
- ✅ Pi Teams (Bun-based)
- ✅ teams.yaml configuration
- ✅ Claim board coordination
- ✅ T0-T7 task lifecycle
Knowledge System:
- ✅ LLM-wiki file-based Markdown
- ✅ Research queue tracking
- ✅ Answer documents
- ✅ Append-only logging
- ⚠️ Optional SQLite indexing
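Append-only logging needs nothing more than timestamped line appends; a sketch with hypothetical file and event names:

```typescript
import { appendFileSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical run log: one timestamped line per event, appended and never
// rewritten, so the full history of a task stays traceable.
const runs = mkdtempSync(join(tmpdir(), "runs-"));
const log = join(runs, "q-00.log");

function logEvent(event: string): void {
  appendFileSync(log, `${new Date().toISOString()} ${event}\n`);
}

logEvent("T0 intake complete");
logEvent("T1 research started");
console.log(readFileSync(log, "utf8").trimEnd().split("\n").length); // 2
```

Because the file is only ever appended to, concurrent writers cannot silently overwrite each other's history, and Git diffs of the log are always pure additions.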
Agent Roles:
- ✅ Lead: coordination, synthesis, final decisions
- ✅ Researcher: information architecture, knowledge systems
- ✅ Builder-Reviewer: implementation, constraints, review

Workflow:
- ✅ Claim-based task assignment
- ✅ T0-T7 structured lifecycle
- ✅ File-based artifact production
- ✅ Append-only traceability
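Claim-based assignment can be sketched with exclusive file creation: the `wx` flag makes the claim atomic on a local ext4 mount, so two agents cannot both win the same question. This illustrates the idea and is not pi-teams' actual claim-board mechanism:

```typescript
import { writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical claim board: one file per question. Creating the file with the
// exclusive "wx" flag either succeeds (claim won) or throws (already claimed).
const board = mkdtempSync(join(tmpdir(), "claims-"));

function claim(question: string, agent: string): boolean {
  try {
    writeFileSync(join(board, `${question}.claim`), agent, { flag: "wx" });
    return true;  // claim file created: this agent owns the question
  } catch {
    return false; // file already exists: another agent claimed it first
  }
}

console.log(claim("Q-04", "researcher")); // true  — first claim wins
console.log(claim("Q-04", "lead"));       // false — already claimed
```

Note that `wx` atomicity is reliable on local filesystems like ext4 but should not be assumed over network mounts.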
Technology Stack
Runtime:
- ✅ Docker (single container)
- ✅ Bun (JavaScript runtime)
- ✅ Pi Teams (coordination framework)
- ✅ tmux (UI panes)

Knowledge:
- ✅ Markdown files
- ✅ Git version control
- ⚠️ SQLite (optional enhancement)
- ❌ No LlamaIndex/RAG

Tooling:
- ✅ Bash, grep, find
- ✅ Git for version control
- ✅ Standard Unix tools
- ❌ No Python (not available)

Validation:
- ✅ File-based smoke tests
- ✅ Manual review process
- ✅ Append-only logging
- ❌ No headless Flycast (out of scope)
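A file-based smoke test can be as simple as checking that expected paths exist. This sketch assumes the bind mounts listed earlier and reports one PASS/FAIL line per check:

```typescript
import { existsSync } from "node:fs";
import { tmpdir } from "node:os";

// Each check is (label, path); a check passes if the path exists.
function smoke(checks: [string, string][]): string[] {
  return checks.map(([label, path]) =>
    `${existsSync(path) ? "PASS" : "FAIL"} ${label} (${path})`);
}

// The /workcell path is from the bind-mount list above; tmpdir() is a
// stand-in that exists on any machine, so the sketch runs anywhere.
console.log(smoke([
  ["knowledge depot", "/workcell/llm-wiki"],
  ["temp dir sanity", tmpdir()],
]));
```

Because output is plain text, results can themselves be appended to the run log, keeping validation inside the same file-based traceability model.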
Risk Assessment
High Risks (Mitigated)
Framework Complexity:
- ❌ LangGraph/CrewAI would add significant complexity
- ✅ Mitigation: use simple pi-teams

Raspberry Pi Constraints:
- ❌ Memory/CPU limitations
- ✅ Mitigation: lightweight stack

Knowledge System Scaling:
- ⚠️ File-based search may not scale
- ✅ Mitigation: optional SQLite available
Medium Risks (Managed)
Single Container:
- ⚠️ No isolation between agents
- ✅ Mitigation: simple operational model

File-Based Knowledge:
- ⚠️ Manual search across many answers
- ✅ Mitigation: good organization, optional SQLite

Task Coordination:
- ⚠️ Manual claim process
- ✅ Mitigation: clear protocol, claim board
Low Risks (Acceptable)
tmux UI:
- ✅ Works well for 3 agents
- ✅ Simple to manage

Markdown Format:
- ✅ Human- and agent-readable
- ✅ Easy to edit

Bind Mounts:
- ✅ Clear separation
- ✅ Good performance
Implementation Roadmap
Phase 1: Foundation (Current)
Complete:
- ✅ Q-05 Knowledge Depot (ANSWERED)
- ✅ Q-00 T0 Intake (COMPLETE)
- 🔄 Q-00 T1 Research (IN PROGRESS)

Next:
- Q-04 Work Shape / Lifecycle (CRITICAL)
- Q-01 Pi Teams Fit (CRITICAL)
- Q-03 Framework Comparison (DECISION)
Phase 2: Framework Decision
After Q-03 is answered:
- Finalize framework choice (pi-teams expected)
- Document configuration
- Stage teams.yaml
- Test coordination
Phase 3: Implementation
Container Setup:
- Document the current single-container approach
- Define bind mount requirements
- Specify operational controls
- Test resource limits

Runtime Setup:
- Bun installation procedure
- Pi teams installation
- Model routing configuration
- Agent configuration
Phase 4: Workflow Integration
Forgejo Workflow:
- Issue template for T0-T7
- Comment conventions
- Branch strategy
- Close process

Validation:
- Smoke test definitions
- Validation contract
- Reporting format
- Inconclusive handling
Phase 5: Enhancements
Observability:
- Remote UI options
- Health check files
- Status logging
- Quick health commands
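Health check files fit the same file-based model: each agent periodically rewrites a small status file, and a quick health command is just reading those files. File names and format here are hypothetical:

```typescript
import { writeFileSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical status directory: one small file per agent, overwritten on
// each heartbeat with a state word and a timestamp.
const statusDir = mkdtempSync(join(tmpdir(), "status-"));

function writeHeartbeat(agent: string, state: string): void {
  writeFileSync(join(statusDir, `${agent}.status`),
    `${state} ${new Date().toISOString()}\n`);
}

writeHeartbeat("lead", "OK");
console.log(readFileSync(join(statusDir, "lead.status"), "utf8").startsWith("OK")); // true
```

A remote observer only needs read access to the mount, so this works over the existing bind mounts with no extra services.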
External Coordination:
- External agent guide
- Claim process documentation
- Answer format examples
- Participation rules
Questions for Team
For Researcher
- Q-04 Priority: Should you claim Q-04 (Work Shape) next? It's on the critical path.
- Q-01 Alternative: If not Q-04, Q-01 (Pi Teams Fit) is also critical.
- Framework Insights: Any additional thoughts on framework selection?
- Knowledge System: Does Q-05 answer cover all knowledge depot needs?
For Builder-Reviewer
- Q-06 Progress: Any early findings on container shape?
- Constraints: Does single container approach meet your operational needs?
- Toolchain: Should Q-07 (KOS Contract) be prioritized?
- Validation: Thoughts on Q-10 (Validation) approach?
For External Agents
- Question Preference: Which OPEN questions are you best positioned to answer?
- Constraints: Any limitations on which questions you can claim?
- Coordination: How can we improve the claim process for external agents?
Next Steps
Immediate (Next 30 minutes)
- ✅ Complete T1 research document
- Draft preliminary architecture answer
- Identify critical path questions
- Prepare team coordination message
Short-term (Today)
- Finalize Q-00 answer with recommendations
- Coordinate question assignments
- Begin Q-04 and Q-01 research
- Document framework decision
Medium-term (This Week)
- Answer all critical path questions
- Finalize architecture recommendation
- Stage configuration files
- Test coordination workflow
Success Criteria
MVP Success:
- ✅ Three agents coordinating effectively
- ✅ File-based knowledge system working
- ✅ T0-T7 lifecycle defined and followed
- ✅ Single container operational
- ✅ D3-TUI work progressing

Architecture Success:
- ✅ Minimal viable components only
- ✅ Raspberry Pi compatible
- ✅ Simple to operate and debug
- ✅ Traceable work products
- ✅ Scalable to 5+ agents if needed
Conclusion
Preliminary Recommendation:
- ✅ Framework: Pi Teams (already working, lightweight)
- ✅ Container: single Docker container (current setup)
- ✅ Knowledge: LLM-wiki file-based (Q-05 decision)
- ✅ Workflow: T0-T7 lifecycle (needs Q-04 definition)
- ✅ Coordination: claim board + tmux panes
- ❌ Avoid: LangGraph, CrewAI, complex RAG for MVP
This architecture provides a minimal viable foundation that respects Raspberry Pi constraints while enabling effective coordination among three Pi agents and external contributors. The system is already partially working and can be incrementally enhanced as needs arise.