Guildhall Integration
How Lugh runs on the Guildhall infrastructure. For Guildhall’s overall architecture, see the Guildhall vault at /content/Guildhall/Guildhall.md.
Runtime: Spacebot on Blackthorn
Lugh runs on the same Spacebot instance that powers Quorum, on Blackthorn (Dual Xeon, 768GB RAM, 2× RTX 2080). See /content/Guildhall/Blackthorn.md and /content/Guildhall/Spacebot.md for hardware and runtime details.
Pipeline stages as Spacebot processes
Lugh’s stages map naturally to two Spacebot process types: Workers for production tasks and Channels for conversational interactions.
| Lugh Stage | Spacebot Process | Notes |
|---|---|---|
| Stage 0a — Topic discovery | Worker | Breadth-first research. Shells out to web search, reads sources, produces topic map. |
| Stage 0b — Curriculum design | Worker | Takes topic map + depth selection, produces syllabus. |
| Stage 1 — Pre-assessment | Channel | Conversational Feynman diagnostic with the learner. Back-and-forth, adaptive. |
| Stage 2 — Episode research | Worker | Depth-first per-episode. Only spawns after the gate passes the previous episode. |
| Stage 3/3b/3c/3d — Script + review + self-check + rubric | Worker chain | Four sequential Workers. Each takes the previous Worker’s output. |
| Stage 4 — TTS | Worker | Calls Voicebox REST API. Not an LLM task. See below. |
| Stage 5 — Listening | No process | Learner listens. Spacebot waits. Listener questions can arrive via the Channel. |
| Stage 6 — Feynman tutor | Channel | Conversational assessment. Uses rubric from 3d. Adapts to the learner. |
| Stage 7 — The Gate | Cortex decision | Evaluates assessment results, decides: advance / deep dive / complete. Triggers next Worker or closes the course. |
The two Channel stages (pre-assessment and Feynman tutor) are what Spacebot Channels are designed for — persistent conversational agents with memory. The production stages are Workers — task-specific processes that run to completion.
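The stage-to-process mapping in the table above can be sketched as a dispatch table. This is a minimal illustration, not Spacebot’s actual API — the `ProcessKind` names and stage keys are placeholders:

```python
from enum import Enum

class ProcessKind(Enum):
    WORKER = "worker"      # task-specific process, runs to completion
    CHANNEL = "channel"    # persistent conversational agent with memory
    CORTEX = "cortex"      # decision point, not a standalone process
    NONE = "none"          # no process; Spacebot waits on the learner

# Stage -> process-kind mapping, transcribed from the table above.
STAGE_PROCESS = {
    "0a_topic_discovery": ProcessKind.WORKER,
    "0b_curriculum":      ProcessKind.WORKER,
    "1_pre_assessment":   ProcessKind.CHANNEL,
    "2_episode_research": ProcessKind.WORKER,
    "3_script_chain":     ProcessKind.WORKER,   # four sequential Workers
    "4_tts":              ProcessKind.WORKER,
    "5_listening":        ProcessKind.NONE,
    "6_feynman_tutor":    ProcessKind.CHANNEL,
    "7_gate":             ProcessKind.CORTEX,
}

def process_for(stage: str) -> ProcessKind:
    """Look up which kind of Spacebot process runs a given Lugh stage."""
    return STAGE_PROCESS[stage]
```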
The gate as a Spacebot primitive
The gate is Lugh’s core innovation: Episode 2 only generates after Episode 1’s assessment passes. This is conditional Worker spawning — “don’t start the next Worker until this Channel produces a verdict.”
Spacebot’s Worker state machine (Running → WaitingForInput → Done → Failed) supports this pattern, but the conditional pipeline logic — gating one Worker’s spawn on a Channel’s verdict — needs to be added as a workflow primitive. This is Lugh’s main contribution back to the Spacebot fork.
See /content/Guildhall/Spacebot.md for the full list of what we keep vs modify in the Spacebot fork.
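The gating primitive could look something like the sketch below: a function that only ever returns a next Worker when the previous episode’s Worker is Done and the Channel’s verdict allows it. The `Verdict` values come from Stage 7 above; the worker names and function shape are hypothetical, not Spacebot’s real interface:

```python
from enum import Enum, auto
from typing import Optional

class WorkerState(Enum):
    """Spacebot's Worker state machine, as described above."""
    RUNNING = auto()
    WAITING_FOR_INPUT = auto()
    DONE = auto()
    FAILED = auto()

class Verdict(Enum):
    """Gate outcomes from Stage 7."""
    ADVANCE = auto()
    DEEP_DIVE = auto()
    COMPLETE = auto()

def gate(previous_worker: WorkerState, verdict: Verdict) -> Optional[str]:
    """Decide which Worker (if any) to spawn next.

    Returns a hypothetical worker name, or None to close the course.
    """
    if previous_worker is not WorkerState.DONE:
        # Never spawn past a failed or still-running episode.
        return None
    if verdict is Verdict.ADVANCE:
        return "stage2_research_next_episode"
    if verdict is Verdict.DEEP_DIVE:
        return "stage2_research_deep_dive"
    return None  # COMPLETE: close the course
```

The key property is that spawning is a pure function of state plus verdict, which makes the gate auditable and easy to replay from the memory graph.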
Course as Spacebot agent
Each course is a separate Spacebot agent with its own:
- Workspace — course files, episode scripts, audio, learner artifacts
- Memory graph — learner state, assessment history, accumulated teaching patterns
- Identity files — course personality, Feynman tutor protocol, depth configuration
A new course (“Understanding Design Patterns”) is a new agent. Resuming a course is resuming the agent. The learner’s progress, gate verdicts, and listener questions persist in the agent’s memory graph.
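The course-as-agent shape can be sketched as a small record. Field names and the workspace path are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class CourseAgent:
    """One Spacebot agent per course (hypothetical field names)."""
    title: str                 # e.g. "Understanding Design Patterns"
    workspace: str             # course files, scripts, audio, learner artifacts
    memory_graph: dict = field(default_factory=dict)    # learner state, verdicts
    identity_files: list = field(default_factory=list)  # personality, protocols

# Spawning a new course creates a new agent; resuming loads the same one.
course = CourseAgent(
    title="Understanding Design Patterns",
    workspace="/agents/design-patterns/workspace",
)
```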
Memory graph mapping
Spacebot’s typed memory system maps to Lugh’s needs:
| Memory type | Lugh usage |
|---|---|
| Fact | What the learner demonstrated they know (solid concepts from pre-assessment and gate) |
| Decision | Gate verdicts — advance / deep dive / complete per episode |
| Event | Each assessment session, each episode completion, each listener question submitted |
| Observation | Patterns the Cortex notices: “learner struggles with abstract concepts,” “learner excels with concrete examples,” “this episode’s self-check consistently flags Section 3” |
| Preference | Learner’s depth selection, stated interests, learning style signals from pre-assessment |
| Goal | Course completion, per-episode learning objectives |
| Todo | Pending listener questions to weave into future episodes |
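As a sketch of how these types might be populated during a course, here is a minimal node shape with examples drawn from the table above. The `MemoryNode` structure is an assumption, not Spacebot’s real schema:

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    """Spacebot's typed memory system, as listed in the table above."""
    FACT = "fact"
    DECISION = "decision"
    EVENT = "event"
    OBSERVATION = "observation"
    PREFERENCE = "preference"
    GOAL = "goal"
    TODO = "todo"

@dataclass
class MemoryNode:
    """Hypothetical node shape for illustration."""
    type: MemoryType
    content: str

# Example nodes a course agent might accumulate.
nodes = [
    MemoryNode(MemoryType.DECISION, "Episode 1 gate verdict: advance"),
    MemoryType and MemoryNode(MemoryType.PREFERENCE, "Learner selected practitioner depth"),
    MemoryNode(MemoryType.TODO, "Weave listener question on factories into Episode 3"),
]
```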
The Akashic Records via memory graph
The Akashic Records concept — anonymized learner artifacts that accumulate and improve the system — maps to Spacebot’s memory graph with graph edges providing automatic cross-referencing:
- RelatedTo edges connect a learner’s explanation to the episode that taught it
- Updates edges track how explanations improve across remediation cycles
- CausedBy edges link gate verdicts to specific assessment moments
Opt-in sharing (the privacy model described in The Akashic Records) would mean allowing specific memory nodes to be exported from a course agent’s isolated graph into a shared corpus. This is a future concern — the graph structure supports it, but the sharing mechanism isn’t built yet.
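The opt-in export could reduce to a filter over flagged nodes. This is a sketch of a mechanism the document says is not built yet; edge and node shapes are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class EdgeType(Enum):
    """The edge kinds named above."""
    RELATED_TO = "related_to"   # explanation -> episode that taught it
    UPDATES = "updates"         # explanation v2 -> explanation v1
    CAUSED_BY = "caused_by"     # gate verdict -> assessment moment

@dataclass
class Edge:
    type: EdgeType
    src: str   # node id
    dst: str   # node id

@dataclass
class Node:
    id: str
    content: str
    shareable: bool = False   # opt-in flag; agents default to isolated

def export_shared(nodes):
    """Export only the nodes a learner opted in to share."""
    return [n for n in nodes if n.shareable]
```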
Entry point: the Secretary
The learner texts “I want to learn about design patterns” to the Guildhall number. The Secretary (see /content/Guildhall/The Directory.md) recognizes this as a Lugh request, wakes Blackthorn, and either spawns a new course agent or resumes an existing one.
The learner never needs to know that Lugh exists, that Spacebot is running, or that Blackthorn just woke up. They just get a conversation.
Quorum as design reviewer
Quorum (see /content/Guildhall/Quorum.md) is not part of Lugh’s runtime. It reviews Lugh’s design decisions at high-stakes moments:
- Stage 0 curriculum review before committing to a syllabus
- Gate threshold calibration
- Course Zero design (“Understanding Critical Thinking”)
Model assignments
TBD — Lugh’s stages have different requirements than Quorum’s seats:
- Pre-assessment and Feynman tutor Channels need conversational depth and the ability to probe without lecturing
- Research Workers need factual grounding + RAG capability
- Script generation Workers need narrative voice and the ability to follow Episode Anatomy format
- Self-check and rubric Workers need analytical reasoning to evaluate their own output
These may share some models from the Guildhall model registry (/content/Guildhall/The Model Registry.md) but will likely need their own assignments optimized for pedagogical tasks rather than deliberative evaluation.
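Whatever the eventual assignments, they would likely live in a per-stage table keyed off the registry. Every model name below is a placeholder — none come from the actual Guildhall model registry:

```python
# Hypothetical per-stage model assignments; the model names are
# placeholders standing in for registry entries, not real assignments.
MODEL_ASSIGNMENTS = {
    "pre_assessment_channel": "conversational-model",  # probe, don't lecture
    "feynman_tutor_channel":  "conversational-model",
    "research_worker":        "grounded-rag-model",    # factual grounding + RAG
    "script_worker":          "narrative-model",       # Episode Anatomy format
    "self_check_worker":      "analytical-model",      # evaluates own output
    "rubric_worker":          "analytical-model",
}
```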
Voicebox for Stage 4 (TTS)
Stage 4 uses Voicebox — Jamie Pine’s open-source local voice synthesis studio (MIT licensed). Same developer as Spacebot and Spacedrive.
Source: https://github.com/jamiepine/voicebox
Why Voicebox fits Lugh:
- Local-first, runs on Blackthorn. No cloud TTS service, no per-character API costs, no data leaving the machine.
- REST API for integration. The Stage 4 Worker calls `POST /generate` with the script text and a voice profile ID, and gets audio back.
- 5 TTS engines (Qwen3-TTS, LuxTTS, Chatterbox Multilingual, Chatterbox Turbo, HumeAI TADA), switchable per generation. Different episodes could use different engines.
- 23 languages. Relevant if Lugh ever generates courses in languages other than English.
- Voice cloning from a few seconds of audio. A consistent narrator voice across all episodes of a course.
- Multi-track timeline editor. Supports the dual-narrator format from Episode Anatomy — host voice and “caller” voice on separate tracks.
- Post-processing effects (pitch shift, reverb, compression). Could differentiate the call-in segments from the main narration.
- Expressive tags in Chatterbox Turbo: `[laugh]`, `[sigh]`, `[gasp]`. Adds personality to the podcast format.
Integration pattern: The Stage 3d Worker produces a final script. The Stage 4 Worker reads the script, identifies narrator segments and call-in segments, sends each to Voicebox with the appropriate voice profile and engine, then stitches the audio files together (or uses Voicebox’s timeline editor for multi-track composition). Output is a single audio file per episode stored in the course agent’s workspace.
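The segment-splitting and request-building half of this pattern can be sketched as below. The `[HOST]`/`[CALLER]` script markup, the voice profile IDs, and the `POST /generate` field names are all assumptions for illustration, not Voicebox’s documented schema:

```python
import re
from typing import List, Tuple

def split_segments(script: str) -> List[Tuple[str, str]]:
    """Split a Stage 3d script into (speaker, text) segments,
    assuming lines are tagged [HOST] or [CALLER]."""
    segments = []
    for line in script.splitlines():
        m = re.match(r"\[(HOST|CALLER)\]\s*(.*)", line.strip())
        if m:
            segments.append((m.group(1), m.group(2)))
    return segments

# Placeholder voice profile IDs: one narrator, one "caller" voice.
VOICE_PROFILES = {"HOST": "narrator-profile-id", "CALLER": "caller-profile-id"}

def generate_payloads(script: str, engine: str = "chatterbox-turbo"):
    """Build one request body per segment. Each would be sent as
    POST /generate to the local Voicebox instance, and the returned
    audio files stitched together in order."""
    return [
        {"text": text, "voice_profile": VOICE_PROFILES[speaker], "engine": engine}
        for speaker, text in split_segments(script)
    ]
```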
Open questions:
- Voice profile selection: one narrator per course, or vary by episode/segment?
- Chatterbox Turbo’s expressive tags: should the script generation stage (3) embed these, or should a separate Worker insert them before TTS?
- Audio quality vs speed tradeoff: Qwen3-TTS for quality, LuxTTS for speed during development/testing?