Natural Language Input
A Phase 2 feature: type dice instructions in plain English and the app parses them into Notation Spec syntax.
Examples
| Natural Language | Notation |
|---|---|
| "roll 4d6 drop the lowest" | 4d6kh3 |
| "2d20 keep highest plus 5" | 2d20kh+5 |
| "roll percentile with a bonus die" | 2d%kl |
| "8d6" | 8d6 (already valid notation, pass through) |
| "halfling d20 reroll ones" | d20r |
| "4d6 reroll 1s and 2s forever" | 4(d6R[1..2]) |
Implementation
Primary Strategy: On-Device LLM
Replaces the hand-coded rules parser with flexible, trainable LLM parsing. All paths are offline-compatible with local model caching.
Platform-Specific
| Platform | Technology | Model | Cache |
|---|---|---|---|
| Web | WebLLM or Transformers.js + WebGPU (WASM fallback for Safari) | 1-1.5B params quantized | Service worker cache |
| iOS (Native) | Apple Foundation Models framework (iOS 26+) | On-device model | Local |
| Android (Native) | MediaPipe Solutions or llama.cpp Kotlin bindings | 1-1.5B params quantized | Local |
The model downloads once on first use, then is cached locally; no external API calls are made. If the model download is unavailable, the app falls back gracefully (the web version shows the plain notation input without natural-language parsing).
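Whichever model produces the notation, its raw output should be validated before it reaches the roller, since LLMs may wrap answers in quotes, code fences, or prose. A minimal sketch; the regex below covers only the simplified subset of notation shown in the examples table, not the full Notation Spec, and the function name is illustrative:

```typescript
// Matches a simplified subset of the notation grammar: optional count,
// optional grouping paren, dXX or d%, keep/drop-highest/lowest, r/R reroll
// (with an optional [a..b] range), and a trailing +/- modifier.
// This is an assumption for illustration, not the full Notation Spec.
const NOTATION_RE =
  /^\d*\(?\d*d(\d+|%)(kh\d*|kl\d*|dh\d*|dl\d*)?([rR](\[\d+\.\.\d+\]|\d*)?)?\)?([+-]\d+)?$/;

function sanitizeLlmOutput(raw: string): string | null {
  // Strip common decoration (backticks, quotes, newlines), then take the
  // first whitespace-delimited token as the candidate notation string.
  const candidate = raw.replace(/[`"'\n]/g, " ").trim().split(/\s+/)[0] ?? "";
  return NOTATION_RE.test(candidate) ? candidate : null;
}
```

Anything that fails validation is discarded rather than rolled, so a hallucinated or malformed model response can never reach the dice engine.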
Fallback: Rules-Based Parser
For early Phase 2 or constrained environments, a simpler rules-based parser handles the ~20 most common phrasings. It works offline.
Keywords to recognize: "roll", "drop", "keep", "highest", "lowest", "advantage", "disadvantage", "bonus die", "penalty die", "with modifier", "plus", "minus", "crit", "reroll", etc.
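The rules-based fallback can be sketched as ordered keyword matching over a normalized phrase. This sketch handles only a few of the phrasings from the examples table, not the full ~20-phrase set, and the function name is illustrative:

```typescript
// Illustrative rules-based fallback parser: try patterns from most
// specific to least specific, returning Notation Spec syntax or null.
function parsePhrase(input: string): string | null {
  const text = input.toLowerCase().trim();

  // Already-valid notation passes through untouched (e.g. "8d6").
  if (/^\d*d(\d+|%)$/.test(text)) return text;

  // "roll 4d6 drop the lowest" -> 4d6kh3 (keep the highest N-1 dice)
  const dropLowest = text.match(/(\d+)d(\d+).*drop the lowest/);
  if (dropLowest) {
    const count = parseInt(dropLowest[1], 10);
    return `${count}d${dropLowest[2]}kh${count - 1}`;
  }

  // "2d20 keep highest plus 5" -> 2d20kh+5
  const keepHighest = text.match(/(\d+)d(\d+).*keep highest(?:.*plus (\d+))?/);
  if (keepHighest) {
    const mod = keepHighest[3] ? `+${keepHighest[3]}` : "";
    return `${keepHighest[1]}d${keepHighest[2]}kh${mod}`;
  }

  return null; // Unrecognized phrasing; caller shows the plain input instead.
}
```

Returning `null` for unrecognized phrasings keeps the fallback honest: the app can surface the raw text input rather than guessing at an interpretation.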
Learning Loop
Combined with Human-Readable Explanations, this creates a bidirectional learning path:
Natural language → Notation → Explanation
The user types in English, sees the notation it generates, and reads the explanation. Over time they learn to skip the natural language step and type notation directly.
See also: Human-Readable Explanations, Template Library, Notation Spec, Reroll Mechanic