Pipeline architecture for wikitext-to-structured-data extraction. Existence public; specifics internal until coordinated with Memory Alpha admins.
Why this matters
Memory Alpha is the launch Franchise (Entry 009). Migration goes through structured extraction (the target has no wikitext support), so the pipeline architecture needs design: AI-assisted parsing with human review? Confidence scoring? Migration order (entities first, then narrative pages)? Reusability for the Franchises that follow.
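One way confidence scoring and human review could fit together: each extracted record carries a confidence score, and anything below a threshold is routed to a review queue. A minimal sketch, assuming a simple `{{Infobox ...}}` template shape and an arbitrary 0.9 cutoff (both are illustrative, not decided):

```python
import re
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.9  # assumed cutoff; below this, route to human review


@dataclass
class ExtractedEntity:
    kind: str          # infobox type, e.g. "character"
    fields: dict       # parsed key=value pairs
    confidence: float  # fraction of infobox lines parsed cleanly
    needs_review: bool


def parse_infobox(wikitext: str) -> ExtractedEntity:
    """Extract key=value pairs from a simple {{Infobox ...}} template."""
    m = re.search(r"\{\{Infobox (\w+)(.*?)\}\}", wikitext, re.S)
    if not m:
        return ExtractedEntity("unknown", {}, 0.0, needs_review=True)
    kind, body = m.group(1), m.group(2)
    fields, unparsed = {}, 0
    for line in body.splitlines():
        line = line.strip().lstrip("|").strip()
        if not line:
            continue
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key.strip()] = value.strip()
        else:
            unparsed += 1  # a line we couldn't parse lowers confidence
    total = len(fields) + unparsed
    confidence = len(fields) / total if total else 0.0
    return ExtractedEntity(kind, fields, confidence,
                           needs_review=confidence < REVIEW_THRESHOLD)


sample = """{{Infobox character
| name = James T. Kirk
| rank = Captain
}}"""
entity = parse_infobox(sample)
print(entity.kind, entity.confidence, entity.needs_review)
```

Real Memory Alpha templates nest and transclude, so a production pass would need a proper wikitext parser; the point here is only the record shape: score per entity, boolean gate into review.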
Sub-questions (public)
- Wikitext-to-structured-data extraction toolchain
- Attribution preservation (CC-BY-SA compliance)
- Coordinated vs organic admin/contributor migration
- Data deduplication when Memory Alpha entities overlap with future Franchises
- Authoring tools available the day after migration (G-015)
- Legal posture for the migration itself (G-002)
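On the attribution sub-question: CC-BY-SA requires crediting contributors and linking the source, so each migrated page could carry a provenance record captured at extraction time. A sketch under assumed field names (the schema, revision id, and contributor values below are placeholders, not a decided format):

```python
import json
from dataclasses import dataclass, field, asdict


@dataclass
class Provenance:
    """Per-page attribution record kept alongside the migrated content."""
    source_title: str                    # original Memory Alpha page title
    source_url: str                      # permalink to the migrated revision
    revision_id: int                     # revision the extraction ran against
    contributors: list = field(default_factory=list)  # names to credit
    license: str = "CC-BY-SA-3.0"        # assumed; confirm actual license


rec = Provenance(
    source_title="James T. Kirk",
    source_url="https://example.org/wiki/James_T._Kirk",  # placeholder URL
    revision_id=123456,                                   # placeholder id
    contributors=["ExampleEditor1", "ExampleEditor2"],    # placeholder names
)
print(json.dumps(asdict(rec), indent=2))
```

Pinning a revision id rather than just a page title also helps the deduplication question: two Franchises that imported the same entity can be detected by comparing source revisions, not fuzzy-matching names.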