Game-Translator
Austronesian Localization System

Translation Pipeline

An 8-stage neural pipeline purpose-built for Austronesian languages. Our foundation model is heavily pre-trained on trilingual corpora (Indonesian, Malay, Filipino) with domain-adaptive LoRA specialization, achieving native-level fluency that generic machine translators can't match.

GPU-Accelerated Inference · Custom Foundation Model · 3 Austronesian Languages · 8 Neural Stages

Why Our Model is Different

Unlike general-purpose translators that bolt on language support as an afterthought, our core model undergoes a three-phase training regime designed specifically for the Austronesian language family:

Phase 1. Trilingual Pre-training: The foundation Gemma model is continually pre-trained on a curated corpus spanning Indonesian, Malay, and Filipino, building deep cross-lingual representations that capture shared Austronesian morphology, code-switching patterns, and cultural context.

Phase 2. Register-Adaptive LoRA: Three specialized Low-Rank Adaptation layers are trained for distinct translation registers: Formal (institutional, literary), Casual (colloquial, youth slang), and Neutral (game UI, system text). Each adapter is activated per line based on contextual metadata, producing translations that match the intended tone precisely.
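The per-line adapter activation described in Phase 2 can be sketched as a simple routing function. This is a hedged illustration: the adapter names, the `register` metadata field, and the fallback-to-Neutral policy are assumptions, not the production schema.

```python
# Illustrative register-to-adapter routing table; names are assumptions.
REGISTER_ADAPTERS = {
    "formal": "lora-formal",    # institutional, literary
    "casual": "lora-casual",    # colloquial, youth slang
    "neutral": "lora-neutral",  # game UI, system text
}

def select_adapter(line_meta: dict) -> str:
    """Pick the LoRA adapter for one line of text, defaulting to Neutral
    when the contextual metadata carries no usable register label."""
    register = str(line_meta.get("register", "neutral")).lower()
    return REGISTER_ADAPTERS.get(register, REGISTER_ADAPTERS["neutral"])
```

Defaulting to the Neutral adapter is a reasonable fallback for UI and system text, which dominates most game scripts.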

Phase 3. Game-Specific Fine-tuning: During the pipeline's clustering stage, semantic patterns unique to each game title are automatically identified and collected. This data is used to construct per-game LoRA adapters that capture title-specific terminology, character voice profiles, and narrative conventions, essentially teaching the model to "speak" each game's language.

Neural Ensemble

Four specialized models working in tandem through a unified semantic memory layer

Entity Recognition Engine

Zero-shot NER

Entity extraction system that identifies game-world elements (characters, locations, items, factions) without requiring per-title training data. It feeds into our protected-term registry and cross-lingual entity graph.

Analysis · Validation · Repair

Semantic Embedding Core

Tri-pass Embeddings

Our in-house embedding engine performs 3-pass semantic analysis (classification → similarity → clustering) to map every text fragment into a high-dimensional narrative space. Powers archetype matching, emotional tone detection, and register classification.
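The archetype matching mentioned above reduces, at its core, to a nearest-centroid lookup in embedding space. A minimal sketch, assuming precomputed centroids; the archetype names, vectors, and squared-Euclidean metric are illustrative choices, not the engine's actual configuration.

```python
def nearest_archetype(embedding: list[float],
                      centroids: dict[str, list[float]]) -> str:
    """Return the archetype whose centroid lies closest to the fragment's
    embedding (squared Euclidean distance; names here are hypothetical)."""
    def dist2(name: str) -> float:
        return sum((a - b) ** 2 for a, b in zip(embedding, centroids[name]))
    return min(centroids, key=dist2)
```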

Analysis · Validation · Repair

Cross-lingual Alignment

Bitext Mining

High-dimensional cross-lingual retrieval model for semantic quality scoring. Validates translation fidelity across all three target languages simultaneously, providing word-level alignment confidence used by our tag restoration engine.

Analysis · Validation · Repair

Trilingual Foundation LLM

LoRA-Adaptive Generation

Custom Gemma-based model continually pre-trained on Austronesian corpora with register-adaptive LoRA switching (Formal / Casual / Neutral). Per-game adapters are constructed from semantic clustering data, enabling title-specific voice and terminology matching.

Translation · Correction · Glossary

Pipeline Architecture

Data flows top to bottom; related stages run in parallel where possible.

A
Source Ingestion · stage-0
Adaptive parser layer with a modular plugin system supporting 30+ game formats.
  1. Format Normalization: extracts translatable strings from native game formats (XML, JSON, binary, custom archives) into a unified intermediate representation with full structural metadata for lossless reconstruction.
  2. Markup Abstraction: game-specific markup (color tags, control codes, variables) is abstracted into format-agnostic placeholders, preserving the original mapping for pixel-perfect restoration.
  3. Intelligent Filtering: applies heuristic and NER-based analysis to identify untranslatable content (proper nouns, coded identifiers, engine constants), preventing unnecessary LLM calls and reducing noise.
  4. Character Profiling: extracts speaker identity from dialogue metadata and initiates archetype generation via the Foundation LLM for voice consistency.
  5. Deduplication & Stratification: eliminates redundant strings and stratifies content into optimally sized translation units.
→ Structural Manifests · → Markup Registry · → Source Corpus
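The markup abstraction step above can be sketched as a reversible placeholder substitution. This is a simplified illustration: the tag regex and the `[[n]]` placeholder format are assumptions; the real registry covers far more markup dialects.

```python
import re

# Matches angle-bracket tags like <color=red> and variables like {PLAYER}.
# A deliberately simplified pattern for illustration only.
TAG_RE = re.compile(r"<[^>]+>|\{[A-Z_]+\}")

def abstract_markup(text: str) -> tuple[str, dict[str, str]]:
    """Replace game-specific markup with opaque placeholders, returning the
    abstracted text plus the placeholder-to-original mapping."""
    registry: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        key = f"[[{len(registry)}]]"
        registry[key] = match.group(0)
        return key

    return TAG_RE.sub(repl, text), registry

def restore_markup(text: str, registry: dict[str, str]) -> str:
    """Invert the abstraction using the saved mapping."""
    for key, tag in registry.items():
        text = text.replace(key, tag)
    return text
```

A round trip (`restore_markup(abstract_markup(s))`) reproduces the source exactly, which is what makes lossless reconstruction in the final stage possible.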
B
Deep Narrative Analysis · stage-1
Multi-model intelligence layer that builds a comprehensive understanding of the source material before any translation begins.
  1. Narrative Profiling: the Semantic Embedding Core performs tri-pass analysis (classification → similarity → clustering), mapping each text fragment against archetype centroids to determine emotional tone, register, and scene context.
  2. Knowledge Graph Construction: the Entity Recognition Engine builds a relational graph of game-world entities, tracking character relationships, faction hierarchies, and location networks for context-aware translation.
  3. Terminology Mining: zero-shot NER and semantic clustering identify domain-specific vocabulary, which is then refined through LLM-assisted filtering (removing ~95% of the noise) to produce a curated protected-term dictionary.
  4. LoRA Data Collection: cluster patterns and semantic signatures from this stage are automatically harvested to construct game-specific LoRA adapters for the Foundation LLM.
← Source Corpus · → Narrative Metadata · → Entity Graph · ↔ Semantic Memory (NER Engine, Embedding Core, Alignment Model)
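The entity graph built in this stage can be modeled as a labeled adjacency map. A minimal sketch under stated assumptions: the `EntityGraph` API, the relation labels, and the entity names in the usage are all hypothetical.

```python
from collections import defaultdict

class EntityGraph:
    """Directed knowledge graph of game-world entities with labeled edges
    (a deliberately minimal sketch of the Stage B data structure)."""

    def __init__(self) -> None:
        self.edges: dict[str, set[tuple[str, str]]] = defaultdict(set)

    def relate(self, subject: str, relation: str, obj: str) -> None:
        """Record an edge such as (character, member_of, faction)."""
        self.edges[subject].add((relation, obj))

    def neighbors(self, entity: str) -> set[tuple[str, str]]:
        """All (relation, target) pairs known for an entity."""
        return self.edges[entity]
```

During translation, the `neighbors()` output for entities appearing in a line can be injected into the prompt as structured context.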
C
LoRA-Adaptive Translation · stage-2
The core translation engine powered by our continually pre-trained Gemma model with dynamic LoRA adapter selection.
  1. Register-Based Routing: each text unit is routed to the appropriate LoRA adapter (Formal / Casual / Neutral) based on narrative metadata from Stage B. Game-specific adapters are layered on top when available.
  2. Context-Augmented Generation: retrieval-augmented prompting injects entity dictionaries, register-specific slang palettes, trigger phrases, and translation memory as few-shot examples for maximum consistency.
  3. Structured Trilingual Output: generates all three language variants (ID, MS, TL) simultaneously with schema-enforced JSON output, ensuring parallel parity across languages.
  4. Real-time Integrity Checks: inline validation catches tag corruption, hallucinated content, and length overflow before output is committed, triggering automatic retry with failover routing.
  5. Adaptive Concurrency: multi-worker pipeline with intelligent rate management and thermal throttling for sustained GPU throughput.
← Source Corpus · ← Narrative Metadata · ← Entity Graph + Slang Palette · → Trilingual Translations (Foundation LLM + LoRA)
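The parallel-parity guarantee in step 3 implies a validation gate on the model's JSON output. A hedged sketch: the flat `{"id": …, "ms": …, "tl": …}` schema is an assumption for illustration, not the production contract.

```python
import json

TARGET_LANGS = ("id", "ms", "tl")  # Indonesian, Malay, Filipino

def validate_trilingual(raw: str) -> dict[str, str]:
    """Parse one line of schema-enforced model output and reject it unless
    all three language variants are present and non-empty."""
    data = json.loads(raw)
    missing = [lang for lang in TARGET_LANGS
               if not str(data.get(lang, "")).strip()]
    if missing:
        raise ValueError(f"missing language variants: {missing}")
    return {lang: data[lang] for lang in TARGET_LANGS}
```

A `ValueError` here is the kind of signal that would trigger the automatic retry with failover routing described in step 4.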
D
Neural Correction · stage-3
Multi-phase post-translation refinement engine.
  1. Anomaly Detection:
    • Entity reversion: auto-corrects mistranslated proper nouns
    • Hallucination scoring: structural divergence analysis
    • Byte-level anomaly detection: overflow/underflow flagging
  2. LLM-Assisted Repair: unfixable items (~5-10%) are re-processed through the Foundation LLM with corrective prompting.
  3. Differential Reporting: generates before/after analysis for human review.
← Trilingual Translations · → Correction Report (Foundation LLM)
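The structural checks in step 1 can be illustrated with a few cheap heuristics. This is a sketch only: the 2.5× length ratio, the `[[` placeholder convention, and the flag names are assumptions, not the engine's actual thresholds.

```python
def detect_anomalies(source: str, translated: str,
                     max_ratio: float = 2.5) -> list[str]:
    """Flag simple structural anomalies between a source line and its
    translation. Thresholds and flag names are illustrative."""
    flags = []
    # Length overflow: translation grew far beyond the source.
    if source and len(translated) > max_ratio * len(source):
        flags.append("length-overflow")
    # Placeholder mismatch: abstract markup was dropped or invented.
    if source.count("[[") != translated.count("[["):
        flags.append("tag-mismatch")
    # Empty output: the model produced nothing usable.
    if not translated.strip():
        flags.append("empty-output")
    return flags
```

Lines that come back with flags would be the ~5-10% routed to LLM-assisted repair.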
E
Human Review Gate · stage-4
Human-in-the-loop quality checkpoint.
  1. Review Queue: flagged items are presented with contextual diffs and confidence scores for expert review.
  2. Selective Override: operators can apply corrections with granular merge strategies (overwrite, fill-missing, conditional).
  3. Feedback Loop: corrections feed back into translation memory and LoRA training data for continuous model improvement.
This is the only stage requiring human intervention — everything else is fully automated.
← Correction Report · → Verified Translations (Human Checkpoint)
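The three merge strategies named in step 2 can be sketched over a plain key-to-string mapping. The strategy names follow the text; the dict-based API and the exact "conditional" semantics (apply only non-empty corrections that differ) are assumptions.

```python
def merge(machine: dict[str, str], human: dict[str, str],
          strategy: str = "overwrite") -> dict[str, str]:
    """Apply operator corrections to machine output using one of the
    granular merge strategies (sketch; semantics are illustrative)."""
    merged = dict(machine)
    for key, value in human.items():
        if strategy == "overwrite":
            merged[key] = value                    # human always wins
        elif strategy == "fill-missing":
            if not merged.get(key, "").strip():
                merged[key] = value                # only fill gaps
        elif strategy == "conditional":
            if value.strip() and merged.get(key) != value:
                merged[key] = value                # non-empty, differing only
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return merged
```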
F
Semantic Validation · stage-5
AI-driven quality assurance using cross-lingual semantic similarity.
  1. Embedding Coherence: verifies that translated text maintains semantic alignment with source using cached high-dimensional embeddings.
  2. Adaptive Clustering: unsupervised clustering with dimensionality reduction identifies semantic outliers and distributional anomalies. Cluster data is also harvested for LoRA adapter construction.
  3. Multi-factor Quality Score: per-line composite score combining semantic fidelity, markup preservation, and length compliance.
↔ Semantic Memory · ← Entity Graph · → Quality Scores (Embedding Core, Alignment Model)
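The embedding-coherence check in step 1 amounts to thresholded cosine similarity between cached source and translation embeddings. A minimal sketch: the 0.75 cutoff is an illustrative assumption, and real embeddings would come from the Semantic Memory cache rather than toy vectors.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 for degenerate input)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def coherence_flags(pairs: list[tuple[list[float], list[float]]],
                    threshold: float = 0.75) -> list[int]:
    """Return indices of (source, translation) embedding pairs whose
    similarity falls below the (assumed) coherence threshold."""
    return [i for i, (src, tgt) in enumerate(pairs)
            if cosine(src, tgt) < threshold]
```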
G
Markup Restoration · stage-6
The most computationally complex stage: a multi-tier restoration engine.
  1. 5-Tier Restoration Cascade:
    • Tier 0: Translation memory lookup (instant)
    • Tier 1: Fuzzy pattern matching
    • Tier 2: Cross-lingual word alignment matrix
    • Tier 3: Contextual anchor inference
    • Tier 4: Entity-based cross-lingual positional mapping
  2. Coherence Scoring: sliding-window semantic analysis validates placement quality at every tier.
  3. Injection Sanitization: detects and removes AI-hallucinated markup not present in the original source.
  4. Structural Parity Enforcement: guarantees bit-perfect tag synchronization with source material.
↔ Semantic Memory · ← Narrative Metadata · ← Entity Graph · → Restored Translations (NER Engine, Embedding Core, Alignment Model)
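The cascade's control flow is a first-success fallback chain: cheap tiers run first and more expensive tiers only fire when the cheaper ones decline. A sketch with stubbed tiers; the chaining logic, not the tier internals, is the point, and the translation-memory example is hypothetical.

```python
from typing import Callable, Optional

def cascade(tiers: list[Callable[[str], Optional[str]]],
            line: str) -> tuple[int, str]:
    """Run restoration tiers in priority order; each tier returns a restored
    line or None to defer. Returns (tier_index, result) of the first success."""
    for i, tier in enumerate(tiers):
        result = tier(line)
        if result is not None:
            return i, result
    raise RuntimeError("all restoration tiers failed")
```

Usage with two stubbed tiers, an instant memory lookup and a catch-all fallback:

```python
memory = {"Hello": "[[0]]Halo"}          # hypothetical translation memory
tiers = [lambda s: memory.get(s),        # Tier 0: memory lookup
         lambda s: "[[0]]" + s]          # last resort: prepend the tag
```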
H
Build & Packaging · stage-7
Final assembly stage that reconstructs game-native file formats from translated data.
  1. Markup Reconstruction: abstract placeholders are restored to their original game-specific markup using the format registry from Stage A.
  2. Format Rebuilding: structural manifests guide pixel-perfect reconstruction of native game files across 30+ supported formats.
  3. Profile-aware Validation: final markup integrity check using tag profiles to catch any remaining discrepancies.
  4. Distribution Packaging: automated build system generates installer-ready packages with per-language coverage analytics, lexical quality metrics, and auto-generated documentation.
← Structural Manifests · ← Markup Registry · ← Restored Translations · → Game-ready Packages · → Quality Reports
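The per-language coverage analytics in step 4 reduce to a simple ratio per target language. A hedged sketch: the flat line records and the `id`/`ms`/`tl` field names are assumptions for illustration.

```python
def coverage(lines: list[dict[str, str]],
             langs: tuple[str, ...] = ("id", "ms", "tl")) -> dict[str, float]:
    """Fraction of lines carrying a non-empty translation per language.
    Record layout is an illustrative assumption."""
    total = len(lines) or 1  # avoid division by zero on empty input
    return {lang: sum(1 for ln in lines if ln.get(lang, "").strip()) / total
            for lang in langs}
```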

Pipeline Internals

Key data artifacts that flow between stages through the neural pipeline

Component | Function | Produced by | Consumed by
Structural Manifests | Lossless structural maps for native format reconstruction | Stage A | Stage H
Markup Registry | Bidirectional mapping between abstract and native markup | Stage A | Stages B, H
Narrative Metadata | Per-line archetype, emotion, register, and scene classification | Stage B | Stages C, G, H
Entity Graph | Relational knowledge graph of game-world entities | Stage B | Stages F, G
Semantic Memory | Shared high-dimensional embedding cache (fp16) across all neural models | Stages B, F, G | Stages B, F, G
Quality Scores | Per-line composite quality metrics with cluster assignments | Stage F | Stage G
LoRA Training Data | Automatically harvested semantic patterns for game-specific adapter construction | Stages B, F | Stage C

Released Archive

Austronesian Showcase
