Diffusion LLM Part 4: LLaDA 2.0 -> 2.1 -- Breaking 100B with MoE + Token Editing
MoE scaling, Token Editing (T2T+M2T), S-Mode/Q-Mode, RL Framework -- how LLaDA 2.X makes diffusion LLMs practical.

In Part 3, LLaDA showed that diffusion LLMs are viable by scaling masked diffusion to the 8B-parameter range. But practical hurdles remained: inference was far slower than autoregressive (AR) models, and alignment training such as RLHF had not yet been applied.
In November 2025, Ant Group's InclusionAI began closing this gap with LLaDA 2.0. Then in February 2026, LLaDA 2.1 redefined the speed-quality tradeoff with an innovation called Token Editing.
This post covers the scaling journey from 8B to 100B, the adoption of a Mixture-of-Experts (MoE) architecture, and how Token Editing works under the hood.
LLaDA 2.0: The Leap to 100B