Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models because their denoising models operate over the entire sequence. The global planning and iterative refinement features of dLLMs are especially useful for code generation. However, current training and inference mechanisms for dLLMs in coding are still under-explored. To demystify the decoding behavior of dLLMs and unlock their potential for coding, we systematically investigate their denoising processes and reinforcement learning (RL) methods. We train a 7B dLLM, **DiffuCoder**, on 130B tokens of code. Using this model as a testbed, we analyze its decoding behavior, revealing how it differs from that of AR models: (1) dLLMs can decide how causal their generation should be without relying on semi-AR decoding, and (2) increasing the sampling temperature diversifies not only token choices but also their generation order. This diversity creates a rich search space for RL rollouts. For RL training, to reduce the variance of token log-likelihood estimates and maintain training efficiency, we propose **coupled-GRPO**, a novel sampling scheme that constructs complementary mask noise for the completions used in training. In our experiments, coupled-GRPO significantly improves DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR bias during decoding. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
- †The University of Hong Kong (HKU)
- ** Work done while at Apple
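The abstract's coupled-GRPO idea can be illustrated with a minimal sketch: sample one random mask over a completion's tokens and also use its complement, so every token is masked exactly once across the coupled pair of forward passes, which lowers the variance of the per-token log-likelihood estimate. The sketch below is an assumption-laden illustration, not the authors' implementation; `model` (assumed to return HuggingFace-style `.logits`), `mask_id`, and the 0.5 masking ratio are all placeholders.

```python
# Minimal sketch (assumed interfaces, not the paper's code) of complementary
# mask noise for coupled-GRPO-style log-likelihood estimation.
import torch

def coupled_masks(completion_len: int, ratio: float = 0.5, device="cpu"):
    """Return a Boolean mask over completion positions and its complement."""
    scores = torch.rand(completion_len, device=device)
    k = int(ratio * completion_len)
    mask = torch.zeros(completion_len, dtype=torch.bool, device=device)
    mask[scores.argsort()[:k]] = True   # mask the k lowest-scoring positions
    return mask, ~mask                  # the pair jointly covers every token once

def coupled_logprob(model, prompt_ids, completion_ids, mask_id):
    """Estimate per-token log-likelihoods of a completion with a coupled mask pair."""
    mask_a, mask_b = coupled_masks(completion_ids.numel(), device=completion_ids.device)
    logps = torch.zeros_like(completion_ids, dtype=torch.float)
    for mask in (mask_a, mask_b):
        noisy = completion_ids.masked_fill(mask, mask_id)        # corrupt masked positions
        inputs = torch.cat([prompt_ids, noisy]).unsqueeze(0)
        logits = model(inputs).logits[0, prompt_ids.numel():]    # logits over completion slots
        token_logps = logits.log_softmax(-1).gather(
            -1, completion_ids.unsqueeze(-1)).squeeze(-1)
        logps[mask] = token_logps[mask]  # each token is scored where it was masked
    return logps  # would feed a GRPO-style advantage-weighted update
```

Two forward passes per completion are the price of the variance reduction; the complementarity is what guarantees that no completion token is left unscored.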
