Large language models excel with reinforcement learning (RL), but fully unlocking this potential requires a mid-training stage. An effective mid-training phase should identify a compact set of useful actions and enable fast selection among them through online RL. We formalize this intuition by presenting the first theoretical result on how mid-training shapes post-training: it characterizes an action subspace that minimizes both the value approximation error from pruning and the RL error during subsequent planning. Our analysis reveals two key determinants of mid-training effectiveness: pruning efficiency, which shapes the prior of the initial RL policy, and its impact on RL convergence, which governs the extent to which that policy can be improved through online interactions. These results suggest that mid-training is most effective when the decision space is compact and the effective horizon is short, highlighting the importance of operating in the space of action abstractions rather than primitive actions. Building on these insights, we propose Reasoning as Action Abstractions (RA3), a scalable mid-training algorithm. Specifically, we derive a sequential variational lower bound and optimize it by iteratively discovering temporally-consistent latent structures via RL, followed by fine-tuning on the bootstrapped data. Experiments on code generation tasks demonstrate the effectiveness of our approach. Across several base models, RA3 improves average performance on HumanEval and MBPP by 8 and 4 points over the base model and the next-token prediction baseline, respectively. Furthermore, RA3 achieves faster convergence and higher asymptotic performance in RLVR on HumanEval+, MBPP+, LiveCodeBench, and Codeforces.
- † Northwestern University
- ‡ University of Illinois Urbana–Champaign (UIUC)
- ** Work done while at Apple
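
The abstract describes an iterative mid-training loop: discover temporally-consistent latent action abstractions via an RL-style objective, then fine-tune on the bootstrapped data. Below is a minimal, purely illustrative Python sketch of that loop under stated assumptions; the class and function names (`ToyPolicy`, `ra3_like_midtraining`, the placeholder reward) are hypothetical and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of an RA3-style mid-training loop, as described in the
# abstract: alternate between (1) selecting latent action abstractions with an
# RL-style scoring step and (2) fine-tuning on the bootstrapped data.
# All names and the reward signal here are illustrative assumptions.
import random
from dataclasses import dataclass


@dataclass
class Trajectory:
    prompt: str
    completion: str
    reward: float  # placeholder for a verifiable reward, e.g. unit-test pass rate


class ToyPolicy:
    """Stand-in for a language model: samples completions and records fine-tuning data."""

    def __init__(self) -> None:
        self.data: list[tuple[str, str]] = []

    def sample(self, prompt: str, abstraction: str) -> Trajectory:
        # Placeholder generation conditioned on a candidate abstraction.
        completion = f"<{abstraction}> solution for {prompt}"
        return Trajectory(prompt, completion, reward=random.random())

    def finetune(self, pairs: list[tuple[str, str]]) -> None:
        # Placeholder for a next-token-prediction fine-tuning update.
        self.data.extend(pairs)


def ra3_like_midtraining(prompts, abstractions, policy, iterations=3, samples=4):
    """Iteratively pick high-reward abstractions, then fine-tune on the bootstrapped data."""
    for _ in range(iterations):
        bootstrapped = []
        for prompt in prompts:
            # (1) RL-style step: score each candidate abstraction by the average
            #     reward of trajectories sampled under it, keep the best.
            scored = []
            for abstraction in abstractions:
                trajs = [policy.sample(prompt, abstraction) for _ in range(samples)]
                avg_reward = sum(t.reward for t in trajs) / samples
                best_traj = max(trajs, key=lambda t: t.reward)
                scored.append((avg_reward, abstraction, best_traj))
            _, _, best_traj = max(scored, key=lambda s: s[0])
            bootstrapped.append((prompt, best_traj.completion))
        # (2) Fine-tune on the bootstrapped (prompt, completion) pairs.
        policy.finetune(bootstrapped)
    return policy


if __name__ == "__main__":
    trained = ra3_like_midtraining(
        prompts=["two-sum", "fizzbuzz"],
        abstractions=["plan-then-code", "test-first"],
        policy=ToyPolicy(),
    )
    print(f"bootstrapped examples collected: {len(trained.data)}")
```

The sketch only conveys the alternation between abstraction discovery and fine-tuning; the paper's actual objective is a sequential variational lower bound rather than the simple reward-maximizing selection shown here.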
