Tuesday, January 20, 2026

Pretraining with Hierarchical Memories: Separating Long-Tail and Common Knowledge


The impressive performance gains of modern language models currently rely on scaling parameters: larger models store more world knowledge and reason better. Yet compressing all world knowledge into parameters is unnecessary, since only a fraction of it is used per prompt, and it is impractical for edge devices with limited inference-time memory and compute. We address this shortcoming with a memory-augmented architecture and a pretraining strategy aligned with current hardware paradigms. We introduce small language models that access large hierarchical parametric memory banks encoding world knowledge. During pretraining and inference, we fetch a small, context-dependent memory block and add it to the model. Our pretraining learns to store long-tail world knowledge in the memory parameters, while the small language model acts as an anchor capturing common knowledge and general reasoning abilities. In trillion-token-scale experiments, we show significant gains: a 160M-parameter model augmented with an 18M-parameter memory fetched from a 4.6B memory bank achieves performance comparable to a regular model with more than 2x the parameters. Through extensive experiments, we study the optimal type and size of parametric memories in transformers, scaling them to over 21B parameters. We find that the proposed hierarchical feed-forward memories work robustly across transformer architectures, whether added during pretraining or post-hoc.
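To make the fetch-and-add mechanism concrete, here is a minimal PyTorch sketch of one plausible reading of it: a large bank of small feed-forward memory blocks, a context-dependent lookup that selects a single block, and an anchor model whose feed-forward output is augmented additively by the fetched block. The `MemoryBank` class, the `fetch` method, the mean-pooled query, and the flat (rather than hierarchical) lookup are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative assumptions, not the paper's code) of a
# transformer feed-forward layer augmented with a context-dependent block
# fetched from a large parametric memory bank.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryBank(nn.Module):
    """A large bank of small feed-forward memory blocks; only the block
    selected for the current context is applied at inference time."""

    def __init__(self, num_blocks: int, d_model: int, d_mem: int):
        super().__init__()
        # Keys used to match a memory block to the hidden-state context.
        self.keys = nn.Parameter(torch.randn(num_blocks, d_model))
        # Each block is a small two-layer feed-forward network,
        # stored as stacked weight tensors.
        self.w_in = nn.Parameter(torch.randn(num_blocks, d_model, d_mem) * 0.02)
        self.w_out = nn.Parameter(torch.randn(num_blocks, d_mem, d_model) * 0.02)

    def fetch(self, hidden: torch.Tensor) -> int:
        """Select the block whose key best matches the mean-pooled context."""
        query = hidden.mean(dim=(0, 1))        # (d_model,)
        scores = self.keys @ query             # (num_blocks,)
        return int(scores.argmax())

    def forward(self, hidden: torch.Tensor, block_idx: int) -> torch.Tensor:
        # Apply only the fetched block; the rest of the bank is never touched.
        h = F.relu(hidden @ self.w_in[block_idx])
        return h @ self.w_out[block_idx]


class MemoryAugmentedFFN(nn.Module):
    """Anchor model FFN plus an additive contribution from the fetched memory."""

    def __init__(self, d_model: int, d_ff: int, bank: MemoryBank):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.bank = bank

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        block_idx = self.bank.fetch(hidden)
        return self.ffn(hidden) + self.bank(hidden, block_idx)


if __name__ == "__main__":
    bank = MemoryBank(num_blocks=256, d_model=64, d_mem=32)
    layer = MemoryAugmentedFFN(d_model=64, d_ff=256, bank=bank)
    x = torch.randn(2, 16, 64)     # (batch, sequence, d_model)
    print(layer(x).shape)          # torch.Size([2, 16, 64])
```

The key property the sketch captures is that the bank can be far larger than the anchor model, yet only one small block's parameters participate in any given forward pass, which is what makes the approach attractive for memory-constrained edge inference.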
