Random reshuffling, which randomly permutes the dataset each epoch, is widely adopted in model training because it yields faster convergence than with-replacement sampling. Recent studies indicate that greedily chosen data orderings can further speed up convergence empirically, at the cost of additional computation and memory. However, greedy ordering lacks theoretical justification and has limited utility due to its non-trivial memory and computation overhead. In this paper, we first formulate an example-ordering framework named herding and prove that SGD with herding converges at the rate $O(T^{-2/3})$ on smooth, non-convex objectives, faster than the $O(n^{1/3}T^{-2/3})$ obtained by random reshuffling, where $n$ denotes the number of data points and $T$ denotes the total number of iterations. To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and the computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension. We show empirically on applications including MNIST, CIFAR10, WikiText and GLUE that GraB can outperform random reshuffling in terms of both training and validation performance, and even outperform state-of-the-art greedy ordering while reducing memory usage by over $100\times$.
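To make the gradient-balancing idea concrete, below is a minimal, simplified sketch in Python. It is not the paper's exact algorithm (GraB uses stale gradients from the previous epoch and a pair-balancing refinement); the function name `grab_reorder` and its interface are illustrative assumptions. It shows the core discrepancy-minimization step: greedily assign each centered gradient a sign of $\pm 1$ to keep the running signed sum small, then place $+1$ examples at the front of the next epoch's order and $-1$ examples at the back in reverse, while keeping only an $O(d)$ running-sum vector as state.

```python
import numpy as np

def grab_reorder(grads):
    """Hypothetical one-epoch gradient balancing sketch.

    grads: (n, d) array of per-example gradients observed during the
    previous epoch. Returns a permutation of range(n) to use as the
    visiting order for the next epoch.
    """
    n, d = grads.shape
    centered = grads - grads.mean(axis=0)  # center so signed sums cancel
    s = np.zeros(d)                        # O(d) running sum: the only state
    front, back = [], []
    for i in range(n):
        g = centered[i]
        # Greedy sign choice: pick the sign that keeps ||s + eps * g|| smaller.
        if np.linalg.norm(s + g) <= np.linalg.norm(s - g):
            s += g
            front.append(i)                # +1 examples go to the front
        else:
            s -= g
            back.append(i)                 # -1 examples go to the back
    return front + back[::-1]              # new order for the next epoch
```

The output is always a permutation of the $n$ example indices, so it can directly replace the random permutation drawn by reshuffling; per epoch it costs $O(nd)$ time (linear in $n$) and $O(d)$ extra memory, matching the overheads stated above.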