The parameter server architecture is widely used for distributed deep learning. Each worker machine in a parameter server system trains the complete model, which leads to a large volume of network data transfer between workers and servers. We empirically observe that this data transfer has a non-negligible impact on training time. To tackle the problem, we design a new distributed training system called Stanza. Stanza exploits the fact that in many models, such as convolutional neural networks, most data exchange is attributed to the fully connected layers, while most computation is carried out in the convolutional layers. Thus, we propose layer separation in distributed training: the majority of the nodes train only the convolutional layers, and the rest train only the fully connected layers. Gradients and parameters of the fully connected layers no longer need to be exchanged across the cluster, thereby substantially reducing the data transfer volume. We implement Stanza on PyTorch and evaluate its performance on Azure and EC2. Results show that Stanza accelerates training significantly over current parameter server systems: on EC2 instances with Tesla V100 GPUs and 10 Gbps network bandwidth, for example, Stanza is 1.34x--13.9x faster for common deep learning models.
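To make the layer-separation idea concrete, the sketch below splits a small CNN into a convolution-only module and a fully-connected-only module in PyTorch. It is an illustrative single-machine sketch, not the authors' Stanza implementation: the module names (ConvPart, FCPart), the layer sizes, and the CIFAR-like 32x32 input are assumptions made for the example. The point it demonstrates is that only the boundary activations and their gradients cross the split, while the fully connected parameters and gradients stay local to the node that owns them.

```python
# Illustrative sketch of layer separation (not the Stanza implementation):
# conv workers would hold ConvPart, fc nodes would hold FCPart, and only the
# activations / activation gradients at the split point would be exchanged.
import torch
import torch.nn as nn

class ConvPart(nn.Module):          # trained on the (many) convolution workers
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return torch.flatten(self.features(x), 1)

class FCPart(nn.Module):            # trained on the (few) fully connected nodes
    def __init__(self, num_classes=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, acts):
        return self.classifier(acts)

conv_part, fc_part = ConvPart(), FCPart()
x = torch.randn(16, 3, 32, 32)                 # assumed CIFAR-like batch
y = torch.randint(0, 10, (16,))

acts = conv_part(x)                            # conv worker: forward pass
acts_remote = acts.detach().requires_grad_()   # "sent" to an fc node
loss = nn.functional.cross_entropy(fc_part(acts_remote), y)
loss.backward()                                # fc node: fc gradients stay local
acts.backward(acts_remote.grad)                # only activation gradients travel back
```

Because the fully connected weights never leave the fc nodes, the cross-machine traffic in such a split is bounded by the activation size at the boundary rather than by the (much larger) fully connected parameter matrices.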