Distributed machine learning research deploys tasks with large-scale data and heavy computation across multiple machines. Its core idea is "divide and conquer," which effectively speeds up large-scale data computation and reduces overhead.


The difficulty of scaling ML is often underestimated. What does it actually take to train an ML model, originally implemented for a single CPU/GPU, on multiple machines? Some pain points are: (1) many new lines of code must be written to convert the program into a distributed version; (2) the code needs substantial tuning to reach good system/statistical performance, an extra burden on top of model development; (3) deciding which and how many hardware resources to use for training and deploying the model; (4) from an organizational perspective, automating resource sharing across many users and jobs so that user needs are met while resource utilization is maximized and cost is minimized.
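
To make pain point (1) concrete, here is a minimal sketch of the boilerplate typically needed to turn single-GPU training into a data-parallel distributed job. It assumes PyTorch and its torch.distributed package; the model, dataset, and hyperparameters are placeholders rather than anything prescribed by the tutorial.

```python
# Minimal sketch: converting single-GPU training into data-parallel training
# with PyTorch DistributedDataParallel. Model/dataset names are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def train(model, dataset):
    # Each worker process gets a rank from the launcher (e.g., torchrun);
    # NCCL is the usual backend for multi-GPU training.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = model.cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])    # wraps gradient all-reduce
    sampler = DistributedSampler(dataset)          # shards data across workers
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(10):
        sampler.set_epoch(epoch)                   # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            loss = torch.nn.functional.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()                        # DDP syncs gradients here
            optimizer.step()

    dist.destroy_process_group()
```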

In this tutorial, we present improved techniques for automating distributed ML infrastructure. The tutorial covers three areas that are critical to parallelizing ML: (1) compartmentalizing and standardizing the building blocks of parallel ML; (2) representations and software frameworks for ML parallelism; (3) algorithms and systems for automatic ML parallelization, and resource allocation for ML jobs on shared clusters. By revealing the unique characteristics of ML programs and dissecting successful cases to show how those characteristics can be exploited, we offer ML researchers and practitioners further opportunities to shape and advance the field of SysML.
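
As a rough illustration of one such building block, the sketch below shows synchronous gradient averaging with an all-reduce collective. It assumes PyTorch's torch.distributed API and an already-initialized process group; it is an example of the kind of primitive the tutorial refers to, not code from the tutorial itself.

```python
# Sketch of a common parallel-ML building block: synchronous gradient
# averaging across workers via an all-reduce collective (PyTorch API).
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """Average each parameter's gradient across all workers, in place."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```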

The audience should be familiar with the basics of ML and DL. Familiarity with TensorFlow, PyTorch, and distributed ML techniques is helpful but not required.

https://sites.google.com/view/aaai-2021-tutorial-ah9/home

Latest Papers

Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques that enable model training without accessing raw data on clients or end devices. However, their comparative training performance under real-world, resource-restricted Internet of Things (IoT) device settings, e.g., Raspberry Pi, remains barely studied and, to our knowledge, has not yet been evaluated and compared, leaving practitioners without a convenient reference. This work first provides empirical comparisons of FL and SL in real-world IoT settings regarding (i) learning performance with heterogeneous data distributions and (ii) on-device execution overhead. Our analyses demonstrate that the learning performance of SL is better than that of FL under an imbalanced data distribution, but worse under an extreme non-IID data distribution. Recently, FL and SL have been combined into splitfed learning (SFL) to leverage the benefits of each (e.g., the parallel training of FL and the lightweight on-device computation of SL). This work then considers FL, SL, and SFL, mounting them on Raspberry Pi devices to evaluate their performance in terms of training time, communication overhead, power consumption, and memory usage. Beyond these evaluations, we apply two optimizations. First, we generalize SFL by carefully examining the possibility of a hybrid type of model training at the server side. The generalized SFL merges the sequential (dependent) and parallel (independent) processes of model training and is thus beneficial for systems with large-scale IoT deployments, specifically in the server-side operations. Second, we propose pragmatic techniques that substantially reduce the communication overhead, by up to four times, for SL and (generalized) SFL.
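
For readers unfamiliar with the FL side of this comparison, here is a minimal sketch of federated averaging, the standard FL server-side aggregation step. The weighting by local sample counts and the function names are illustrative assumptions for exposition, not the exact setup used in the paper.

```python
# Minimal sketch of the federated-averaging (FedAvg) aggregation step:
# the server combines client model weights, weighted by local sample counts.
from typing import Dict, List
import torch

def federated_average(
    client_states: List[Dict[str, torch.Tensor]],
    client_sample_counts: List[int],
) -> Dict[str, torch.Tensor]:
    """Return the sample-count-weighted average of client state_dicts."""
    total = sum(client_sample_counts)
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sample_counts)
        )
    return averaged

# Usage: global_model.load_state_dict(federated_average(states, counts))
```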
