ICML stands for the International Conference on Machine Learning. It has grown into a premier annual international machine learning conference, organized by the International Machine Learning Society (IMLS).

### VIP Content

1. Federated learning: Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability

2. Self-supervised learning: Improving Molecular Design by Stochastic Iterative Target Augmentation

UC Berkeley, MIT, University of Cambridge

https://arxiv.org/abs/2002.04720

3. Graph neural networks: Graph Prolongation Convolutional Networks: Explicitly Multiscale Machine Learning on Graphs, with Applications to Modeling of Biological Systems

University of California, Irvine, California, USA

https://arxiv.org/abs/2002.05842

4. Reinforcement learning: Reinforcement Learning Enhanced Quantum-inspired Algorithm for Combinatorial Optimization

Russian Quantum Center, Moscow; University of Oxford

https://www.zhuanzhi.ai/paper/fb7eef4291247609ac9407e3211cbe1d

5. Language models: Aligning the Pretraining and Finetuning Objectives of Language Models

https://arxiv.org/abs/2002.02000

6. Lifelong learning: Lipschitz Lifelong Reinforcement Learning

ISAE-SUPAERO, Université de Toulouse, France

https://arxiv.org/abs/2001.05411

7. Graph learning: Deep Coordination Graphs

University of Oxford

https://arxiv.org/abs/1910.00091

8. Data privacy: Certified Data Removal from Machine Learning Models

Cornell University, New York, USA; Facebook AI Research

https://arxiv.org/abs/1911.03030

### Latest Papers

This paper shows how to adapt several simple and classical sampling-based algorithms for the $k$-means problem to the setting with outliers. Recently, Bhaskara et al. (NeurIPS 2019) showed how to adapt the classical $k$-means++ algorithm to the setting with outliers. However, their algorithm needs to output $O(\log (k) \cdot z)$ outliers, where $z$ is the number of true outliers, to match the $O(\log k)$-approximation guarantee of $k$-means++. In this paper, we build on their ideas and show how to adapt several sequential and distributed $k$-means algorithms to the setting with outliers, but with substantially stronger theoretical guarantees: our algorithms output $(1+\varepsilon)z$ outliers while achieving an $O(1 / \varepsilon)$-approximation to the objective function. In the sequential world, we achieve this by adapting a recent algorithm of Lattanzi and Sohler (ICML 2019). In the distributed setting, we adapt a simple algorithm of Guha et al. (IEEE Trans. Know. and Data Engineering 2003) and the popular $k$-means$\|$ of Bahmani et al. (PVLDB 2012). A theoretical application of our techniques is an algorithm with running time $\tilde{O}(nk^2/z)$ that achieves an $O(1)$-approximation to the objective function while outputting $O(z)$ outliers, assuming $k \ll z \ll n$. This is complemented with a matching lower bound of $\Omega(nk^2/z)$ for this problem in the oracle model.
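To make the objective concrete, here is a minimal, illustrative sketch (not the paper's algorithm) of the two ingredients the abstract builds on: k-means++ style $D^2$ seeding, where each new center is sampled with probability proportional to its squared distance from the nearest chosen center, and the outlier-robust objective, which drops the $z$ points farthest from their nearest center before summing costs. The function names, the 2-D point representation, and the fixed random seed are all assumptions made for this sketch.

```python
import random


def d2_sample_centers(points, k, rng=random.Random(0)):
    """k-means++ style D^2 seeding: the first center is uniform at random;
    each subsequent center is drawn with probability proportional to its
    squared distance from the nearest already-chosen center."""
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest current center.
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
              for p in points]
        total = sum(d2)
        r = rng.random() * total
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:  # inverse-CDF sampling over the D^2 weights
                centers.append(p)
                break
    return centers


def cost_excluding_outliers(points, centers, z):
    """k-means-with-outliers objective: sum of squared distances to the
    nearest center after discarding the z farthest (outlier) points."""
    d2 = sorted(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
                for p in points)
    return sum(d2[:len(points) - z])
```

The abstract's point is that plain $D^2$ seeding is easily misled here: a far-away outlier has a huge squared distance and so is very likely to be picked as a center. The adapted algorithms compensate by allowing the output to discard slightly more than $z$ points ($(1+\varepsilon)z$ in this paper, versus $O(\log(k)\cdot z)$ in Bhaskara et al.) while still bounding the cost on the remaining points.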
