Face recognition has made significant progress in the deep-learning era, driven by ultra-large-scale, well-labeled datasets. However, training on such datasets is time-consuming and consumes substantial hardware resources, so designing an efficient training approach is indispensable. The heavy computational and memory costs stem mainly from the high dimensionality of the Fully-Connected (FC) layer: its output dimensionality equals the number of face identities, which can reach the million level or beyond. To this end, we propose a novel training approach for ultra-large-scale face datasets, termed Faster Face Classification (F$^2$C). In F$^2$C, we first define a Gallery Net and a Probe Net, which generate identity centers and extract face features for recognition, respectively. Gallery Net has the same structure as Probe Net and inherits Probe Net's parameters through a moving-average paradigm. Then, to reduce the training time and hardware cost of the FC layer, we propose a Dynamic Class Pool (DCP) that stores the features produced by Gallery Net and computes inner products (logits) with the positive samples (those whose identities are in the DCP) of each mini-batch. The DCP can be regarded as a substitute for the FC layer, but it is far smaller, greatly reducing the computational and memory costs. For negative samples (those whose identities are not in the DCP), we minimize the cosine similarities between them and the features stored in the DCP. Finally, to improve the update efficiency of the DCP's parameters, we design a dual data-loader, consisting of an identity-based loader and an instance-based loader, to supply a certain number of identities and samples in each mini-batch.
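To make the moving-average paradigm concrete, here is a minimal PyTorch sketch of how Gallery Net could inherit Probe Net's parameters; the backbone constructor and the momentum coefficient `m` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def build_backbone():
    # Stand-in for the shared Probe/Gallery architecture; a real system
    # would use the paper's face-recognition backbone.
    return nn.Sequential(nn.Flatten(), nn.Linear(112 * 112 * 3, 512))

probe_net = build_backbone()    # trained by back-propagation
gallery_net = build_backbone()  # updated only by the moving average
gallery_net.load_state_dict(probe_net.state_dict())
for p in gallery_net.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def momentum_update(probe_net, gallery_net, m=0.999):
    # Gallery Net inherits Probe Net's parameters as an exponential
    # moving average; m = 0.999 is an assumed momentum coefficient.
    for p_p, p_g in zip(probe_net.parameters(), gallery_net.parameters()):
        p_g.mul_(m).add_(p_p, alpha=1.0 - m)
```

Calling `momentum_update` once per training step keeps the identity centers produced by Gallery Net stable while still tracking Probe Net.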


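The DCP computation described above might look roughly like the following sketch, where `dcp` is assumed to be a buffer of identity centers from Gallery Net and `dcp_ids` their identity labels; the softmax cross-entropy for positives, the clamped-mean penalty for negatives, and the scale value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dcp_loss(features, labels, dcp, dcp_ids, scale=64.0):
    # features: (B, D) Probe Net embeddings; labels: (B,) identity ids.
    # dcp: (P, D) identity centers from Gallery Net; dcp_ids: (P,) their ids.
    feats = F.normalize(features, dim=1)
    centers = F.normalize(dcp, dim=1)
    cos = feats @ centers.t()              # (B, P) cosine similarities

    in_pool = torch.isin(labels, dcp_ids)  # positives: identity is in the DCP
    loss = feats.new_zeros(())

    if in_pool.any():
        # Positives: classify over the pool, target = the matching DCP row
        # (each identity is assumed to appear at most once in the pool).
        match = labels[in_pool].unsqueeze(1) == dcp_ids.unsqueeze(0)
        loss = loss + F.cross_entropy(scale * cos[in_pool],
                                      match.float().argmax(dim=1))
    if (~in_pool).any():
        # Negatives: push their cosine similarities to all stored centers down.
        loss = loss + cos[~in_pool].clamp_min(0).mean()
    return loss
```

Because `dcp` holds only a pool of centers rather than one row per identity, the matrix product above is far cheaper than a full million-class FC layer.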
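One plausible way to realize the dual data-loader is a custom sampler that mixes an identity-based draw (a few identities, several images each) with a uniform instance-based draw; the class below and all batch-size parameters are illustrative assumptions, not the paper's exact design.

```python
import random
from torch.utils.data import Sampler

class DualSampler(Sampler):
    # Yields dataset indices for mini-batches mixing identity-based and
    # instance-based sampling. All size parameters are illustrative.
    def __init__(self, labels, batch_size=128, ids_per_batch=16, imgs_per_id=4):
        self.by_id = {}
        for idx, lab in enumerate(labels):
            self.by_id.setdefault(lab, []).append(idx)
        self.n = len(labels)
        self.batch_size = batch_size
        self.ids_per_batch = ids_per_batch
        self.imgs_per_id = imgs_per_id

    def __iter__(self):
        for _ in range(self.n // self.batch_size):
            batch = []
            # Identity-based part: sample identities, then images of each
            # (with replacement, for simplicity).
            for lab in random.sample(list(self.by_id), self.ids_per_batch):
                batch += random.choices(self.by_id[lab], k=self.imgs_per_id)
            # Instance-based part: fill the remainder uniformly at random.
            batch += random.sample(range(self.n), self.batch_size - len(batch))
            yield from batch

    def __len__(self):
        return (self.n // self.batch_size) * self.batch_size
```

Handing this sampler to a `torch.utils.data.DataLoader` would give batches in which the identity-based portion could reliably contribute positives for the DCP while the instance-based portion supplies fresh negatives.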
