Compiled by Synced (机器之心)
Contributors: 一鸣, 杜伟
This week, the ICCV 2019 best paper was announced. Other highlights include DiDi's pre-trained speech recognition model and Facebook's BART pre-trained language model.
Pre-training only the embedding matrix for a new language is good enough for transfer learning (new paper from DeepMind)
Omni-Scale Feature Learning for Person Re-Identification
SinGAN: Learning a Generative Model From a Single Natural Image
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Improving Transformer-based Speech Recognition Using Unsupervised Pre-training
Deep Learning vs. Traditional Computer Vision
Seeing What a GAN Cannot Generate
Authors: Mikel Artetxe, Sebastian Ruder, Dani Yogatama
Paper: https://arxiv.org/pdf/1910.11856.pdf
Authors: Kaiyang Zhou, Yongxin Yang, Andrea Cavallaro, Tao Xiang
Paper: https://arxiv.org/pdf/1905.00953.pdf
Project: https://github.com/KaiyangZhou/deep-person-reid
Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
Paper: http://openaccess.thecvf.com/content_ICCV_2019/papers/Shaham_SinGAN_Learning_a_Generative_Model_From_a_Single_Natural_Image_ICCV_2019_paper.pdf
Project: https://github.com/tamarott/SinGAN
Authors: Mike Lewis et al.
Paper: https://arxiv.org/pdf/1910.13461.pdf
Authors: Adji B. Dieng, Francisco J. R. Ruiz, David M. Blei, Michalis K. Titsias
Paper: https://arxiv.org/pdf/1910.09932.pdf
Authors: David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, et al.
Paper: https://arxiv.org/abs/1910.11626v1
Project: https://ganseeing.csail.mit.edu