Deep generative models such as Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs) play an increasingly important role in machine learning and computer vision. However, two fundamental issues hinder their real-world application: the difficulty of variational inference in the VAE and the lack of an encoder for real-world samples in the GAN. In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address both issues in one framework. An invertible network and its inverse mapping are symmetrically embedded in the latent space of a VAE. The partial encoder first transforms the input into feature vectors, and the invertible network then reshapes the distribution of these feature vectors to fit a prior. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage, stochasticity-free training scheme trains LIA via adversarial learning: the decoder of LIA is first trained as a standard GAN together with the invertible network, and the partial encoder is then learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.
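The core idea above is that the mapping between the encoder's feature vectors and the prior's latent space is exactly invertible, so the decoder can run the same mapping in reverse without stochastic inference. As a minimal sketch (not the authors' implementation; the fixed random weights stand in for learned sub-networks), one invertible affine coupling block illustrates the property:

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """One invertible coupling block: splits the feature vector in half and
    transforms one half conditioned on the other. The inverse is closed-form."""
    def __init__(self, dim):
        h = dim // 2
        # Hypothetical fixed random weights standing in for learned MLPs.
        self.Ws = rng.normal(scale=0.1, size=(h, h))  # "scale" sub-network
        self.Wt = rng.normal(scale=0.1, size=(h, h))  # "shift" sub-network

    def forward(self, w):
        w1, w2 = np.split(w, 2)
        s, t = np.tanh(self.Ws @ w1), self.Wt @ w1
        return np.concatenate([w1, w2 * np.exp(s) + t])

    def inverse(self, z):
        z1, z2 = np.split(z, 2)
        s, t = np.tanh(self.Ws @ z1), self.Wt @ z1
        return np.concatenate([z1, (z2 - t) * np.exp(-s)])

phi = AffineCoupling(8)
w = rng.normal(size=8)        # feature vector from the partial encoder
z = phi.forward(w)            # reshaped toward the prior's latent space
w_rec = phi.inverse(z)        # decoder path runs the inverse mapping
print(np.allclose(w, w_rec))  # → True: the latent mapping is exactly invertible
```

Because the inverse is exact, detaching this block in the second training stage leaves a deterministic feature-space autoencoder, which is what makes the two-stage scheme stochasticity-free.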