We introduce manifold-modeling flows (MFMFs), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent data sets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates. With two pedagogical examples we demonstrate how manifold-modeling flows let us learn the data manifold and allow for better inference than standard flows in the ambient data space.
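To make the idea of separating manifold and density updates concrete, here is a minimal sketch of an alternating training loop, not the paper's actual algorithm. It assumes a toy autoencoder stands in for the invertible manifold-defining transformation and a diagonal Gaussian stands in for the tractable density on the latent coordinates; all names (`encoder`, `decoder`, `manifold_opt`, `density_opt`) are hypothetical placeholders.

```python
# Sketch: alternating "manifold phase" / "density phase" updates.
# Assumption: a plain autoencoder approximates the chart onto the manifold,
# and a learnable diagonal Gaussian approximates the density in latent space.
import torch
import torch.nn as nn

data_dim, latent_dim = 8, 2
encoder = nn.Sequential(nn.Linear(data_dim, 32), nn.Tanh(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(), nn.Linear(32, data_dim))
mu = torch.zeros(latent_dim, requires_grad=True)        # density parameters
log_sigma = torch.zeros(latent_dim, requires_grad=True)

manifold_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
density_opt = torch.optim.Adam([mu, log_sigma], lr=1e-3)

x = torch.randn(256, data_dim)  # stand-in for training data

for step in range(1000):
    # Manifold phase: fit the chart by reconstruction error only.
    z = encoder(x)
    recon_loss = ((decoder(z) - x) ** 2).mean()
    manifold_opt.zero_grad()
    recon_loss.backward()
    manifold_opt.step()

    # Density phase: maximize likelihood of the (detached) latent codes,
    # so density updates do not move the manifold.
    z = encoder(x).detach()
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    nll = -dist.log_prob(z).sum(dim=-1).mean()
    density_opt.zero_grad()
    nll.backward()
    density_opt.step()
```

The key design point this sketch illustrates is the detachment between the two phases: reconstruction-style updates shape where the learned manifold lies, while the likelihood updates only fit the density over coordinates on that manifold, rather than letting maximum likelihood alone drive both.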