We formulate a new class of conditional generative models based on probability flows. Trained with maximum likelihood, these models support efficient inference and sampling from class-conditional distributions or the joint distribution, and they require no a priori knowledge of the number of classes or of the relationships between classes. This allows one to train generative models from multiple, heterogeneous datasets while retaining strong prior models over subsets of the data (e.g., from a single dataset, class label, or attribute). In this paper, in addition to end-to-end learning, we show how one can learn a single model from multiple datasets with a relatively weak Glow architecture, and then extend it by conditioning on different types of knowledge (e.g., a single dataset). This yields log-likelihoods comparable to the state of the art, and compelling samples from conditional priors.
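To make the modeling idea concrete, the following is a minimal sketch (not the paper's implementation) of the general recipe behind such conditional flows: a single invertible map shared across classes, paired with class-conditional priors in latent space, so that log p(x | y) decomposes via the change-of-variables formula. The flow here is a toy elementwise affine layer, and all names (`AffineFlow`, `class_conditional_loglik`, the prior parameters) are hypothetical.

```python
import numpy as np

class AffineFlow:
    """Toy invertible flow: z = a * x + b, applied elementwise.
    A real model (e.g., Glow) stacks many such invertible layers."""
    def __init__(self, a, b):
        self.a = np.asarray(a, dtype=float)
        self.b = np.asarray(b, dtype=float)

    def forward(self, x):
        z = self.a * x + self.b
        # log|det Jacobian| of the elementwise affine map
        log_det = np.sum(np.log(np.abs(self.a)))
        return z, log_det

def gaussian_logpdf(z, mu, sigma):
    """Log-density of a diagonal Gaussian N(mu, sigma^2)."""
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - 0.5 * ((z - mu) / sigma)**2)

def class_conditional_loglik(flow, x, y, priors):
    """log p(x | y) = log p_z(f(x) | y) + log|det df/dx|.
    The flow f is shared; only the latent prior depends on the class y."""
    z, log_det = flow.forward(x)
    mu, sigma = priors[y]
    return gaussian_logpdf(z, mu, sigma) + log_det

# One shared flow, two class-conditional latent priors (assumed parameters).
flow = AffineFlow(a=[2.0, 0.5], b=[0.0, 1.0])
priors = {0: (np.zeros(2), np.ones(2)),
          1: (np.full(2, 3.0), np.ones(2))}

x = np.array([0.1, -0.2])
ll0 = class_conditional_loglik(flow, x, 0, priors)  # x is near class 0's prior
ll1 = class_conditional_loglik(flow, x, 1, priors)
```

Because the classes differ only in their latent priors, a new class or dataset can be accommodated by adding a prior without retraining the shared flow, and the joint density follows as p(x) = Σ_y p(y) p(x | y).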