There has been much recent, exciting work on combining the complementary strengths of latent variable models and deep learning. Latent variable modeling makes it easy to explicitly specify model constraints through conditional independence properties, while deep learning makes it possible to parameterize these conditional likelihoods with powerful function approximators. While these "deep latent variable" models provide a rich, flexible framework for modeling many real-world phenomena, difficulties remain: deep parameterizations of conditional likelihoods usually make posterior inference intractable, and latent variable objectives often complicate backpropagation by introducing points of non-differentiability. This tutorial explores these issues in depth through the lens of variational inference.
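Since the exact posterior p(z|x) of a deep latent variable model is intractable, variational inference instead maximizes the evidence lower bound (ELBO), E_q[log p(x|z)] - KL(q(z|x) || p(z)). The sketch below, using a toy Gaussian model with illustrative names (`elbo_estimate`, `log_gaussian` are not from any particular library), shows a Monte Carlo ELBO estimate with the reparameterization trick, which sidesteps the non-differentiability of sampling for continuous latent variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gaussian(x, mu, sigma):
    """Log-density of a diagonal Gaussian, summed over dimensions."""
    return np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                  - 0.5 * ((x - mu) / sigma) ** 2)

def elbo_estimate(x, q_mu, q_sigma, n_samples=1000):
    """Monte Carlo estimate of the ELBO for a toy model with
    prior z ~ N(0, I) and likelihood x | z ~ N(z, I).
    The reparameterization z = mu + sigma * eps keeps the sample
    differentiable with respect to the variational parameters."""
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(q_mu.shape)
        z = q_mu + q_sigma * eps                       # reparameterized sample
        log_pz = log_gaussian(z, np.zeros_like(z), np.ones_like(z))   # prior
        log_px_given_z = log_gaussian(x, z, np.ones_like(z))          # likelihood
        log_qz = log_gaussian(z, q_mu, q_sigma)        # variational posterior
        total += log_px_given_z + log_pz - log_qz
    return total / n_samples

x = np.array([0.5, -0.3])
print(elbo_estimate(x, q_mu=np.array([0.2, -0.1]), q_sigma=np.array([0.8, 0.8])))
```

For discrete latent variables this trick does not apply, and one must fall back on score-function (REINFORCE-style) estimators or continuous relaxations, which is one of the backpropagation difficulties the tutorial addresses.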