We study approximation of probability measures supported on n-dimensional manifolds embedded in R^m by injective flows -- neural networks composed of invertible flow and one-layer injective components. When m <= 3n, we show that injective flows between R^n and R^m universally approximate measures supported on images of extendable embeddings, a proper subset of standard embeddings; in this regime, topological obstructions preclude certain knotted manifolds as admissible targets. When m >= 3n + 1, we use an argument from algebraic topology known as the *clean trick* to prove that these topological obstructions vanish and that injective flows universally approximate any differentiable embedding. Along the way, we show that optimality of an injective flow network can be established "in reverse," resolving a conjecture made in Brehmer and Cranmer (2020). Furthermore, the constructed networks can be made simple enough to admit additional properties, such as a novel projection result.
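As a toy illustration of the architecture described above -- an invertible flow component on R^n composed with a one-layer injective component into R^m -- the sketch below uses a standard construction in which a ReLU layer with weights stacked as [V; -V] is injective whenever V has full column rank. All dimensions, variable names, and the specific left inverse are hypothetical choices for this sketch, not the paper's construction; note that m = 8 >= 3n + 1 for n = 2, matching the regime in which no topological obstructions arise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 8          # m >= 3n + 1, the obstruction-free regime
k = m // 2

# Invertible "flow" component: a well-conditioned affine map on R^n
# (a stand-in for a composition of coupling layers).
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# One-layer injective component: relu([V; -V] z) is injective when V has
# full column rank, since Vz = relu(Vz) - relu(-Vz) can be recovered.
V = rng.standard_normal((k, n))
W = np.vstack([V, -V])

def injective_flow(x):
    z = A @ x + b                  # invertible flow part, R^n -> R^n
    return np.maximum(W @ z, 0.0)  # injective expansive part, R^n -> R^m

def left_inverse(y):
    h = y[:k] - y[k:]              # recovers Vz from the two ReLU halves
    z = np.linalg.pinv(V) @ h      # full column rank => pinv is a left inverse
    return np.linalg.solve(A, z - b)

x = rng.standard_normal(n)
y = injective_flow(x)
x_rec = left_inverse(y)
print(np.allclose(x_rec, x))       # exact recovery confirms injectivity
```

The round trip succeeding for arbitrary x is what makes the network a genuine embedding of R^n into R^m, the property the approximation results above are about.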