Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval, and super-resolution. Here, we assume that the unknown signal lies in the range of a pre-trained generative model. A popular approach for signal recovery is gradient descent in the low-dimensional latent space. While gradient descent has achieved good empirical performance, its theoretical behavior is not well understood. In this paper, we introduce the use of stochastic gradient Langevin dynamics (SGLD) for compressed sensing with a generative prior. Under mild assumptions on the generative model, we prove the convergence of SGLD to the true signal. We also demonstrate empirical performance competitive with standard gradient descent.
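To make the recovery procedure concrete, the following is a minimal sketch of SGLD-based signal recovery with a generative prior. Everything here is illustrative and not the paper's actual setup: the generator `G` is a toy one-layer ReLU network, the gradient is computed by finite differences to keep the sketch dependency-free, and the step size `eta` and inverse temperature `beta` are assumed values. The loss is the standard compressed-sensing objective $\|A\,G(z) - y\|_2^2$, and SGLD adds Gaussian noise of scale $\sqrt{2\eta/\beta}$ to each gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative model: a fixed random one-layer ReLU network
# G(z) mapping a k-dimensional latent code to an n-dimensional signal
# (a stand-in for a pre-trained generator).
k, n, m = 5, 100, 40
W1 = rng.normal(size=(n, k))

def G(z):
    return np.maximum(W1 @ z, 0.0)

# Compressed sensing setup: observe y = A G(z_true) through a random
# Gaussian measurement matrix A with m < n measurements.
A = rng.normal(size=(m, n)) / np.sqrt(m)
z_true = rng.normal(size=k)
y = A @ G(z_true)

def loss(z):
    return np.sum((A @ G(z) - y) ** 2)

def loss_grad(z, eps=1e-5):
    # Finite-difference gradient of f(z) = ||A G(z) - y||^2
    # (autodiff would be used in practice).
    f0, g = loss(z), np.zeros_like(z)
    for i in range(k):
        zp = z.copy()
        zp[i] += eps
        g[i] = (loss(zp) - f0) / eps
    return g

# SGLD: gradient step plus Gaussian noise scaled by sqrt(2 * eta / beta).
eta, beta = 1e-3, 1e4  # step size and inverse temperature (assumed values)
z = rng.normal(size=k)
init_loss = loss(z)
for _ in range(2000):
    z = z - eta * loss_grad(z) + np.sqrt(2 * eta / beta) * rng.normal(size=k)

final_loss = loss(z)
recovery_error = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
```

As `beta` grows, the noise term vanishes and the iteration reduces to plain gradient descent in the latent space; the injected noise is what allows the Langevin analysis to handle the non-convex loss landscape.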